Alert
April 12, 2023

US Artificial Intelligence Regulations: Watch List for 2023

An overview of the landscape for US regulation of AI technology

Companies are developing, deploying, and interacting with artificial intelligence (AI) technologies more than ever. At Goodwin, we are keeping a close eye on any regulations that may affect companies operating in this cutting-edge space.

For companies operating in Europe, the landscape is governed by a number of in-force and pending EU legislative acts, most notably the EU AI Act, which is expected to be passed later this year and was covered in our prior client alert here: EU Technology Regulation: Watch List for 2023 and Beyond. The United Kingdom has recently indicated that it may take a different approach, as discussed in our client alert on the proposed framework for AI regulation in the United Kingdom here: Overview of the UK Government’s AI White Paper.

For companies operating in the United States, the landscape of AI regulation remains less clear. To date, there has been no serious consideration of a US analog to the EU AI Act or any sweeping federal legislation to govern the use of AI, nor is there any substantial state legislation in force (although there are state privacy laws that may extend to AI systems that process certain types of personal data).

That said, we have recently seen certain preliminary and sector-specific activity that gives clues about how the US federal government is thinking about AI and how it may look to govern it in the future. Specifically, the National Institute of Standards and Technology (NIST), the Federal Trade Commission (FTC), and the Food and Drug Administration (FDA) have all provided recent guidance. This client alert reviews that activity and is important reading for any business implementing or planning to implement AI technologies in the United States.

NIST

On January 26, 2023, NIST, an agency of the US Department of Commerce, released its Artificial Intelligence Risk Management Framework 1.0 (the RMF) as a voluntary, non-sector-specific, use-case-agnostic guide to help technology companies that are designing, developing, deploying, or using AI systems manage the many risks of AI. Beyond risk management, the RMF seeks to promote trustworthy and responsible development and use of AI systems.

As the federal AI standards coordinator, NIST works with government and industry leaders both in the United States and internationally to develop technical standards that promote the adoption of AI, enumerated in the “Technical AI Standards” section of its website. In addition, Section 5301 of the National Defense Authorization Act for Fiscal Year 2021 directed NIST to develop a voluntary risk management framework for trustworthy AI systems, which became the RMF. Although the RMF is voluntary, it provides useful insight into the considerations the federal government is likely to take into account in any future regulation of AI, and, as it evolves, it could eventually be adopted as an industry standard. We summarize the key aspects below.

A key recognition in the RMF is that humans typically assume AI systems are objective and high-functioning. This assumption can inadvertently cause harm to people, communities, organizations, or broader ecosystems, including the environment. Enhancing the trustworthiness of an AI system can help mitigate the risk of such harm. The RMF identifies seven characteristics of a trustworthy AI system:

  1. Safe: providing real-time monitoring, backstops, or other interventions in the AI system to prevent physical or psychological harm, or endangerment of human life, health, or property
  2. Secure and resilient: employing protocols to avoid, protect against, or respond to attacks against the AI system, and withstanding adverse events
  3. Explainable and interpretable: understanding and properly contextualizing the mechanisms of an AI system as well as its output
  4. Privacy-enhanced: safeguarding human autonomy by protecting anonymity, confidentiality, and control
  5. Fair, with harmful bias managed: promoting equity and equality and managing systemic, computational and statistical, and human-cognitive biases
  6. Accountable and transparent: making information available about the AI system to individuals interacting with it at various stages of the AI life cycle and maintaining organizational practices and governance to reduce potential harms
  7. Valid and reliable: demonstrating through ongoing testing or monitoring that the AI system performs as intended

The RMF also notes that AI systems are subject to certain unique risks, such as the following:

  • The use of personal data may subject AI companies to state privacy laws or other enhanced privacy risks due to AI’s data aggregation capabilities
  • Training data sets may be subject to copyright protection
  • Data quality issues (including inaccurate, incomplete, or biased data) can affect the trustworthiness of AI systems
  • There is a lack of consensus on robust and verifiable measurement methods and metrics

The RMF outlines four core functions to employ throughout the AI system’s life cycle to manage risk and breaks each of them down into further subcategories. The RMF’s companion Playbook suggests action items to help companies implement these core functions, summarized below:

  1. Map: collect sufficient knowledge about an AI system to inform organizational decisions to design, develop, or deploy it
  2. Measure: implement testing, evaluations, verifications, and validation processes to inform management decisions
  3. Govern: develop an organizational culture that incorporates AI risk management in its policies and operations, effectively implements them, and encourages accountability and diversity, equity, and inclusion
  4. Manage: monitor and prioritize AI system risks and respond to and recover from risk incidents

FTC

In addition to NIST’s release of the RMF, there has been some recent guidance from other bodies within the federal government. For example, the FTC has suggested that it may soon increase its scrutiny of businesses that use AI. Notably, the FTC has recently issued various blog posts warning businesses to avoid unfair or misleading practices, including “Keep your AI claims in check” and “Chatbots, deepfakes, and voice clones: AI deception for sale.”

FDA

For companies interested in using AI technologies for healthcare-related decision-making, the FDA has also announced its intention to regulate many AI-powered clinical decision support tools as devices. More information on those regulations can be found in our prior client alert available here: FDA Issues Final Clinical Decision Support Software Guidance.

While the recent actions from NIST, the FTC, and the FDA detailed above provide some breadcrumbs as to what future US AI regulation may look like, there is no question that, at the moment, there are few hard-and-fast rules that US AI companies can look to in order to guide their conduct. It seems inevitable that regulation in some form will eventually emerge, but when that will occur is anybody’s guess. Goodwin will continue to follow developments and publish updates as they become available.

UPDATE: On April 13, 2023, the day after this alert was initially published, reports surfaced that US Senator Chuck Schumer is leading a congressional effort to establish US regulations on AI. Reports indicated that Schumer has developed a framework for regulation that is currently being shared with industry experts and refined with their input. Few details of the framework were initially available, but reports indicate that the regulations will focus on four guardrails: (1) identification of who trained the algorithm and who its intended audience is, (2) disclosure of its data source, (3) an explanation of how it arrives at its responses, and (4) transparent and strong ethical boundaries. (See: Scoop: Schumer lays groundwork for Congress to regulate AI (axios.com).) There is no clear timeline yet for when this framework may become established law, or whether that will occur at all, but Goodwin will continue to track developments and publish alerts as they become available.


This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee a similar outcome.