On September 29, 2025, California Governor Gavin Newsom signed into law the first frontier artificial intelligence (AI) safety legislation in the nation, Senate Bill (SB) 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). SB 53 will take effect January 1, 2026. Coming on the heels of 2024's SB 1047, a stricter predecessor bill that passed the California legislature but was vetoed by the governor, SB 53 establishes state-level oversight of the use, assessment, and governance of advanced AI systems by requiring the most advanced AI developers to publish AI safety frameworks and transparency reports, share their catastrophic risk assessments, and report critical safety incidents, and by extending whistleblower protections to their employees. Through this state-level oversight, SB 53 seeks to create greater transparency and accountability for the most advanced AI developers and to ensure they are rigorously assessing, mitigating, and disclosing catastrophic risks before deploying their advanced AI models at scale.
SB 1047 would have also required advanced AI developers to build “kill switches” into their AI models, conduct safety testing, and comply with certain audit requirements. However, Governor Newsom vetoed the bill in September 2024 due to concerns that these requirements would stymie innovation by the state’s leading AI companies.
The passage of SB 53 in California arrives against a contentious backdrop, in which the federal government is intent on stepping on the gas pedal of AI innovation even as some states are hitting the brakes. In its July 2025 AI Action Plan, the Trump administration directed federal agencies with AI-related discretionary funding programs to limit funding to states with a regulatory climate that “may hinder the effectiveness of that funding or award.” It remains to be seen how that mandate will play out in connection with California’s TFAIA.
Whom Does SB 53 Cover?
SB 53 applies to “frontier developers,” which SB 53 defines as people who have “trained, or initiated the training of, a frontier model.” A “frontier model” is a foundation model trained using more than 10^26 integer or floating-point operations of computing power. A “foundation model” is an AI model “trained on a broad set of data […] designed for generality of output […] and adaptable to a wide range of distinctive tasks.” Together, these definitions target the largest AI companies for compliance.
SB 53 imposes additional obligations on “large frontier developers,” which are frontier developers with annual gross revenues of more than $500 million.
The California Department of Technology is tasked with submitting recommendations annually to the legislature about whether and how to update the definitions of “frontier model,” “frontier developer,” and “large frontier developer” so “they accurately reflect technological developments, scientific literature, and widely accepted national and international standards” and continue to cover models and developers at the frontier of AI development, along with well-resourced frontier developers.
In making these recommendations, the Department of Technology is to consider the following:
- “Similar thresholds used in international standards or federal law, guidance, or regulations” for catastrophic risk management.
- “Input from stakeholders, including academics, industry, the open-source community, and governmental entities.”
- “The extent to which a person [can] determine” if they are a frontier developer or a large frontier developer.
- “The complexity of determining whether a person or foundation model is covered.”
- “The external verifiability of determining whether a person or foundation model is covered.”
What Is “Catastrophic Risk”?
SB 53 is particularly focused on mitigating catastrophic risk — as advanced AI models develop increasingly advanced capabilities and begin to be entrusted with control over essential, high-risk systems — rather than more immediate risks such as bias, misinformation, or privacy concerns.
SB 53 defines “catastrophic risk” as “a foreseeable and material risk that a frontier developer’s” use “of a frontier model” could cause “the death of, or serious injury to, more than 50 people,” or more than $1 billion in property damage, in a single incident. This includes scenarios such as the model aiding in the creation or release of chemical, biological, radiological, or nuclear weapons, engaging in cyberattacks or serious crimes without “meaningful human oversight,” or evading developer control. However, risks from publicly accessible information, lawful federal government activity, or harm when the model did not materially contribute are excluded from this definition.
What Are Frontier Developers Required to Do Under SB 53?
Transparency Reports
SB 53 requires a frontier developer “deploying a new frontier model or a substantially modified version of an existing frontier model” to publish a transparency report on its website that includes a mechanism for users to communicate with the frontier developer, the model’s release date, the languages and modalities of output it supports, its intended uses, and any generally applicable restrictions or conditions on its use.
In addition to the transparency report information, a large frontier developer must include summaries of assessments of catastrophic risks conducted as part of the large frontier developer’s frontier AI framework, the assessments’ results, descriptions of third-party involvement, and other steps taken to fulfill the requirements of the frontier AI framework.
Frontier developers can publish the transparency report’s required information as part of a larger document such as a system card or model card.
Critical Safety Incident Reporting
SB 53 also tasks the California Governor’s Office of Emergency Services (Cal OES) with establishing a mechanism for frontier developers and the public to report critical safety incidents. Critical safety incidents include the “unauthorized access to, modification of, or exfiltration of model weights […] that results in death, bodily injury,” or harm resulting from a catastrophic risk; “loss of control of a foundation model causing death or bodily injury”; and the use of deceptive techniques by a frontier model to subvert the controls or monitoring of its developer “in a manner that demonstrates materially increased catastrophic risk.”
Critical safety incident reports will include:
- The date of the critical safety incident.
- The reasons the incident qualifies as a critical safety incident.
- A short and plain statement describing the critical safety incident.
- Whether the incident was associated with internal use of a frontier model. (SB 53)
Frontier developers must report any critical safety incidents to Cal OES within 15 days of discovery or within 24 hours if “a critical safety incident poses an imminent risk of death or serious physical injury.”
Cal OES is responsible for reviewing these critical safety incident reports submitted by frontier developers and members of the public. The attorney general or Cal OES may share these critical safety incident reports with the legislature, the governor, the federal government, or appropriate state agencies but must consider any risks related to the frontier developer’s trade secrets or cybersecurity, public safety, or national security. Additionally, beginning January 1, 2027, Cal OES is required to produce an annual report with anonymized and aggregated information about the reported critical safety incidents that does not include any information “that would compromise the trade secrets or cybersecurity of a frontier developer, confidentiality of a covered employee, public safety, or the national security of the United States or that would be prohibited by any federal or state law.”
Whistleblower Protections
SB 53 creates several protections for whistleblowers who report “that a frontier developer’s activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated the TFAIA.”
Anti-Retaliation
A frontier developer cannot adopt or enforce a rule, regulation, policy, or contract “that prevents a covered employee, as defined, from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee […] if the covered employee has reasonable cause to believe that […] the frontier developer’s activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer has violated the TFAIA.”
Notice of Covered Employees’ Rights and Responsibilities
A frontier developer must also “provide a clear notice to all covered employees of their rights and responsibilities” under SB 53 either by displaying a public notice in the workplace or providing written notice at least once each year, ensuring that covered employees have received and acknowledged the written notice. A frontier developer must also ensure that new covered employees and those working remotely periodically receive an equivalent notice.
Internal Disclosure Process
A large frontier developer must provide a reasonable internal process for a covered employee to “anonymously disclose information to the large frontier developer if the covered employee believes in good faith that the […] large frontier developer’s activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated” the TFAIA.
The internal process must also include monthly updates, “regarding the status of the large frontier developer’s investigation of the disclosure and the actions taken by the large frontier developer in response to the disclosure,” to the person who made the disclosure. The disclosures and responses of the process must be shared quarterly with the large frontier developer’s officers and directors except when “a covered employee has alleged wrongdoing by an officer or director.”
Enforcement
A covered employee may bring a civil action against the frontier developer if the covered employee can demonstrate “by a preponderance of the evidence” that the large frontier developer engaged in a proscribed activity that “was a contributing factor in the alleged prohibited action against the covered employee.” The frontier developer has “the burden of proof to demonstrate by clear and convincing evidence that the alleged action would have occurred for legitimate, independent reasons even if the covered employee had not engaged in activities protected by this section.” The court is authorized to award reasonable attorney fees to a plaintiff who brings a successful action and can grant temporary or preliminary injunctive relief if “reasonable cause exists to believe a violation has occurred.”
SB 53 also tasks the attorney general with producing an annual report beginning January 1, 2027, with “anonymized and aggregated information about reports from covered employees” that does not include any information that “would compromise the trade secrets or cybersecurity of a frontier developer, confidentiality of a covered employee, public safety, or the national security of the United States or that would be prohibited by any federal or state law.”
What Additional Obligations Does SB 53 Impose on Large Frontier Developers?
Frontier AI Frameworks
SB 53 requires a large frontier developer to create, implement, comply with, and publish on its website a frontier AI framework that applies to its frontier models. The frontier AI framework must describe how the company approaches the following:
- Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework.
- Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk. […]
- Applying mitigations to address the potential for catastrophic risks […].
- Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally.
- Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks.
- Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures […].
- Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties.
- Identifying and responding to critical safety incidents.
- Instituting internal governance practices to ensure implementation of these processes.
- Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms. (SB 53)
Large frontier developers must review and update their frontier AI frameworks at least once a year. However, “if a large frontier developer makes a material modification to its frontier AI framework,” it must publicly “publish the modified frontier AI framework and a justification for that modification within 30 days.”
Catastrophic Risk Assessments
SB 53 tasks Cal OES with establishing a mechanism for large frontier developers “to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use” of their frontier models. Large frontier developers are required to submit these summaries to Cal OES every three months. Large frontier developers are permitted to redact these summaries to protect their trade secrets and cybersecurity as well as public safety or national security or to comply with any federal or state law. Additionally, Cal OES must take all the necessary precautions to ensure access to these reports is on a need-to-know basis and protect the reports from unauthorized access.
SB 53 Enforcement
If a large frontier developer fails to publish, transmit, or report information required under SB 53; makes a materially false or misleading statement about catastrophic risk from its frontier models, its management of catastrophic risk, or its implementation of or compliance with its frontier AI framework; or fails to comply with its own frontier AI framework, the attorney general can bring a civil action against the large frontier developer for civil penalties of up to $1 million per violation, depending on the severity of the violation.
Conclusion
California’s SB 53 marks a significant step forward in the regulation and oversight of advanced AI models. By establishing robust transparency, safety, and whistleblower protections, the law sets a new standard for responsible AI development and deployment by the developers of the largest, most impactful models, and a national benchmark for AI governance and risk mitigation.
This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.
Contacts
- Omer Tene, Partner
- Bethany P. Withers, Partner; Chair, AI & Machine Learning
- Tayjus Surampudi, Associate