Visibility into AI’s downside is on the upswing in corporations’ annual SEC filings. Seventy-six percent of S&P 500 companies added or expanded descriptions of AI as a material risk in their 2025 annual disclosure filings, according to an Autonomy Institute AI risk disclosure report.
Three years into the generative AI revolution, AI risk disclosures are in the spotlight. “Companies risk being the outlier if not mentioning AI in filings,” Goodwin partner Kaitlin Betancourt told the Cybersecurity Law Report.
Betancourt commented on the top AI concerns as well as steps companies should take to plan for AI risk disclosures. Among top AI concerns across the business world, “the cyber threat is really front and center” for executive teams, she observed. “I’ve heard AI described as an arms race, but I see two races. One is the competitive race among businesses. The other race is good versus evil, with nefarious actors absolutely capitalizing on AI” for more ways to attack, she added.
As for preparatory measures to take in advance of disclosures, she noted that “the infrastructure to manage AI adoption holistically and support disclosure statements can be difficult to put in place.” Disclosures, she emphasized, should be supported by a comprehensive AI governance program: “If the proper infrastructure is put around AI usage and there are processes and checks and balances, then issues are more likely to be flagged for a risk factor and fleshed out.” She recommended that companies establish an underlying AI governance program and stressed that buy-in from executive leadership is important in driving such a program forward.
The Cybersecurity Law Report article and Betancourt’s insights offer practical takeaways for companies evaluating the sufficiency of their public disclosures.