On December 9, 2025, the Financial Industry Regulatory Authority (FINRA) released its “2026 FINRA Annual Regulatory Oversight Report” (the Report), spotlighting generative artificial intelligence (AI) and cybersecurity risks as key areas of concern for 2026. In this alert, we unpack the Report and offer insights for firms seeking to capture the benefits of new technologies while avoiding compliance and enforcement pitfalls.
The Report, which details findings from FINRA’s regulatory operations, serves as a key signal to member firms of how FINRA will focus its resources during the upcoming exam cycle. The Report significantly expands FINRA’s discussion of risks related to its member firms’ implementation of generative AI compared with prior years. While last year’s report addressed generative AI at a high level, this year’s devotes an entire section to generative AI–related risk. FINRA presents this guidance alongside its discussion of emerging cybersecurity risks and recommended cybersecurity best practices. In doing so, the Report indicates the potential for increased FINRA supervision of its member firms’ management of these interrelated risk areas.
While reaffirming FINRA’s commitment to technology neutrality, the Report categorizes the generative AI use cases most commonly observed among member firms, offers recommendations on generative AI governance, and addresses newly emerging risks, such as the use of AI agents and generative AI–enhanced cyberattacks. Throughout the Report, FINRA emphasizes the importance of continued human involvement and oversight in managing generative AI–related risks.
Governance and Supervision
The Report emphasizes that member firms must address compliance with applicable laws and regulations, including FINRA rules, before testing and deploying generative AI within their business environments. To meet this expectation, FINRA recommends that member firms develop a well-documented generative AI risk management program. Importantly, member firms should recognize that the use of generative AI does not occur in a vacuum; it intersects with other key risk areas addressed in the Report, including cybersecurity and vendor due diligence. Per the Report, compliant generative AI programs should include:
- Clear policies and procedures governing the development, implementation, use, and monitoring of generative AI, supported by comprehensive documentation of these efforts.
- Supervisory processes to address enterprise-level generative AI use, including processes to identify and mitigate AI-related risks such as hallucinations and bias. These supervisory processes should also address whether the member firm’s existing cybersecurity governance program adequately assesses security risks arising from both firm and vendor use of generative AI, as well as how threat actors might use generative AI against the firm.
- Generative AI–focused vendor diligence, including assessments of how third-party vendors use generative AI as part of the member firm’s vendor due diligence process. FINRA also recommends ensuring that vendor contracts comply with regulatory obligations related to generative AI use, such as prohibiting the ingestion of sensitive data into open-source generative AI tools.
- Formal review and approval processes, developed in coordination with business and technology experts, to evaluate generative AI opportunities and to implement the controls needed to manage unique generative AI–related risks.
- Robust testing of generative AI models to understand their capabilities and limitations and assess and mitigate risks related to data privacy, integrity, reliability, and accuracy.
- Ongoing monitoring of prompts, responses, and outputs to affirm that deployed generative AI tools operate as intended and in a compliant manner (e.g., prompt and output logging, model version tracking, and human-in-the-loop validation).
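To make the monitoring item above concrete, the sketch below shows one way a firm’s technology team might implement prompt and output logging, model version tracking, and a human-in-the-loop review flag. This is a minimal, illustrative Python example under assumptions of our own, not guidance from the Report; the model identifier, field names, and review logic are all hypothetical.

```python
# Minimal, hypothetical sketch of the monitoring controls described above:
# prompt/output logging, model version tracking, and a human-in-the-loop
# review flag. All identifiers here are illustrative, not from the Report.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("genai_monitoring")

MODEL_VERSION = "internal-llm-2026.01"  # hypothetical model version identifier

def log_interaction(user_id: str, prompt: str, output: str,
                    needs_human_review: bool) -> None:
    """Record one generative AI interaction as a structured, auditable entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,  # ties each output to the model that produced it
        "user_id": user_id,
        "prompt": prompt,
        "output": output,
        # Flags outputs a human must validate before use, supporting the
        # human-in-the-loop oversight the Report emphasizes.
        "needs_human_review": needs_human_review,
    }
    logger.info(json.dumps(record))

# Example: flag a client-facing draft so a person reviews it before it is sent.
log_interaction(
    user_id="rep-0042",
    prompt="Summarize the attached account statement for the client.",
    output="Your portfolio gained 4.2% this quarter...",
    needs_human_review=True,
)
```

In practice, a firm would route these structured records to its books-and-records and surveillance systems rather than a local log, but the core idea is the same: every prompt, output, and model version is captured in a reviewable form.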
Trending Use Cases for Generative AI
The Report outlines FINRA’s categorization of generative AI use cases observed among member firms. FINRA explains that the following categories are intended to establish a shared terminology:
- Summarization and Information Extraction: The use of generative AI to condense “large volumes of text” and extract “specific entities, relationships, or key information from unstructured documents.”
- Conversational AI and Question Answering: The use of generative AI to provide “interactive, natural language responses to user queries through chatbots, virtual assistants, and voice interfaces.”
- Sentiment Analysis: The use of generative AI to “assess the tone of text as positive, neutral, or negative.”
- Translation: The use of generative AI to translate “text between supported languages” and “convert audio to text or vice versa.”
- Content Generation and Drafting: The use of generative AI to create “a variety of written content, including documents, reports, marketing materials, and other resources.”
- Classification and Categorization: The use of generative AI to “automatically sort, label, and organiz[e] data, documents, or transactions into predefined categories or groups.”
- Workflow Automation and Process Intelligence: The use of generative AI to optimize “business processes through intelligent routing, automation, and agents.”
- Coding: The use of generative AI to generate “functional code for specified inputs and output objectives.”
- Query: The use of generative AI to retrieve “results from structured databases with natural language.”
- Synthetic Data Generation: The use of generative AI to create “artificial datasets that resemble real-world data but are generated by computer algorithms or models rather than being collected from actual observations or measurements.”
- Personalization and Recommendation: The use of generative AI to tailor “products, services, or content to customer preferences and circumstances.”
- Analysis and Pattern Recognition: The use of generative AI to identify “trends, correlations, or anomalies in complex datasets to generate insights and predictions.” This includes the use of generative AI for “the identification of threat activity of adversaries.”
- Data Transformation: The use of generative AI to convert “unstructured data into standardized formats.”
- Modeling and Simulation: The use of generative AI to develop automated “financial modeling, forecasting, scenario creation, and simulations.”
The Report observes a trend of member firms implementing generative AI solutions to improve efficiency, particularly for internal processes and information retrieval. Consistent with this trend, FINRA identifies “summarization and information extraction” as the top observed generative AI use case among member firms.
Newly Emerging Risks Related to Generative AI Agents
FINRA also notes the expanding use of AI agents by member firms and the associated risks. The Report defines AI agents as “systems or programs that are capable of autonomously performing and completing tasks on behalf of a user.” FINRA highlights several risks associated with these agents, including the possibility that they may operate beyond their intended scope or authority, rely on complex processes that are difficult to audit, lack transparency, mishandle sensitive information, or lack sufficient domain expertise for complex tasks. FINRA recommends that member firms exploring the use of AI agents carefully assess these risks and implement appropriate mitigation measures.
Generative AI–Enabled Cyberattacks
The Report highlights generative AI–enabled fraud as a growing threat to firms, describing it as threat actors’ exploitation of generative AI to enhance their capabilities for cyber-related crimes. According to the Report, observed examples of generative AI–enabled fraud include the development of:
- fake content, such as deepfakes, spoofed websites, and fraudulent documents;
- polymorphic malware that can evade detection by security programs; and
- malicious tools that threat actors lacking deep knowledge or skills can leverage to conduct cyberattacks and cyber-enabled scams.
The Report further observes that threat actors use generative AI to facilitate account takeovers and business email compromise attacks. These activities include using generative AI to gather intelligence on targets, generate personalized phishing messages, clone investor voices, create deepfake images, and produce fraudulent documents.
Takeaways
The Report signals that FINRA has sharpened its focus on generative AI and cybersecurity risks and expects member firms to address them proactively through governance and risk management program enhancements. Additional key takeaways for member firms are:
- FINRA seeks to establish a shared terminology for generative AI use cases, reflecting increased supervisory attention and standardization.
- Threat actors are using generative AI to increase the sophistication of cyberattacks, reinforcing the need to integrate generative AI risk into existing cybersecurity programs.
- Member firms should implement formal generative AI governance and risk management programs, and should ensure these programs are in place before deploying generative AI tools.
- Human oversight remains central to generative AI risk management, particularly for decision-making, output validation, and accountability.
- Member firms should assess generative AI risks associated with third-party vendors, in addition to their own use, and update vendor agreements as needed to ensure regulatory compliance.
FINRA’s focus on generative AI and cybersecurity risks, as well as the supervisory and governance responsibilities that organizations take on when adopting new technologies, is aligned with a growing body of guidance from regulators and standard-setting bodies. With expectations rising, firms are well served to review and mature their existing programs to factor in emerging risks.
This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.
Contacts
- Kaitlin Betancourt, Partner
- Jonathan H. Hecht, Partner
- L. Judson Welle, Partner
- Bethany P. Withers, Partner; Chair, AI & Machine Learning
- Christopher Grobbel, Counsel
- Jacob T. Lee, Associate