Artificial intelligence (AI) and algorithmic pricing tools are transforming competition law enforcement at an unprecedented pace. What competition authorities once dismissed as theoretical concerns have become the focus of major enforcement actions across the US, EU, and UK, resulting in fines valued in the hundreds of millions.
For companies deploying algorithmic pricing systems, the message is clear: Traditional antitrust principles apply with full force to AI-driven conduct. Enforcement actions demonstrate that authorities will pursue algorithmic collusion, algorithmic resale price maintenance, and algorithmic self-preferencing as aggressively as their analogue predecessors. At the same time, a new frontier is emerging around AI-powered compliance tools that may inadvertently create additional regulatory exposure.
This alert examines recent enforcement trends, analyses how established legal principles apply to algorithmic conduct, and provides practical guidance for companies navigating this rapidly evolving landscape.
The Evolution of Algorithmic Pricing: Market Transparency and Coordination Risk
Digital markets enable unprecedented price transparency. Real-time price monitoring that once required manual effort now occurs automatically through web scraping and data aggregation. While economic theory suggests transparency should benefit consumers through easier price comparison, empirical evidence reveals a more complex reality.
Research on German and Chilean fuel markets found that algorithmic pricing adoption correlated with 9% margin increases, suggesting that transparency facilitated tacit coordination rather than intensified competition. When all competitors can instantly observe and respond to each other’s pricing moves, the result may be algorithmic parallelism that harms consumers while remaining facially independent.
Algorithmic pricing cases fall into three distinct scenarios, each presenting different legal challenges.
First, traditional cartels may use algorithms to implement preexisting human agreements to fix prices. Here, the algorithm serves merely as a tool for executing a conspiracy; the criminal intent remains decidedly human. This scenario fits comfortably within existing legal frameworks because the core requirement of agreement or concerted practice is clearly satisfied.
Second, hub-and-spoke coordination can emerge when multiple competitors unknowingly coordinate by using the same algorithmic pricing service. When competitors effectively delegate pricing decisions to a common algorithm, coordination may emerge without direct communication among the competitors themselves. Similar dynamics appear in rideshare platforms, which algorithmically set fares for all drivers, and in third-party repricing software used by multiple e-commerce sellers.
Third, autonomous algorithmic collusion occurs when AI systems trained merely to maximise profits independently discover that coordination produces better outcomes than competition. Through reinforcement learning, algorithms can converge on supracompetitive prices without any human instruction to collude. This scenario presents a fundamental challenge to competition law, which has historically required some form of agreement or concerted practice between competitors.
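The mechanism can be made concrete with a deliberately simplified sketch (not the models used in the research noted above; the price grid, demand rule, and learning parameters are all hypothetical): two sellers each run a basic Q-learning loop, observing only the rival's last posted price and their own profit. Nothing in the code instructs them to coordinate; any softening of competition emerges, if at all, from the learning dynamics.

```python
import random

# Toy illustration only: two sellers learn prices from their own profit
# signal. No rule anywhere tells them to cooperate or match each other.

PRICES = [1.0, 1.5, 2.0, 2.5]   # hypothetical price grid
COST = 1.0                       # assumed marginal cost

def demand(own, rival):
    # Simple stylised demand: the cheaper seller captures most of the market.
    if own < rival:
        return 1.0
    if own == rival:
        return 0.5
    return 0.2

def profit(own, rival):
    return (own - COST) * demand(own, rival)

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Each seller keeps a Q-value per (rival's last price, own price) pair.
    q = [{(s, a): 0.0 for s in PRICES for a in PRICES} for _ in range(2)]
    last = [rng.choice(PRICES), rng.choice(PRICES)]
    for _ in range(episodes):
        acts = []
        for i in (0, 1):
            state = last[1 - i]          # only the rival's last posted price
            if rng.random() < eps:       # occasional exploration
                a = rng.choice(PRICES)
            else:                        # otherwise exploit learned values
                a = max(PRICES, key=lambda p: q[i][(state, p)])
            acts.append(a)
        for i in (0, 1):
            r = profit(acts[i], acts[1 - i])
            key = (last[1 - i], acts[i])
            q[i][key] += alpha * (r - q[i][key])  # myopic update, for brevity
        last = acts
    return last

final_prices = train()
print(final_prices)
```

In published experiments of this kind, richer variants of such agents have been observed to sustain prices above competitive levels, which is precisely the conduct that strains the agreement requirement.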
Lessons From Recent Cases
UK: Amazon Marketplace Sellers
In 2016, the UK Competition and Markets Authority (CMA) pursued two sellers of celebrity posters on Amazon’s UK marketplace. The companies had agreed not to undercut each other and programmed their repricing software to implement this arrangement, specifically instructing their algorithms to undercut all rivals except one another. When the software occasionally malfunctioned, employees exchanged emails that made the coordination explicit: “Presume your software is broken, so had to remove you from ignore list,” complained one employee. “You just switching ignore each time is not doing either of us any good …” read another message.
The case demonstrates that algorithmic implementation provides no shield from liability when underlying human agreement exists. More critically, for compliance purposes, it shows that even companies using sophisticated automation resort to human communications when systems malfunction, creating the documentary evidence that remains central to enforcement. Companies deploying repricing software must ensure employees understand that discussions about competitor treatment create significant antitrust exposure.
EU: Consumer Electronics Manufacturers
The European Commission and Dutch competition authority brought multiple cases against major consumer electronics manufacturers including Asus, Denon & Marantz, Philips, Pioneer, and Samsung. These companies had deployed sophisticated monitoring software that continuously tracked online retailer pricing. When retailers discounted items below recommended levels, manufacturers intervened with alleged threats or sanctions to force prices upward.
The European Commission extracted more than 110 million euros in fines, while the Dutch authority imposed 40 million euros on Samsung alone. The cases establish that monitoring technology combined with enforcement against discounters constitutes resale price maintenance regardless of the algorithmic nature of the monitoring. Moreover, because retailers themselves often use algorithms to automatically match competitor prices, restrictions imposed on one low-price retailer cascade throughout the market, amplifying anticompetitive effects.
These cases carry particular significance for manufacturers in digital distribution channels. The fact that price monitoring occurs through automated web scraping rather than manual surveillance does not insulate the conduct from scrutiny. If the monitoring feeds into a system of maintaining prices through pressure on retailers, the full apparatus becomes suspect.
EU: Google Search
Perhaps no case better illustrates algorithmic liability for dominant firms than the European Commission’s Google Search decision, which resulted in a 2.4 billion euro fine subsequently upheld by EU courts. At the heart of the case was Google’s algorithm for ranking search results, which systematically demoted competing comparison shopping services while favouring Google’s own service.
The statistics were stark: Users clicked first-page results 95% of the time, meaning algorithmic demotion to page four constituted a commercial death sentence. The case established that dominant firms bear full responsibility for how their algorithms affect competition. Algorithmic decision-making does not eliminate liability for self-preferencing or exclusionary conduct. When algorithms shape consumer choice and market outcomes, their design and implementation must comply with competition law.
How Competition Law Applies to Algorithms
Article 101 TFEU: The Agreement Requirement
European competition law prohibits agreements and concerted practices that restrict competition. The European Court of Justice has consistently held that economic operators must independently determine their conduct on the market, precluding direct or indirect contact that influences competitive behaviour.
The Eturas case, which involved a Lithuanian travel booking platform that sent messages to travel agencies about discount caps, established the framework for proving algorithmic coordination. Competition authorities can establish a concerted practice when objective evidence demonstrates that parties were aware of a common system, tacitly assented to it, and failed to distance themselves publicly. Once the authority meets its initial burden, the evidential presumption shifts to accused parties to demonstrate innocence through systematic deviation from the coordination or public objection to it. Eturas is also instructive in redirecting analytical focus toward the design of the system itself: When a pricing rule is built into the technical infrastructure of a platform, the central legal inquiry becomes less about whether competitors exchanged information and more about whether the architecture as a whole serves to enforce a shared commercial constraint.
This framework works reasonably well for hub-and-spoke scenarios in which competitors knowingly adopt common algorithmic tools. However, it creates a significant enforcement gap for autonomous algorithmic collusion. When AI systems independently converge on supracompetitive prices through reinforcement learning, without any human agreement or even awareness of coordination, Article 101 of the Treaty on the Functioning of the European Union (TFEU) finds no purchase. This represents the classic oligopoly problem in digital form: parallel conduct that harms consumers but cannot be classified as concertation. The legal framework, conceived in an era of smoke-filled rooms and handwritten notes, strains to reach conduct occurring entirely within silicon circuits.
Article 102 TFEU and the Digital Markets Act
For dominant firms, Article 102 TFEU prohibits abusing market power through exclusionary or exploitative conduct. The Google Search case confirms this applies fully to algorithmic self-preferencing and discriminatory ranking. The newly effective Digital Markets Act reinforces these obligations, requiring designated “gatekeepers” to apply transparent, fair, and nondiscriminatory conditions to ranking and refrain from self-preferencing.
An emerging question concerns AI-driven personalised pricing. While Article 102(c) TFEU prohibits price discrimination, that provision has historically addressed discrimination among trading partners rather than between end consumers. Moreover, Article 102 TFEU requires dominance, which most firms deploying personalised pricing lack. Other regulatory instruments including the General Data Protection Regulation, the Digital Services Act, the Consumer Rights Directive (as amended by the Omnibus Directive (EU) 2019/2161), and the newly adopted AI Act fill some gaps, but their interplay remains uncertain. Companies implementing personalised pricing should monitor guidance as these frameworks develop.
US Antitrust Framework
US authorities apply similar principles under the Sherman Antitrust Act. Section 1 requires agreement or conspiracy, with standards that have traditionally demanded more explicit coordination evidence than the EU "concerted practice" doctrine. The RealPage litigation marks a significant articulation of US enforcement theory. The US Department of Justice alleged that RealPage's revenue management software relied on competitors' nonpublic, competitively sensitive data and incorporated features designed to limit price decreases and align pricing among rival landlords. The proposed final judgment required RealPage to stop using competitors' nonpublic data in runtime pricing, remove or redesign features that aligned prices or limited downward movement, and accept ongoing monitoring to ensure restored pricing independence.
Together, these measures underscore a central insight: Algorithmic coordination becomes anticompetitive not because price recommendations are generated, but because shared systems can implement pricing outcomes in ways that substitute for independent competitive judgment.
Recent case law points to a practical dividing line for Section 1 analysis. The critical question is not whether competitors use shared data or algorithms in the abstract, but whether the software architecture operationalises coordination by, for example, ingesting competitors' nonpublic, current data or embedding design features that steer users toward aligned pricing. When a platform centralises sensitive inputs and standardises outputs, enforcers increasingly view it as a functional coordination hub. By contrast, when tools offer nonbinding, overridable recommendations and do not commingle rivals' nonpublic data, courts have been far less receptive to Section 1 theories at the pleadings stage.
Section 2 addresses unilateral conduct by monopolists and applies to algorithmic self-preferencing and exclusion. However, US monopolisation standards generally require both monopoly power and exclusionary conduct, creating a higher bar than EU dominance standards. The Federal Trade Commission has signalled increased attention to algorithmic pricing and personalisation in its ongoing rulemaking and enforcement agenda.
Emerging Risk: AI-Powered Compliance Tools and Evidence Avoidance
A new category of legal technology tools has emerged that uses AI to scan outbound business communications and flag language that might later constitute competition law evidence. These systems analyse draft emails and messages in real time, alerting senders when content references competitor discussions, pricing coordination, market allocation, or other antitrust red flags. The technology enables senders to rephrase potentially problematic language before communications leave the organisation.
Research confirms these systems work effectively. Even freely available language models can identify communications that create antitrust risk and suggest sanitised alternatives. The technology is neither expensive nor difficult to deploy, making it accessible to organisations of all sizes.
Email evidence has been central to virtually every major cartel prosecution over the past two decades. From London Interbank Offered Rate manipulation to truck cartel cases to allegations against technology companies, the documentary record has provided the factual foundation for enforcement. If AI systems systematically eliminate incriminating evidence before it is created, the implications cascade throughout the enforcement ecosystem.
Discovery costs multiply as authorities must deploy more resource-intensive investigative techniques to compensate for vanished documentary evidence. Dawn raids, witness interviews, forensic analysis of deleted files, and economic modelling become necessary in cases that previously would have turned on clear email trails. Facts become harder to establish when the documentary record has been systematically cleansed of candour. Economic theory must increasingly substitute for direct evidence, turning every case into a battle between competing expert witnesses whose models and assumptions can be endlessly debated. Enforcement becomes prohibitively expensive except for the most egregious violations, while marginal anticompetitive conduct proliferates unchecked.
This is not mere speculation but the logical end point of technology whose explicit purpose is to help companies avoid leaving evidence of their conduct.
Reputational and Practical Risks
Beyond regulatory exposure, companies adopting evidence-avoidance tools could face significant reputational risks. When tool usage becomes public through litigation discovery, whistleblower disclosure, or journalistic investigation, the optics are decidedly poor. The revelation that a company systematically deployed AI to scrub its communications could create an inference of wrongdoing that may prove more damaging than whatever underlying conduct the tools were meant to obscure.
Moreover, these tools may prove less effective than anticipated. Sophisticated forensic analysis can often detect when communications have been systematically sanitised. Patterns in language, metadata anomalies, and testimony from employees about tool usage can reveal evidence suppression even when the underlying communications appear innocuous. The attempted cover-up may ultimately create more problems than it solves.
Emerging International Enforcement Landscape
The enforcement trajectory is already visible across multiple jurisdictions. Within the EU, national competition authorities are contributing their own perspectives, both through policy initiatives and enforcement action.
In early 2026, the Portuguese Autoridade da Concorrência published a paper examining competition issues associated with access to chips for training and running AI models, highlighting infrastructure-level concerns that sit upstream of pricing conduct itself. Meanwhile, the French Autorité de la concurrence launched a public consultation on AI agents, signalling that the competitive dynamics of agentic AI systems are becoming a distinct regulatory priority.
These policy developments sit alongside concrete enforcement activity. In September 2025, Poland's antitrust authority confirmed that it was investigating potential collusion involving algorithmic pricing tools in the banking and pharmaceutical sectors. Earlier that year, the Netherlands Authority for Consumers and Markets launched a market investigation into algorithmic pricing practices in the airline industry, followed by Italy's competition authority, which indicated that it was engaging with the European Commission on ways to improve the price comparison of airline fares.
The UK appears likely to follow a similar trajectory. In September 2025, CMA Chief Executive Sarah Cardell stated that the authority is “watching and learning” from its “friends over in the US” as it intensifies scrutiny of how algorithms and generative AI may influence pricing behaviour. Reinforcing this direction, in its “Draft Annual Plan 2026 to 2027,” and as part of implementing its 2026 to 2029 strategy, the CMA has identified deterring algorithmic price collusion as a priority area. This increased focus has translated into concrete enforcement action.
On 24 February 2026, the CMA launched an investigation into suspected sharing of competitively sensitive information among competing hotel chains (Hilton, IHG Hotels, and Marriott) through the use of a hotel data analytics tool. In its press release, the CMA was careful to set out the broader context for this action. It acknowledged that companies use a wide range of data analytics tools and algorithms to support commercial decision-making, which can deliver significant benefits, including more intense competition, lower costs, and faster price adjustments to reflect changes in supply and demand. At the same time, the CMA emphasised that, when rival businesses share competitively sensitive information, whether directly or via a third-party data analytics provider, the uncertainty that normally exists between competitors is reduced. That reduction in uncertainty can weaken competitive pressure by making it easier for companies to predict one another’s behaviour and, ultimately, coordinate their conduct.
This enforcement action should, however, be read alongside the CMA’s broader and more nuanced position on algorithmic tools. The CMA published a research paper in 2021 examining the potential adverse effects of algorithmic pricing on competition and has continued to keep the use and impact of algorithms under its active review. Its guidance on horizontal agreements, published in August 2023, recognised that algorithms are not inherently anticompetitive. This position is reinforced in the CMA’s more recent publications, such as “Agentic AI and consumers” and “AI and collusion: frontiers, opportunities and challenges,” which were both released in March 2026. In these materials, the CMA goes further by emphasising that businesses (i) remain responsible for pricing and commercial outcomes shaped by AI systems; (ii) must take proactive steps to understand, test, and govern the technologies they deploy; and (iii) should audit input data and statistical methodologies, whether they are developed in-house or sourced from third parties.
In North America, the Canadian Competition Bureau published a report in January 2026 highlighting public feedback on algorithmic pricing and competition, reflecting growing attention to this issue across the region. The breadth of international scrutiny is further underscored by developments in the Asia-Pacific region. In October 2025, the Competition Commission of India published a market study on AI and competition, addressing pricing-related practices arising in the AI market.
Together, these developments confirm that regulatory attention to algorithmic pricing conduct is not confined to any single jurisdiction.
Practical Compliance Guidance
Companies deploying algorithmic pricing should treat these systems as creating antitrust exposure requiring active management rather than as technical tools outside the scope of legal oversight. The starting point is comprehensive documentation. Organisations should maintain detailed records of algorithm design, including the inputs each algorithm considers, the business logic it applies, the outputs it generates, and the decision-making process used in its development. This documentation serves dual purposes: It enables internal compliance assessment and provides evidence of lawful intent should questions arise.
Regular auditing should examine whether algorithms could facilitate coordination either directly or through common platforms. When a third-party pricing tool is used, companies should understand how many competitors employ the same system and demand transparency about its operation. Hub-and-spoke coordination risks increase dramatically when multiple market participants delegate pricing to a common algorithm. Companies should consider whether customised solutions might reduce risk compared to off-the-shelf tools used industrywide.
For dominant firms or platforms, algorithm audits should specifically assess whether ranking or pricing algorithms systematically advantage the platform’s own services over competitors’, or whether they apply discriminatory conditions to different market participants. The Google Search precedent makes clear that algorithmic implementation provides no defence to self-preferencing by dominant firms.
Technical controls should be embedded into algorithmic design from inception rather than retrofitted after deployment. This “compliance by design” approach makes lawful conduct the default rather than an afterthought. For example, algorithms can be designed with technical guardrails that prevent them from responding to competitor signals or ensure their ranking decisions apply consistent criteria across competing services. Building such controls into initial architecture proves far more effective than attempting to audit and constrain systems after deployment.
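A minimal illustration of the compliance-by-design idea, with hypothetical names and parameters throughout: the pricing routine's input type simply cannot carry competitor signals, so the guardrail is structural rather than a filter bolted on after deployment, and each recommendation records the inputs and logic used for later audit.

```python
from dataclasses import dataclass

# Hedged sketch only. The field names, margin floor, and uplift rule are
# hypothetical; the point is that competitor data has no representation in
# the input type, making independence the default rather than a policy.

@dataclass(frozen=True)
class PricingInputs:
    unit_cost: float      # the firm's own cost
    inventory: int        # the firm's own stock level
    demand_index: float   # demand signal derived from the firm's own sales

FLOOR_MARGIN = 0.05  # assumed minimum margin, documented for auditability

def recommend_price(inputs: PricingInputs) -> dict:
    """Return a price plus a record of the inputs and logic, for audit trails."""
    base = inputs.unit_cost * (1 + FLOOR_MARGIN)
    # Demand-responsive uplift driven only by the firm's own data:
    uplift = max(0.0, inputs.demand_index - 1.0) * inputs.unit_cost * 0.2
    return {
        "price": round(base + uplift, 2),
        "inputs_used": inputs,
        "logic": "cost-plus + own-demand uplift",
    }

rec = recommend_price(PricingInputs(unit_cost=10.0, inventory=120, demand_index=1.3))
print(rec["price"])
```

The audit record returned alongside the price is what gives the documentation described above evidentiary value: it shows, recommendation by recommendation, that no coordination signal entered the decision.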
Contemporary documentation of business decisions remains critical even in algorithmic contexts. Companies should maintain records demonstrating that pricing and other competitive decisions reflect independent business judgment rather than coordination signals. This may require technical documentation showing how algorithms process information and make decisions, supplemented by business records explaining strategic rationale. Such documentation proves invaluable if authorities question whether observed price parallelism reflects genuine independent decision-making or tacit coordination.
Employee training must address antitrust risks specific to algorithmic tools. Staff deploying or maintaining pricing algorithms should understand that discussions with competitors about algorithm design or operation create substantial risk. They should know that instructions to “ignore” specific competitors in repricing algorithms, or to coordinate pricing through a common platform, constitute serious violations. Clear policies should prohibit coordination even through intermediaries or shared technological infrastructure.
AI-Powered Compliance Tools: Proceed With Caution
Companies considering AI-powered compliance tools should carefully distinguish between systems that identify risks for remediation and those designed to eliminate evidence. Tools that provide post-sending alerts to legal departments, integrate with voluntary disclosure programs, or create escalation procedures for identified risks can support genuine compliance. These systems help organisations identify potential violations so they can be investigated and corrected before causing harm.
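The distinction can be illustrated with a deliberately simple sketch of the compliant variety: a scanner that flags risky outbound language for escalation to the legal team rather than rewriting it before sending. The phrase list and matching logic here are hypothetical; production tools use far more sophisticated models, but the design principle is the same.

```python
import re

# Illustrative sketch only: flag red-flag language for post-sending legal
# review. The patterns are hypothetical examples, not an exhaustive list.

RED_FLAGS = [
    r"\bignore (?:list|them|competitor)\b",
    r"\bmatch (?:their|competitor) pric",
    r"\bdivide (?:the )?market\b",
    r"\bagree(?:d)? not to undercut\b",
]

def flag_for_review(message: str) -> list:
    """Return matched red-flag patterns; a non-empty list triggers escalation."""
    return [p for p in RED_FLAGS if re.search(p, message, re.IGNORECASE)]

# The CMA poster-seller email quoted earlier would have been flagged:
hits = flag_for_review(
    "Presume your software is broken, so had to remove you from ignore list"
)
print(hits)
```

Critically, the function only reports matches; remediation happens through human review and escalation, preserving rather than suppressing the record.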
In contrast, systems that scan outbound communications and enable real-time rewriting to eliminate potentially problematic language before messages are sent serve as evidence avoidance rather than substantive compliance. The regulatory and reputational risks of such systems likely exceed any litigation benefits they might provide. Early adopters face heightened enforcement scrutiny, significant reputational damage if usage becomes public, and the possibility that authorities will view adoption as consciousness of guilt.
Our strong recommendation is that companies focus on building substantive compliance culture rather than deploying technological solutions aimed at evidence suppression. A robust compliance program includes clear policies prohibiting anticompetitive conduct, regular training for employees in roles creating antitrust risk, mechanisms for employees to raise concerns without retaliation, and accountability systems ensuring violations result in consequences. Technology should support these elements by helping identify risks and enabling swift remediation, not by obscuring conduct from regulatory view.
Companies should also anticipate regulatory developments. Competition authorities are certainly analysing AI compliance tools and will likely issue guidance within the coming year. Some jurisdictions may move to restrict or prohibit systems whose primary purpose is evidence elimination. Organisations that have already deployed such tools should reassess whether the risk–benefit calculus justifies continued use, and they should consider whether alternative approaches might achieve compliance goals with less regulatory exposure.
Looking Ahead
The Anticipated Regulatory Response
While no competition authority has yet issued formal guidance on AI compliance tools, the response is predictable based on enforcement agencies’ statements and priorities. Authorities are highly likely to view evidence-avoidance tools as facilitating anticompetitive conduct rather than preventing it. Early adopters may find themselves the subjects of investigations regardless of any underlying substantive violation because tool adoption itself suggests consciousness of wrongdoing.
Competition agencies may issue guidance restricting such tools or, in some jurisdictions, explicitly prohibiting systems designed primarily to eliminate evidence. Enhanced penalties may apply when authorities detect that evidence suppression occurred. The European Commission’s increasingly aggressive enforcement posture and US agencies’ focus on novel theories of harm suggest that both will treat systematic evidence avoidance as aggravating rather than mitigating conduct.
Critically, tools designed to prevent evidence creation differ fundamentally from tools designed to identify and remedy actual compliance risks. A system that flags potential violations so they can be investigated and corrected serves genuine compliance. A system that rewrites communications to eliminate traces of problematic conduct serves evasion. Competition authorities understand this distinction and will respond accordingly.
Overlapping Regulatory Frameworks
Companies using algorithms must navigate an increasingly complex web of overlapping requirements. The Digital Markets Act imposes specific obligations on designated gatekeepers regarding algorithmic ranking, self-preferencing, and data access. The AI Act establishes risk-based requirements for high-risk AI systems, potentially including those materially affecting competition or consumer outcomes. Sector-specific regulations in financial services, healthcare, and other industries impose additional algorithmic accountability requirements. Companies operating across multiple sectors and jurisdictions need coordinated compliance strategies that account for these intersecting frameworks rather than treating each in isolation.
International Divergence
Competition authorities are developing divergent approaches to algorithmic pricing and AI. The EU has moved toward expanding substantive rules through the Digital Markets Act and sector-specific regulations, supplemented by continued enforcement under traditional competition provisions.
US authorities are focusing primarily on enforcement against specific practices, with ongoing debates about whether existing guidelines require updating for algorithmic contexts.
The UK is developing its distinct post-Brexit approach through the Digital Markets, Competition and Consumers Act 2024 and the CMA's evolving enforcement priorities.
In the Asia-Pacific region, China, Japan, Korea, and other jurisdictions are rapidly developing regulatory frameworks that blend competition law with data protection and consumer protection concerns.
This divergence creates challenges for multinational companies that cannot simply design global algorithmic systems to the highest common denominator. Different jurisdictions emphasise different concerns, require different documentation, and apply different enforcement approaches. Companies need sophisticated compliance strategies that account for these variations while maintaining operational efficiency.
Anticipated Enforcement Priorities
Based on authority statements and recent actions, several areas appear likely to attract increased enforcement attention. Hub-and-spoke coordination through common algorithmic platforms will likely see continued scrutiny, particularly in sectors such as rental housing and hospitality, in which the RealPage litigation and the CMA's hotel data analytics investigation have drawn attention to pricing recommendation services. The critical question for enforcers will not be whether an intermediary formally describes its output as a "recommendation," but whether the system's design — through defaults, delegation mechanisms, or behavioural inducements — displaces the independent pricing autonomy of participating firms so coordinated outcomes are implemented as a matter of course.
Personalised pricing will face questions about potential discrimination, though the legal framework remains unsettled. Authorities will need to distinguish exploitative personalisation by dominant platforms from ordinary commercial pricing, and the interaction between algorithmic pricing and equality or data protection law is likely to generate enforcement activity that crosses regulatory boundaries. Cases in which the personalisation mechanism is opaque and affected consumers lack any meaningful ability to contest the differentiation will attract the greatest scrutiny.
Algorithmic exclusion by platforms may see increased enforcement as authorities build on the Google Search precedent. Consistent with the implementation-control framework developed above, liability will attach not to discrete exclusionary acts but to platform architectures, such as ranking systems, self-preferencing mechanisms, and interoperability restrictions, that are designed so exclusionary outcomes are produced structurally rather than episodically.
Finally, AI-powered evidence-avoidance tools will likely attract specific regulatory attention and possible prohibition. When such tools are adopted with awareness of their capacity to frustrate investigative discovery, regulators may treat them not merely as obstruction instruments but as components of the cartel management architecture itself, potentially attracting facilitator liability and serving as an aggravating factor in penalty calculations.
Conclusion
The rapid evolution of AI and algorithmic pricing creates significant competition law risks, but these risks are manageable through proactive compliance measures. Companies should begin by auditing existing algorithmic systems for competition law risks, paying particular attention to pricing algorithms, ranking systems, and tools that incorporate competitor information. When third-party pricing tools are used, organisations should assess hub-and-spoke coordination potential and consider whether alternative approaches might reduce exposure.
Employee training should specifically address antitrust risks created by algorithmic tools, ensuring that technical staff understand the legal implications of algorithm design choices and that business personnel recognise when algorithmic conduct creates regulatory exposure. Clear policies should establish permissible and prohibited uses of pricing algorithms, with particular attention to any systems that respond to competitor signals or coordinate through common platforms.
Organisations should exercise extreme caution regarding AI tools designed to eliminate evidence of business conduct. While technology can support compliance programs through risk identification and escalation procedures, systems aimed primarily at evidence avoidance create significant regulatory and reputational risks that likely exceed any benefits. Companies should focus instead on building genuine compliance culture supported, not replaced, by technology.
Strategic priorities include building compliance-by-design approaches into algorithmic development processes from inception, maintaining robust documentation of independent business decision-making, monitoring regulatory developments across relevant jurisdictions, and fostering organisational culture that treats competition law compliance as a business priority rather than an obstacle to overcome.
Competition authorities have made it clear that they will pursue anticompetitive conduct regardless of whether it occurs through traditional means or sophisticated algorithms. The companies that thrive in this environment will be those that embrace genuine compliance, deploy technology thoughtfully in support of lawful conduct, and recognise that the most sophisticated algorithm cannot substitute for principled business judgment.
For guidance on algorithmic pricing compliance, enforcement defence, or strategic counselling regarding AI and competition law, please contact Stephen Mavroghenis and Maria Belen Gravano.
Contacts
Stephen C. Mavroghenis
Partner

Maria Belen Gravano
Associate
