The Cyberspace Administration of China (CAC), China's data protection and cybersecurity watchdog, recently passed the final text of the Internet Information Service Algorithm Recommendation Management Regulations, an extensive set of rules – one of the most fully developed artificial intelligence (AI) regulations in the world – designed to govern the use of AI-based recommendation algorithms.
Recommendation algorithms are among the most widespread types of AI in use today, deployed by companies in China and beyond, from ByteDance to Tencent Holdings. The new regulations, which were published along with a series of related FAQs for journalists, are part of China's ongoing efforts to regulate technology platforms in contexts ranging from securities regulation and competition to data security and privacy. Once implemented, the regulations will have a major impact on companies relying on recommendation algorithms both inside and outside China.
The regulations, based on a draft promulgated this past summer, clearly state their goal: to “carry forward the Socialist core value view, safeguard national security and the social and public interest.” They apply to “algorithmic recommendation service providers” – i.e., companies that use algorithmic technologies to “provide information to users,” such as app operators who use algorithmic recommendations in their platforms. The regulations require these providers to “uphold mainstream value orientations, optimize algorithmic recommendation service mechanisms, vigorously disseminate positive energy, and advance the use of algorithms upwards and in the direction of good.”
The regulations set out a number of principles that are designed to guide the activity of algorithmic recommendation service providers. Such providers must: take “measures to prevent and curb the dissemination of harmful information” including the generation of “fake news”; avoid setting up algorithmic models that “violate laws and regulations or ethics and morals, such as by leading users to addiction or excessive consumption;” and not carry out “monopolistic or improper competition acts.”
Under the regulations, providers must immediately halt the spread of any illegal information that they discover, effectively placing responsibility for content moderation on platforms that, in the U.S., would be immune from civil liability under Section 230 of the Communications Decency Act. Providers must not only take corrective actions when illegal information is discovered, but also report any such discovery to the CAC.
Algorithm providers are further instructed to offer a series of features designed to protect user rights, including notifying users in a clear manner about the basic principles, purposes and operational mechanisms of an algorithm; providing users with a choice “to not target certain of their individual characteristics”; and installing “convenient and efficient user complaint and public complaint and reporting access points.” To afford users these rights, developers may need to create an interface where users can view their profiles and actively select and remove keywords used by the recommendation algorithm, allowing them to test alternative outcomes. This would be a first for any regulation of algorithms anywhere in the world.
The regulations also contain specific requirements for services that are provided to minors, stating that algorithm providers may not provide minors with information that may incite the imitation of unsafe conduct or lead to “online addiction.” Moreover, the regulations require that providers of algorithm recommendation services to the elderly must consider the “elderly’s requirement in going out, undergoing medical treatment, consumption, handling affairs, etc.” The regulations also acknowledge labor rights, specifying that providers of algorithm recommendation services to workers must guarantee workers’ legal rights and interests, such as remuneration, rest, and vacation.
Individuals or organizations that discover activities that violate these provisions may file a complaint or report with the government. Relevant government departments may issue a warning or report and order rectification within a limited time. The government may impose a fine of between 10,000 and 100,000 yuan (between $1,570 and $15,700, amounts that have doubled since the draft regulations) if the violations are not corrected or the circumstances are “grave.” Violations may also give rise to criminal liability.
Like the recently passed Personal Information Protection Law (PIPL), which came into force last November – less than three months after it passed in the National People’s Congress – the regulations will take effect on March 1, 2022, shortly after their publication. In light of this looming deadline, companies that rely on algorithms for content recommendations and targeted advertising, as well as the platforms that provide these services, should begin to consider how the regulations might affect their business.
Companies that operate both in China and overseas may have to pursue a dual-track approach, ensuring that their Chinese operations comply with the regulations while maintaining another set of services that align with other countries’ approaches. Either way, how these regulations are enforced and how companies adapt to their terms will likely affect the global AI ecosystem.