As we previously shared, entrepreneurs, government officials, investors, and scholars in China have paid close attention to generative AI in an effort to catch up with the US.
One area where China has acted quickly is in regulating generative AI. On April 11, less than six months after OpenAI launched ChatGPT, the Cyberspace Administration of China (CAC) released the "Administrative Measures for Generative Artificial Intelligence Services (Draft for Solicitation of Opinions)" (Link to Chinese Original; Link to English Translation). This kind of speed is rare in China's legislative processes. The government has indicated that it wants to both develop and regulate the AI industry, and the development of artificial general intelligence was emphasized at a recent Politburo meeting.
However, the Draft has attracted much discussion in China, with thousands of articles reacting to it. Today, we are sharing one of them, a translated report from the Southern Metropolis Daily, to give you a snapshot of the robust discussion on the ground. In the article, the reporter gathered expert opinions on many different issues around the Draft, including the prevention of false information, the distribution of responsibility, discriminatory content, and enforcement methods. It is noteworthy to see discussion from so many angles, which reflects the complexity of the topic and how much more work may be needed before the Draft comes into effect.
As general context, we note that this is a measure issued by the CAC, an institution with a role in both the administrative system and the party system. The official process for formalizing such a measure would require higher-level approval from both the CPC and the government. We also note that other departments of the government and the party may issue their own legislation or directives on this topic. It is clear that there will be more progress down the road, and we expect the government and industry to continue to move quickly on this topic.
We translated the main parts of the article:
AIGC has "creativity", making it difficult to ensure the accuracy of the content.
Since its release, ChatGPT has been popular, and competition among products integrating ChatGPT is becoming increasingly fierce. In China, Baidu released its pre-trained generative language model "Ernie Bot" in March, and Alibaba Group has officially launched its "Tongyi" large model. The battle to build a localized ChatGPT in China signals that a new wave of technology is coming.
With domestic companies launching large language models in quick succession, regulators are also taking action. New rules will be introduced for artificial intelligence-generated content (AIGC, also known as generative AI).
According to its official website, the Cyberspace Administration of China has drafted the "Administrative Measures for Generative Artificial Intelligence Services (Draft for Soliciting Opinions)" and is publicly soliciting opinions. The deadline for feedback is May 10th this year.
The "Draft" shows that in order to promote the healthy development and standardized application of generative artificial intelligence, this measure is formulated based on laws and regulations such as the Cybersecurity Law, Data Security Law, Personal Information Protection Law.
At the beginning of this year, ChatGPT quickly became popular thanks to its excellent text generation and dialogue capabilities, but at the same time there has been controversy over the low accuracy and questionable authenticity of its answers and their limited practical value.
The "Draft" stipulates that the content generated by generative artificial intelligence should be true and accurate, and measures should be taken to prevent the generation of false information. However, opinions on this matter vary.
Senior data legal expert Yuan Lizhi believes that this requirement may conflict with the inherent properties of generative AI. Current generative AI models mainly generate new content probabilistically from existing language data, with a certain degree of "creativity," and do not incorporate a real-time database for verification. Under these conditions, it is difficult to ensure the authenticity and accuracy of the generated content. "If this requirement is strictly enforced, generative AI products may not be able to enter the market."
However, Yuan also said he understands the intention behind this provision, which is to prevent generative AI content from being abused for false propaganda and public opinion manipulation. In his opinion, it is more appropriate to regulate at the application level rather than the technical level.
Yu Yang, assistant professor at the Institute for Interdisciplinary Information Sciences and director of the International Exchange Project of the Institute for AI International Governance, Tsinghua University, said that the requirement for truthful and accurate generated content is correct in principle, but clearer supporting technical standards and policy measures are required in the future. He pointed out that providers are the stronger party, with sufficient resources and capabilities to solve such problems, while the majority of users lack the ability to identify the authenticity of generated information. "(The requirement to identify and label the authenticity and accuracy of generated content) is technically feasible, but it depends on whether there is the willingness to invest."
Zhang Linghan, professor at the Institute of Data Law of China University of Political Science and Law, pointed out that many relevant laws and regulations in China have already made specific institutional arrangements requiring service providers to establish sound rumor-refuting mechanisms in response to false information. The "Draft" only emphasizes that generative AI should not be used to create or spread false information, which is a general requirement in the field of internet information content in China and not a new regulation.
Users and end-users should also bear responsibility for content production
Regarding responsibility, the "Draft" proposes that providers who use generative AI products to offer services such as chat and text, image, or sound generation, including those who provide programmable interfaces that support others in generating such content, shall assume the responsibility of the content producer for the generated content. Where personal information is involved, they shall assume the statutory responsibility of a personal information processor and fulfill personal information protection obligations.
Yuan pointed out that the concept of "provider" may have been borrowed from related drafts in the European Union, but the "Draft" only requires providers to assume the responsibility of content producers and does not mention the content production responsibility of users and end-users. "Read in context, this can easily lead to a misunderstanding that the provider should assume all content-related responsibilities."
[Baiguan: for those interested in comparing the legislative thinking of the EU, US, and China, you can check out King & Wood Mallesons' comparative article in Chinese here.]
He further analyzed that this provision may place an excessive burden of responsibility on the provider, resulting in an uneven distribution of responsibility. Although the platform bears the most responsibility, content generation is the joint result of users, end-users, and other parties, so the latter should also bear some responsibility for what is generated. For example, when a user provides text and commands the AI to generate an essay or an image, the user should also avoid providing content that violates regulatory requirements, thereby preventing the AI from generating illegal and non-compliant content.
Yu Yang shares a similar view. He pointed out that users who use generative AI for content creation should share responsibility, though the platform's responsibility cannot be waived; only if the platform shares responsibility to some extent will it have the motivation to correct problems in the model. "This is not only a typical responsibility-sharing issue, but also a risk management issue."
Specifically, there are three important risk management nodes in this process: the first is algorithm development; the second is algorithm use, i.e., content generation; and the third is content publication. From the perspective of risk supervision, each node should be assigned corresponding responsibilities.
A senior AI expert added that the user's responsibility is derivative, while the provider's responsibility is the focus of regulation.
In addition, what are the consequences of violating the "Draft"? It stipulates that, where the "Three Laws" [Baiguan: the Cybersecurity Law, Data Security Law, and Personal Information Protection Law] and other laws and administrative regulations do not provide otherwise, the CAC and relevant departments shall issue warnings to violators, criticize them publicly, and order corrections within a time limit; if violators refuse to make corrections or the circumstances are serious, they shall be ordered to suspend or terminate the use of generative AI to provide services, and a fine of between 10,000 and 100,000 yuan shall be imposed. If the violation constitutes a breach of public security administration, public security penalties shall be imposed according to law; if it constitutes a crime, criminal responsibility shall be pursued according to law.
Yuan said that suspending or terminating services after the fact would have a significant impact on the companies involved. As long as risks remain controllable, regulatory measures should avoid inhibiting the development of the industry; after all, AI is a key industry supported by the country and an important area of international competition.
Security assessment and algorithm filing are the main regulatory measures for AIGC.
For a long time, real-name registration has been a basic method of internet governance in China. The "Draft" specifies that providers should comply with the provisions of the "Cybersecurity Law" and require users to provide real identity information.
Yuan Lizhi pointed out that the corresponding provision is Article 24 of the "Cybersecurity Law," which requires real-name registration. He explained that the requirement for users to provide real identity information means that users of AI products must register with real-name information.
According to Article 24 of the "Cybersecurity Law," when network operators provide network access or domain name registration services, handle network access procedures for fixed-line or mobile phones, or provide information release or instant messaging services to users, they should require users to provide real identity information when signing agreements or confirming the provision of services. If users do not provide real identity information, network operators shall not provide the relevant services.
Yuan Lizhi emphasized that the requirement of real-name registration appears in many relevant laws and regulations and is not limited to the scenarios of network access, instant messaging, and information release listed in Article 24 of the "Cybersecurity Law." "In practice, the requirement of real-name registration is broader: all internet information services require real-name registration."
Zhang Linghan added that the "Regulations on the Deep Synthesis of Internet Information Services," in effect since January this year, also require real-name registration in line with the "Cybersecurity Law." The rationale is that a user inputting specific content to generate synthetic text, audio, or video is also engaging in a form of information release. However, in the generative AI scenario, whether users' questions and the AI's answers constitute information release by the user is still open to discussion.
To promote the healthy development and standardized application of generative AI, the "Draft" also sets requirements that enterprises must meet before using generative AI products to provide services to the public. Specifically, they should apply to the CAC for a security assessment in accordance with the "Regulations on the Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities," and complete algorithm filing, change, and cancellation procedures in accordance with the "Regulations on the Management of Algorithm Recommendations for Internet Information Services."
In Zhang Linghan's view, security assessment and algorithm filing have long been important means of information service management in China. In the field of information content security, the "Draft" does not create many new regulatory requirements for generative AI; rather, it mainly reiterates existing information content security principles and the supporting regulations of the existing regulatory system. China has already established relatively complete regulations and practices in these two areas. When they are applied to generative AI, the focus should be on whether enterprises can detect potential risks in security assessments and whether they can respond in a timely manner when risks occur.
"Previously, the enforcement of these two regulations (security assessment, algorithm filing) was limited, and now they are trying to make them the main regulatory focus of AIGC," Yuan Lizhi added.
A Southern Metropolis reporter observed that the draft includes several provisions to prevent discrimination. For example, Article 14 states that "measures should be taken to prevent discrimination based on race, nationality, religion, country, region, gender, age, occupation, etc. in the process of algorithm design, training data selection, model generation and optimization, and service provision," and Article 12 states that "providers shall not generate discriminatory content based on users' race, nationality, gender, etc." Why is anti-discrimination so important in the field of generative AI?
Zhang Linghan stated that anti-discrimination has always been an important requirement in China's algorithmic regulatory system. However, past legislation treated whether consumers received fair algorithmic decision results as the compliance requirement. This time, the requirement to prevent discriminatory content based on race, nationality, gender, etc. is stated explicitly, which is relatively rare.
"Discrimination caused by data or algorithms is relatively hidden and can cause significant harm to individuals' basic rights, and even affect the public interest of society," Yuan Lizhi said when analyzing the reason why "fairness and anti-discrimination" are highly valued in the entire AI field.
Overall, in Yu Yang's opinion, the provisions in the current "Draft" remain largely principles-based, and relevant authorities should accelerate the research and development of AI governance technology. Such technology reflects a country's level of AI governance and is actually as important as so-called large-scale model development. "If the AI governance technology is powerful enough, it will confer an advantage in the global AI market competition in the future."