Understanding China's AIGC regulation: In-depth guide and context
A framework guided by prudent, growth-oriented principles
On the governance of generative AI, China's official and private sectors have reached a consensus: the United States still leads in the technology, and China is determined to catch up with the pace of development first and embrace the challenges as they come, without further hesitation.
Three months after a hotly debated draft, China's regulatory departments jointly issued a formal legal document to regulate the generative AI sector. Compared with the previous draft, this official version contains many substantial changes, restoring the confidence and enthusiasm of the Chinese AI industry. Analyzing the changes between versions of a regulatory document is an important way to understand the thinking behind the regulation. In this article, we analyze the changes between the draft and the formal document, what they reveal about the dynamics of China's AI regulation, and what they mean for the development of the Chinese AI industry.
Key Takeaways
Following the public consultation on the draft, the formal legal document indicates that China's regulators have adopted an "inclusive and prudent" approach to AI regulation. Its scope focuses mainly on public-facing generative AI services in mainland China; internal R&D and non-public-facing AI applications are not targets of regulation but rather activities to be encouraged.
A preliminary framework of industry-specific AI regulation under multiple regulatory authorities has been established, making China's AI regulation more industry- and scenario-oriented. This could help offset the limited perspective and resources of any single regulator, though it remains to be seen whether it will lead to further regulatory overlap and procedural complexity.
Quick Guide:
This is a comprehensive analysis in five parts; feel free to jump to the sections that interest you most. Some legal background is included, as this article is intended for a broad audience with diverse backgrounds and cultures.
I. Key background context (about 3 mins)
II. Legal part: China's new regulatory document on AI from draft to formal (about 8 mins)
III. China's main principles on AI Regulation (about 7 mins)
IV. A basic guide for current and prospective practitioners and investors in China's AI industry (about 4 mins)
V. What else remains to be observed (about 2 mins)
I. Key Background Context
Unlike the metaverse, which remains mostly at the concept stage, the emergence of ChatGPT and ChatGPT-like large language models (LLMs) has truly changed everything. These new AIs are revolutionary because programs woven from code have begun to understand human thoughts and learn to express them. With intelligent content-generation capabilities, such so-called "generative AI" or "general AI" can chat, paint, program, create movies, and sing flawlessly in the voices of famous singers. Perhaps not everyone is interested in this progress in AI technology, but everyone is being affected by it to varying degrees.
In China, although the public has always been highly receptive to new technologies, especially since the era of reform and opening-up, this time is different. As elsewhere, reactions to generative AI are diverse and complex. Some see business opportunities and the potential to empower individual imagination and creativity, while others notice the dangers and fear being overtaken or even replaced by AI, or mocked and deceived by AIGC. And of course, many have yet to realize that AI and AIGC are already creeping into their lives, and there are also forgotten corners of society where AI is not a topic of concern at all: some people know nothing of AI and are focused on their immediate survival, as there may not be many people caring about them.
People everywhere lead intertwined lives; some have only just learned about AI, while others have long been aware of it. As those who govern society, regulators may not track the real-time dynamics of technology as closely as the technical community does, but they at least notice the potential impacts and implications of AI earlier than the general public. They confront a question earlier than the average person exposed to AI: how should we approach AI and its impacts, whether positive or negative?
Regulators around the world originally had more time to contemplate this issue, but the “technological explosion” of ChatGPT caught them off guard. Whether they feel excitement, concern, fear, or a mix of emotions after their initial understanding, one thing remains constant – they cannot choose to be indifferent due to their responsibilities. They also can't wait for the tech industry to fully understand the magic behind generative AI before taking action – new assignments have already arrived, and society is waiting for them to deliver.
Several European countries took immediate action. Some chose to pull AI's plug (as Italy's data protection authority did in March, though it plugged it back in a month later), while others knocked on AI's door and told it that it was surrounded (like France and Spain). The European Union said, "Maybe we should have a meeting to study this first." Meanwhile, in the United States, OpenAI's CEO, Sam Altman, felt the gaze of regulators and simply raised his hands, saying (not literally), "I guess I can smell trouble in the air, and I'm a bit overwhelmed." Chinese regulators had just finished studying deepfakes, and on their way home they read the news about ChatGPT: fine, times have changed, again. (If you would like to know more seriously what regulators are doing, you're welcome to read some summaries Here.)
Regardless of how quickly the EU promulgates the AI Act it is currently negotiating and drafting, or how strongly Sam Altman expressed OpenAI's willingness to embrace regulation in his earlier testimony before the U.S. Congress, research and practice in AI often outpace social norms and regulations. Moreover, the more promising a country or economy's AI sector is, the less its regulator can avoid an even harder question: how to strike a balance if the social impact of AI is like a delicate scale, with opportunity and development on one side and safety and challenges on the other?
Given the rapid advance of AI technologies, regulators may need to adopt a more forward-looking approach and steer regulation at the level of overall principle and direction.
II. Legal part: China's new regulatory document on AI from draft to formal
China's policymakers are deeply concerned: even focusing only on domestic matters, challenges of the post-COVID-19 era keep emerging one after another: economic recovery, education, employment, and now generative AI. The challenges seem endless.
Almost simultaneously with the release of ChatGPT, there was a flurry of attention and experimentation within China's private sector. Tech self-media outlets like "量子位" and "机器之心" immediately exploded in popularity by tracking AI developments. I also joined some AIGC communities to learn and observe. People are highly creative, discussing various "prompts", exploring how to build more diverse applications on top of API interfaces, and following the development of domestic LLMs. People are also waiting to see how the Chinese authorities will respond. This anticipation, however, is tinged with anxiety and unease. People know that generative AI is not entirely controllable, and there is a phenomenon the tech community calls "hallucination" (producing unreliable or outright incorrect content in a confident tone). There are concerns that outputs may fail to conform to political correctness, and worries that the sector could be banned outright by a single executive order.
On the official side, the Chinese government decided to entrust this task to the Cyberspace Administration of China ("CAC"), the main regulatory department for cyberspace. So in April of this year, less than five months after ChatGPT's debut, the CAC announced to everyone, "Hey guys, I've got a plan": the Measures for the Administration of Generative AI Services (Draft for Public Comments) (the "Draft"). What is clear is that the CAC had a strong desire to get ahead of other economies, such as the EU and the US, before they came up with formal AI regulatory documents. As the title suggests, the regulatory target is very specific: generative AI, and more precisely, generative AI services provided to the public in China.
Various sectors' perspectives on the Draft: many uncertainties and a conservative approach
As the first piece of generative AI-specific legislation in China, the Draft garnered widespread attention from the technology, investment, and legal communities. During the public consultation phase (within one month of its release), experts and practitioners from various fields expressed mixed views, most of them leaning toward concern. While the Draft includes a clause encouraging AI development, it is seen more as a symbolic gesture, because the majority of its requirements lean toward absolute and stringent regulation. This suggests that although the regulators did not ban AI outright as Italy did, they were still more focused on the potential negative aspects of AI, especially AIGC. For example:
The Draft stipulates that AIGC must be truthful and accurate, without any false information. However, experts and practitioners see this requirement as overly idealistic, given the inherently generative nature of AIGC, which clashes with the technical principles behind it. It's like asking someone to write a novel but demanding that the plot not be fictional. The issue of "hallucination" mentioned earlier remains challenging to address at the technical level.
Some provisions also pose practical implementation difficulties from a technical standpoint. For instance, the Draft specifies that content must not contain anything that "may disrupt economic and social order" or violate laws, and that if such content is found, it must be corrected within three months and not recur. The problem lies in the vagueness and breadth of these standards: AIGC outputs still involve numerous uncontrollable factors under current technological conditions, making it difficult to guarantee compliance within the specified timeframe.
Additionally, there is uncertainty regarding the impact of certain provisions on the development of the AI industry. For example, the Draft applies to generative AI services provided to the Chinese public within China's borders, but it does not clearly state whether it also applies to research activities, services not facing the general public (e.g., internal R&D, to-B business), or services offered only to non-Chinese users.
Moreover, the Draft places the primary, substantial responsibility for possible issues and consequences arising from generated content on AI service providers, who face the risk of service termination and fines of up to 100,000 RMB. Many experts have questioned who holds the greater responsibility for AIGC results, the AI providers or the users, and accordingly whether it is fair for AI service providers to bear full responsibility for content compliance. While the penalty amount may not seem significant, for a technology that can be used by a huge number of users simultaneously, counting violations and fines per API call or per generated output could lead to astonishing totals.
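To see why, consider a purely hypothetical back-of-the-envelope calculation (the per-violation amount uses the Draft's upper bound; the volume figure is an assumption made only for illustration). If each violating output counted as a separate violation:

$$100{,}000\ \text{RMB per violation} \times 10{,}000\ \text{violating outputs} = 1{,}000{,}000{,}000\ \text{RMB}$$

That is an exposure far beyond what the nominal fine amount suggests.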
These combined factors left the Chinese AI industry feeling somewhat disheartened: if the official version did not change significantly, entrepreneurs eager to venture into AI services in China might sigh and walk away under the weight of compliance and commercial uncertainty. I noticed that many even pessimistically commented that "you can see the end of domestic AIGC's development at a glance."
The formal version of the legal document: changes arrive, quickly and substantially
Amid the controversy and discussion, on July 13, 2023, after a three-month interval, the formal version was released: the Interim Measures for the Administration of Generative AI Services (the "Interim Measures"). The word "Interim" has been added to the title. Additionally, the issuing authority is no longer just the CAC but seven departments in total. The document officially comes into effect on August 15.
Upon carefully comparing the Interim Measures with the Draft, many legal professionals, myself included, have come to realize that the changes are far from trivial.
The main contents and modifications of the Interim Measures are as follows: