The recent passage of the EU AI Act presents an opportunity to conduct a comparative law analysis against China’s earlier promulgated artificial intelligence (AI) regulations. This is a crucial time for the industry as global attention increasingly focuses on the ethical development, deployment, and regulation of AI technologies.
China, a leading player in AI development, has established a comprehensive legislative and policy framework aimed at steering the advancement and application of AI technologies. This framework reflects a dual commitment to fostering innovation while upholding national security, ethical standards, and societal values. In contrast, the EU’s approach, embodied in the AI Act, emphasises a risk-based categorisation of AI systems, focusing on regulating high-risk applications to protect human safety and fundamental rights.
China’s AI framework is a strategic blend of laws, regulations, policies, guidelines, and standards. Collectively, they aim to govern and guide the development and use of AI technologies to promote innovation while ensuring national security and compliance with ethical standards and societal values.
The Cyberspace Administration of China (CAC) is at the forefront of this regulatory regime and has been instrumental in issuing specific regulations targeting algorithm recommendations, deep synthesis (including deepfakes), and generative AI services. These technology-specific regulations are crafted to encourage innovation within the AI industry while maintaining a degree of control over its development trajectory. By focusing on specific AI technologies, China’s regulations aim to mitigate risks associated with these advancements, ensuring that the benefits of AI are harnessed for societal good while minimising potential harm.
China’s AI regulations, as summarised above, specifically target distinct AI technologies through a tailored approach. They aim to stimulate innovation and industry growth while embedding ethical standards into a governance structure that emphasises national security, public interest, and the protection of individual rights.
Conversely, the EU’s AI Act adopts a technology-neutral stance, systematically categorising AI systems by their associated risks and enforcing strict regulations on those deemed high-risk to ensure human safety, fundamental rights, and adherence to ethical norms.
Despite their differing approaches, the AI regulatory frameworks in both China and the EU converge on promoting responsible AI development and use. They share vital objectives, including ensuring data security and user privacy, evaluating the security and risks associated with AI systems and algorithms, mandating provider accountability for their AI systems’ safe and compliant operation, demanding transparency and explainability of AI systems, and safeguarding user rights and interests.
China’s AI regulations delineate more direct responsibilities for providers in monitoring user behaviour and content moderation, distinct from the EU AI Act. Specifically, Chinese regulations mandate the establishment of content review mechanisms to filter, detect, and mitigate the distribution of illegal or harmful content. This stands in sharp contrast to the EU AI Act, which does not require providers to police the behaviour of their users.
Additionally, these regulations require comprehensive security assessments, audits, and risk evaluations of AI systems and algorithms, with an emphasis on addressing identified risks and maintaining ongoing compliance through updates and corrections. This entails a more pronounced level of government oversight and cooperation with authorities than is required in the EU and other jurisdictions.
Furthermore, China mandates a registration process for all AI providers, ensuring that each is on record with government authorities. This contrasts with the EU’s approach of registering only high-risk AI applications in a public database. The difference highlights a more authoritative regulatory stance in China, with a broader scope of oversight and intervention in AI system management and operation, reflecting a distinct approach to AI governance focused on national security and public order.
By addressing specific AI technologies that may pose risks for individuals as well as society at large, China’s objective appears to be to harness AI’s benefits for societal good while mitigating risks associated with its rapid advancement and integration into daily life. This regulatory approach seeks to protect humans in their use of AI technologies and to guide and encourage innovation and development in such technologies.
While the regulatory regimes of both the EU and China aim to ensure AI’s safe and ethical use, China’s approach is notably proactive in specifying operational guidelines for AI services, reflecting a distinctive blend of government oversight and technological innovation that addresses critical facets of AI’s interaction with society and individual rights. This approach underscores a recognition that AI technologies will have profound impacts on society, the economy, and individual rights.
An in-depth understanding of the intricacies of China’s AI policy framework and these regulations is crucial for companies seeking to navigate China’s complex and evolving AI legal landscape.