Balancing Innovation and Regulation: Comparing China’s AI Regulations with the EU AI Act

11 April, 2024

The recent passage of the EU AI Act presents an opportunity for a comparative analysis with China’s earlier-promulgated artificial intelligence (AI) regulations. This is a crucial time for the industry, as global attention increasingly focuses on the ethical development, deployment, and regulation of AI technologies.

China, a leading player in AI development, has established a comprehensive legislative and policy framework aimed at steering the advancement and application of AI technologies. This framework reflects a dual commitment to fostering innovation while upholding national security, ethical standards, and societal values. In contrast, the EU’s approach, embodied in the AI Act, emphasises a risk-based categorisation of AI systems, focusing on regulating high-risk applications to protect human safety and fundamental rights.

China’s Framework

China’s AI framework is a strategic blend of laws, regulations, policies, guidelines, and standards. Collectively, they aim to govern and guide the development and use of AI technologies to promote innovation while ensuring national security and compliance with ethical standards and societal values.

The Cyberspace Administration of China (CAC) is at the forefront of this regulatory regime and has been instrumental in issuing specific regulations targeting algorithm recommendations, deep synthesis (including deepfakes), and generative AI services. These technology-specific regulations are crafted to encourage innovation within the AI industry while maintaining a degree of control over its development trajectory. By focusing on specific AI technologies, China’s regulations aim to mitigate the risks associated with these advancements, ensuring that the benefits of AI are harnessed for societal good while minimising potential harm.

Key AI Regulations

  • The Internet Information Service Algorithm Recommendation Management Regulations (issued on December 31, 2021, effective from March 1, 2022) applies to internet information service providers utilising algorithm recommendation technologies. To establish a framework that ensures algorithmic transparency and fairness, the regulation mandates that service providers disclose the principles behind their recommendation algorithms, offer users options to customise or opt out of algorithmic recommendations, and implement mechanisms to identify and correct algorithmic biases. This proactive stance safeguards users’ rights against algorithmic manipulation, enhancing user autonomy and trust in digital platforms.
  • The Internet Information Service Deep Synthesis Management Provisions (issued on November 25, 2022, with effect from January 10, 2023) applies to internet information service providers that employ deep synthesis technologies, including deepfakes and other AI-generated content that can blur the line between reality and fabrication. The regulation takes a proactive approach to curbing the misuse of deep synthesis for creating and spreading false or misleading information and to ensuring that such technologies are used in ways that protect the integrity of digital communications, underscoring a strong commitment to digital trust and safety.
  • The Provisional Regulations for the Management of Generative AI Services (issued on July 10, 2023, with effect from August 15, 2023) applies to providers offering content services through generative AI technologies, encompassing a broad spectrum of AI-generated content such as text, images, audio, and video. By setting standards for data handling, content generation, and user interaction, these regulations aim to cultivate a generative AI ecosystem that is innovative, respectful of copyright, and protective of personal privacy. They encourage the development of AI that contributes positively to society while instituting safeguards to prevent harm and misuse.

Comparing China and the EU

China’s AI regulations, as summarised above, specifically target distinct AI technologies through a tailored approach. They aim to stimulate innovation and industry growth while embedding ethical standards into a governance structure that emphasises national security, public interest, and the protection of individual rights.

Conversely, the EU’s AI Act adopts a technology-neutral stance, systematically categorising AI systems by their associated risks and enforcing strict regulations on those deemed high-risk to ensure human safety, fundamental rights, and adherence to ethical norms.

Despite their differing approaches, the AI regulatory frameworks in both China and the EU converge on promoting responsible AI development and use. They share vital objectives, including ensuring data security and user privacy, evaluating the security and risks associated with AI systems and algorithms, mandating provider accountability for their AI systems’ safe and compliant operation, demanding transparency and explainability of AI systems, and safeguarding user rights and interests.

China’s AI regulations delineate more direct responsibilities for providers in monitoring user behaviour and moderating content than the EU AI Act does. Specifically, Chinese regulations mandate the establishment of content review mechanisms to filter, detect, and mitigate the distribution of illegal or harmful content. This is in sharp contrast to the EU AI Act, which does not require providers to police the behaviour of their users.

Additionally, these regulations require comprehensive security assessments, audits, and risk evaluations of AI systems and algorithms, with an emphasis on addressing identified risks and maintaining ongoing compliance through updates and corrections. They also entail a more pronounced level of government oversight and cooperation with authorities than is required in the EU and other jurisdictions.

Furthermore, China mandates a registration process for all AI providers, ensuring that a record is kept with government authorities, whereas the EU requires registration only of high-risk AI applications in a public database. This highlights a more authoritative regulatory stance in China, with a broader scope of oversight and intervention in the management and operation of AI systems, reflecting a distinct approach to AI governance focused on national security and public order.

By addressing specific AI technologies that may pose risks to individuals as well as society at large, China appears to aim to harness AI’s benefits for societal good while mitigating the risks associated with its rapid advancement and integration into daily life. This regulatory approach seeks both to protect people in their use of AI technologies and to guide and encourage innovation and development in such technologies.

While the regulatory regimes of both the EU and China aim to ensure AI’s safe and ethical use, China’s approach is notably proactive in specifying operational guidelines for AI services. It reflects a distinctive blend of government oversight and technological innovation that addresses critical facets of AI’s interaction with society, underscoring a recognition that AI technologies will have profound impacts on society, the economy, and individual rights.

An in-depth understanding of the intricacies of China’s AI policy framework and these regulations is crucial for companies seeking to navigate China’s complex and evolving AI legal landscape.
