
Ethical Development of Artificial Intelligence - China's Regulatory Approach and Corporate Compliance Recommendations (Part 2)

2023-05-24

Editor's Note:

"What's the hottest trend after the metaverse? Undoubtedly, the answer is ChatGPT.

By the end of January 2023, ChatGPT surpassed 100 million monthly active users, becoming the fastest-growing consumer application in history. This news quickly ignited the ICT industry and the capital market, sparking a profound question for workers: "Will ChatGPT replace you?"

While people were curious and willing to be "guinea pigs," the past two months witnessed internet giants and leading startups launching their own large-scale AI-generated content (AIGC) products, creating a sense of an AI arms race.

Do you know:

- What is the relationship among ChatGPT, AI, AGI, and AIGC as different forms of artificial intelligence, and how has the paradigm shifted?

- How does the global and domestic landscape of large AI models reflect the new "land-grab" movement?

- What are the similarities and differences in the regulatory landscape and priorities for AIGC development across countries, including China?

- From the perspective of corporate compliance, what are the key considerations in terms of data, content, licenses, and infrastructure?

Centered around the above questions, the Public Policy team at Sinobravo is pleased to present the "Compliance Series: Ethical Development of Artificial Intelligence - China's Regulatory Approach and Corporate Compliance Recommendations." This two-part series shares our observations and insights into the AIGC field.

 

Perspective on Global Regulatory Trends: Pandora's Box or Prometheus' Fire?

(1) Countries worldwide show a clear intention to regulate AI development, but the pace and means of implementation vary.

With the rapid development of artificial intelligence technology, its applications have encompassed various fields, from healthcare and finance to education and entertainment. AI has become an indispensable part of modern society. However, the widespread application of AI also brings forth a series of security and ethical issues, such as privacy protection, discrimination and unfairness, algorithm compliance, and more. Will AI have self-awareness? What impact will it have on society, culture, and knowledge systems? Where exactly is the "human-machine boundary"? Will humans be capable of "contending" with AI in the future? To address these issues, countries worldwide are strengthening the regulation and management of AI technology.

In fact, the "flame" of AI regulation was not ignited only recently. As early as April 2021, the European Commission proposed the draft "Artificial Intelligence Act," considered a milestone for the EU in the field of AI and its broader digital strategy. However, the proposal has not progressed as smoothly as anticipated, as members of the European Parliament have yet to reach consensus on its basic principles.

The National Institute of Standards and Technology (NIST) in the United States also released Version 1.0 of the "Artificial Intelligence Risk Management Framework" (AI RMF) in January 2023, aiming to guide organizations in developing and deploying AI systems to mitigate security risks, avoid biases and other negative consequences, and enhance the trustworthiness of artificial intelligence.

Countries such as the United Kingdom, Italy, Canada, France, and Spain have also started taking action. According to the Financial Times, the UK government released its first AI white paper in March 2023, outlining five principles for AI governance. Furthermore, as reported by Reuters, on March 31, the Italian data protection authority (Garante) requested a ban on the use of the AI chatbot ChatGPT developed by OpenAI in Italy and initiated an investigation into the ChatGPT application for alleged privacy rule violations. The Office of the Privacy Commissioner of Canada recently announced an investigation into OpenAI for alleged "collection, use, and disclosure of personal information without consent."


(2) China's regulatory practices reflect the country's approach of promoting the development of artificial intelligence while strengthening risk management.

Aligned with global trends, China's regulatory practices are continuously evolving. On April 11, the Cyberspace Administration of China issued the "Management Measures for Generative Artificial Intelligence Services (Draft for Comments)." This document outlines the social ethics that generative AI should adhere to in various stages, such as research and application, as well as legal obligations concerning intellectual property rights and personal privacy protection. It also specifies the relevant regulations and responsibilities that service platforms should abide by. This marks a significant national-level regulatory policy for the booming AI industry in China.

China is one of the earliest countries to implement governance and regulation of artificial intelligence. In recent years, China has released regulations such as the "Regulations on Algorithmic Recommendation Management for Internet Information Services" and the "Regulations on Deepfake Technology Management for Internet Information Services" to address emerging technologies and applications. The draft for comments further refines and standardizes the regulatory approach, aiming to promptly respond to and address the risks associated with technological development, while providing clearer standards and guidance for all stakeholders.

In the face of the rapidly growing generative AI industry, the draft for comments demonstrates a proactive regulatory approach. By setting out the key guidelines for orderly development in a timely manner, it lets participants in this "fast lane" understand the boundaries that must not be crossed, promoting the healthy and smooth progress of generative AI while avoiding potential harm.

Overall, the draft for comments aims to promote the healthy development and standardized application of generative artificial intelligence, sending a positive signal for industry compliance. However, a study of the specific provisions shows that regulatory authorities have set out comprehensive compliance requirements for providers and users of generative AI services, covering content compliance, the prohibition of algorithmic discrimination, the prevention of unfair competition and monopolistic behavior, the protection of the legitimate rights and interests of others, provider responsibilities, security assessment and algorithm record-filing requirements, and the legality of training data sources. Major institutions and law firms have already published extensive analyses of the "Management Measures for Generative Artificial Intelligence Services (Draft for Comments)," so we will not elaborate further here.

 

Compliance Points for AIGC Enterprises

From the perspective of enterprise service agencies, we offer the following compliance observations for companies engaged in AIGC business within China for reference:

(I) Data Compliance

Given the different regulatory requirements during the stages of training, testing, and generation in the field of generative AI, compliance risks at the data level should be distinguished based on the behaviors and relevant regulations associated with each data phase.

a. Compliance risks during data collection: Generative AI relies on large volumes of data for training. ChatGPT, for example, states that its training data comes from publicly available information, including web text, linguistic knowledge bases, dialogue datasets, scientific papers, and other sources. Even so, the collection methods (data scraping) and the training data itself may still infringe upon personal information, including sensitive personal information. The Personal Information Protection Law classifies biometric information as sensitive personal information, and processing it requires the individual's separate consent or another legal basis. (An illustrative engineering mitigation sketch for the risks in this subsection appears at the end of the subsection.)

b. Compliance risks during data processing: In the processing of relevant data and information, generative AI carries the risk of using and leaking trade secrets. For instance, company employees may input proprietary business data when using AIGC to generate related content. The input information and interaction messages from users may be used for continuous iterative training, thereby posing a risk of reusing and leaking trade secrets.

c. Cross-border data risks: When users use AIGC services and input questions through the input interface, the data of the user's conversation with the AI will be stored in the data centers of the developer or the cloud service provider they use. During the process of human-computer interaction and Q&A, the personal information and trade secrets shared by Chinese users may carry the risk of cross-border transfer.

According to Article 37 of the Cybersecurity Law, operators of critical information infrastructure must store, within the territory of the People's Republic of China, the personal information and important data collected and generated during their operations. Where there is a genuine business need to provide such data abroad, a security assessment must be conducted in accordance with the measures formulated by the Cyberspace Administration of China together with the relevant departments of the State Council; where laws and administrative regulations provide otherwise, those provisions apply. Therefore, companies that use overseas AIGC services in their operations need to strictly comply with the Cybersecurity Law, the Data Security Law, the Measures for the Security Assessment of Exporting Personal Information, and other relevant regulations to avoid administrative or even criminal liability.
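The risks described in (a), (b), and (c) above can be partially mitigated at the engineering level with an input-sanitization and outbound-transfer gateway placed in front of any third-party AIGC service. The following is a minimal sketch in Python; the redaction patterns, denylist terms, region flag, and function names are all illustrative assumptions rather than requirements drawn from any specific regulation, and whether a given transfer actually triggers a security assessment must be determined case by case.

```python
import re

# Illustrative patterns only: a production deployment needs a vetted PII taxonomy.
# The resident-ID and mobile-number formats below are deliberately simplified.
PII_PATTERNS = {
    "id_number": re.compile(r"\b\d{17}[\dXx]\b"),        # 18-digit resident ID (simplified)
    "mobile":    re.compile(r"\b1\d{10}\b"),             # 11-digit mainland mobile number
    "email":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Hypothetical denylist of internal project names / trade-secret markers.
TRADE_SECRET_TERMS = {"project-hongqi", "q3-pricing-model"}


def sanitize_prompt(text: str) -> str:
    """Redact obvious personal identifiers before a prompt leaves the company."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}-redacted]", text)
    return text


def gate_outbound(text: str, endpoint_region: str, assessment_passed: bool) -> str:
    """Screen a prompt before forwarding it to an AIGC service endpoint."""
    lowered = text.lower()
    if any(term in lowered for term in TRADE_SECRET_TERMS):
        raise ValueError("prompt appears to contain internal trade-secret material")
    if endpoint_region != "cn" and not assessment_passed:
        raise ValueError("cross-border transfer blocked: security assessment not completed")
    return sanitize_prompt(text)


if __name__ == "__main__":
    # Example: a domestic endpoint, so only redaction applies.
    print(gate_outbound("Reach me at 13812345678 about the launch",
                        endpoint_region="cn", assessment_passed=False))
```

The point of the sketch is architectural rather than legal: redaction and transfer gating happen before data leaves the company's boundary, so any downstream retention or iterative training by the service provider never sees raw identifiers or proprietary material.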


(II) Algorithm Compliance

For the regulation of AIGC algorithms, the Cyberspace Administration of China, Ministry of Industry and Information Technology, and Ministry of Public Security jointly issued the "Deepfake Regulation" on November 25, 2022 (implemented on January 10, 2023), providing targeted compliance guidelines for deepfake technology. The released "draft for comments" also proposes record-filing requirements for algorithms.

Article 24 of the "Recommendation Regulation" (the "Regulations on Algorithmic Recommendation Management for Internet Information Services") states that "providers of algorithmic recommendation services with public opinion attributes or social mobilization capabilities shall, within ten working days from the date of providing the service, fill in the provider's name, service form, application field, algorithm type, algorithm self-assessment report, content intended for public disclosure, and other information through the internet information service algorithm record-filing system, and complete the record-filing procedures."

Therefore, providers of generative AI services should complete the algorithm record-filing procedures in accordance with the above provisions. In addition, they should comply with information service norms, protect user rights, and establish sound internal mechanisms (such as review of algorithm mechanisms, science and technology ethics review, anti-telecom and online fraud measures, log retention, emergency response management, and rumor-refutation mechanisms).
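Of the mechanisms listed above, log retention is the most directly implementable in code. Below is a minimal sketch assuming a hypothetical SQLite store for user-interaction logs with a configurable minimum retention window; the 183-day figure reflects the common reading that relevant network logs must be kept for at least six months, but the exact period applicable to a given service should be confirmed against the relevant rules.

```python
import sqlite3
import time

RETENTION_DAYS = 183  # assumption: at least six months; confirm against the applicable rules


def init_db(path: str = "interaction_logs.db") -> sqlite3.Connection:
    """Create (if needed) and open the interaction-log store."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS logs (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               ts REAL NOT NULL,          -- Unix timestamp of the interaction
               user_id TEXT NOT NULL,     -- pseudonymous user identifier
               prompt TEXT NOT NULL,
               response TEXT NOT NULL
           )"""
    )
    return conn


def record(conn: sqlite3.Connection, user_id: str, prompt: str, response: str) -> None:
    """Append one generation event; retained logs support audits and complaint handling."""
    conn.execute(
        "INSERT INTO logs (ts, user_id, prompt, response) VALUES (?, ?, ?, ?)",
        (time.time(), user_id, prompt, response),
    )
    conn.commit()


def purge_expired(conn: sqlite3.Connection) -> int:
    """Delete only records older than the retention window, never newer ones."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    cur = conn.execute("DELETE FROM logs WHERE ts < ?", (cutoff,))
    conn.commit()
    return cur.rowcount
```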


(III) Content Compliance

Intellectual Property Ownership and Infringement Issues: Apart from the potential infringement risks during AIGC's front-end data collection and usage stages, a theoretical question remains that warrants further research and practical testing: whether the generated output can be recognized as a copyright-protected work, and how ownership of the relevant rights should be determined.

Content Review: AI-generated online content must comply with the "Regulations on the Ecological Governance of Internet Information Content" and the "Regulations on the Administration of Internet Audio and Video Information Services." Content producers and content service platforms bear obligations to manage online content, including not creating, copying, or publishing illegal information, such as content that endangers national security, leaks state secrets, harms national honor and interests, incites ethnic hatred or discrimination, undermines ethnic unity, spreads rumors or disrupts economic and social order, or disseminates obscenity, pornography, gambling, violence, homicide, terror, or incitement to crime.
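In practice, content review is typically a layered pipeline applied before AI-generated text is published: a fast rule-based screen, followed by a classifier and, for borderline cases, human review. The sketch below shows only the first layer; the category names and keywords are illustrative placeholders, and the authoritative scope of prohibited content is defined by the regulations cited above, not by any keyword list.

```python
from dataclasses import dataclass

# Illustrative categories and phrases only; the authoritative list of prohibited
# content is defined by the regulations cited above, not by this denylist.
DENYLIST = {
    "gambling": ["casino signup bonus"],
    "violence": ["how to build a weapon"],
}


@dataclass
class ReviewResult:
    allowed: bool
    reason: str = ""


def pre_publication_check(text: str) -> ReviewResult:
    """First-pass rule-based screening of AI-generated text before publication."""
    lowered = text.lower()
    for category, phrases in DENYLIST.items():
        if any(phrase in lowered for phrase in phrases):
            return ReviewResult(False, f"matched denylist category: {category}")
    # A trained classifier and a human-review queue would follow in a real pipeline;
    # borderline items should be escalated rather than auto-published.
    return ReviewResult(True)


if __name__ == "__main__":
    print(pre_publication_check("A short poem about spring rain"))
```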


(IV) Licensing Compliance

In China, the provision of AIGC services falls under the Internet industry classification. As a prerequisite, companies must apply for and hold the appropriate value-added telecommunications business license and operating permit, and comply with the relevant national laws, regulations, and provisions. In day-to-day operations, companies should establish the corresponding management systems and processes, improve internal compliance management, and strengthen employee compliance awareness and training, so that the rules governing value-added telecommunications services are upheld across the company.

 

(V) Network Compliance

Network compliance refers to the requirement that AIGC companies adhere to national regulations governing the use of network and data-transmission services. If a company establishes or leases dedicated lines (including virtual private networks, VPNs) to provide proxy services that let users access overseas AIGC services, the risk of non-compliance is high. In January 2017, the Ministry of Industry and Information Technology issued the "Notice on Cleaning and Standardizing the Internet Network Access Service Market," which explicitly states that "without the approval of the telecommunications regulatory authority, companies may not independently establish or lease dedicated lines (including virtual private networks, VPNs) or other channels to conduct cross-border business activities. For international dedicated lines leased to users, basic telecommunications companies should establish centralized user records and clearly inform users that such lines are only for internal office use and must not be used to connect domestic and overseas data centers or business platforms for telecommunications business activities."

In practice, foreign trade companies and multinational enterprises that need cross-border networking for their own use can lease services from telecommunications operators legally authorized to establish international communication entry-exit bureaus. Note, however, that VPN services obtained from authorized basic telecommunications service providers may only be used internally within the company.


-END-
