China’s cyberspace regulator has released draft rules for public consultation, requiring AI services that simulate human-like personalities and communication to prioritize user safety, transparency, and ethical standards. Providers must inform users they are interacting with AI and intervene if users show signs of addiction or emotional distress.
China has issued draft regulations aimed at overseeing artificial intelligence systems that mimic human traits and interactions, mandating transparency, safety, and ethical compliance for all providers offering such services to the public.
In a landmark move, the Cyberspace Administration of China has proposed new draft rules to regulate artificial intelligence systems that simulate human traits, thinking patterns, and communication styles. The regulations are designed to address the growing risks associated with consumer-facing AI, particularly systems that form emotional connections with users. The draft, open for public comment until January 25, 2025, signals Beijing’s intent to shape the rapid expansion of AI technologies while safeguarding user well-being and national security.
Key Highlights
User Transparency Mandate
Providers must clearly inform users at login and periodically during use that they are interacting with AI, not a human. This rule applies to all AI systems that mimic human interaction, including digital humans and emotional companions.
Safety and Ethical Responsibility
Service providers are required to ensure safety throughout the product lifecycle, including algorithm review, data security, and personal information protection. They must also implement mechanisms to assess users’ emotional states and intervene if users show signs of addiction or extreme emotional distress.
Content and Conduct Boundaries
AI services must not generate content that endangers national security, spreads misinformation, or promotes violence, obscenity, or any activity contrary to core socialist values. The draft also prohibits using AI to impersonate real individuals or to disseminate unapproved content.
Regulatory Scope and Enforcement
The rules apply to all AI products and services available to the public in China that present human-like personality traits, thinking patterns, and communication styles. Non-compliance may result in severe penalties, including service suspension or criminal liability.
Sources: Bloomberg, Reuters, The Times of India, The Bridge Chronicle, China Law Translate, The Legal Wire, Geopolitechs, Forklog, The Decoder, The AI Insider