
China Regulates AI Services That Mimic Humans: Why Now?


China’s cyberspace regulator has released draft rules for artificial intelligence services that imitate human personalities and offer emotional interaction, seeking to support the sector’s development while safeguarding national security, privacy and mental health.

Public Consultation on New AI Companion Rules

On Dec 27, the Cyberspace Administration of China issued a draft set of interim measures for managing what it calls anthropomorphic interactive services and released it for public comment until Jan 25, 2026.

The draft is presented as a framework to guide the rapid rollout of virtual companions and similar services, with the stated aim of encouraging innovation while preventing serious social and security risks. It ties the new regime to existing laws on data security, personal information protection and online content management.

The proposal covers products and services that use artificial intelligence to simulate human character traits, thought patterns and communication styles, and that engage users through text, images, audio or video in what is framed as emotional interaction.

Regulators say providers will carry the main responsibility for keeping these systems safe and orderly, and will need internal procedures for algorithm and ethics review, content checks, cyber and data security, protection of personal information and plans for dealing with major incidents and online fraud.

Protecting Users from Harmful Content and AI Addiction

According to the draft, content generated by these services would be barred from endangering national security, undermining national unity or state interests, spreading rumors that disrupt economic or social order, or promoting pornography, gambling, violence or crime. Insults, defamation and other violations of individuals’ lawful rights would also be prohibited.

The draft places strong emphasis on psychological and emotional risks. Providers are told not to design products whose goals include replacing real-world social interaction, controlling users’ psychology or fostering addictive dependence. They must be able to assess users’ emotional state and degree of reliance and step in when they detect extreme emotions or signs of excessive use.

When conversations suggest a serious threat to a user’s life, health or property, the operator should use preset response templates that offer reassurance and encourage the person to seek help, and should provide information about professional support channels. In clear cases of suicide or self-harm intent, the draft requires that a human take over the interaction and that guardians or emergency contacts be reached where possible.

Extra Safeguards for Minors, Elderly Users and Personal Data

According to the draft, companies would need to offer a distinct minors mode with options such as usage time limits and periodic real-world reminders, as well as tools that let guardians block certain roles, review summary records of use and prevent spending inside the service. Services that offer emotional companionship to minors would require explicit consent from a guardian.

Providers would also have to be able to identify suspected minors and, while protecting their privacy, switch them into minors mode, with channels for appeal. Guardians would also be able to request deletion of a child’s past interaction records.

Elderly users are singled out as another priority group. Providers are encouraged to help older people set emergency contacts and must notify those contacts if they spot situations that may threaten an older person’s life, health or financial safety. The draft bans services that imitate an elderly user’s relatives or specific close relations.

On data handling, operators would be required to encrypt interaction data, control access and keep network logs. They would not be allowed to pass users’ interaction records to third parties without a legal basis or explicit consent, and data collected in minors mode could only be shared with separate approval from a guardian.

Furthermore, training data for these systems must reflect core values and traditional Chinese culture. Providers are instructed to clean and label data sets, improve diversity, guard against poisoning and tampering, and evaluate the safety of any synthetic data used to train or fine-tune models.

Services would also have to make clear that users are interacting with an artificial intelligence system rather than a person. The draft calls for pop-up reminders when people first use or log back into a service and when providers detect signs of over-dependence or addiction, as well as prompts to pause use if a person has been continuously engaged for more than two hours.

Safety Reviews and Platform Responsibilities

Companies launching anthropomorphic interactive features, introducing major technical changes or reaching at least one million registered users or one hundred thousand monthly active users would have to carry out safety assessments and submit reports to provincial level regulators, according to the draft.

App stores and other distribution platforms would need to verify that such applications have completed required assessments and filings, and have the ability to refuse listing, issue warnings, suspend service or remove apps that break the rules.

Penalties and Wider Regulatory Push

Violations would be punished under existing laws and regulations, while in areas not explicitly covered, regulators could issue warnings or public notices, order corrections within a set period, or require companies to suspend related services in serious cases or if they refuse to comply.

The proposal follows earlier Chinese rules for recommendation algorithms and generative artificial intelligence services and comes as regulators around the world look more closely at the rapid rise of chatbots and virtual companions that blur the boundaries between machine and human contact.

Why Now?

This move comes as AI tools are increasingly used to scam people, steal data and harass users, and as virtual companions raise fresh worries about emotional harm and addiction.

Local and regional outlets have reported a string of AI-related incidents: deepfake video-call scams in mainland China and Hong Kong in which victims wired millions of yuan and more than HK$200 million to fraudsters, Chinese reports of young people in China and Taiwan turning to chatbots for cheap, discreet mental health support, and overseas cases in which teenagers took their own lives after forming intense emotional bonds with AI companions.

The draft is expected to put more responsibility on providers, detailing how they must keep services safe, protect user data and step in when their systems are misused or cause damage. It also makes clear that companies can be held accountable through warnings, legal orders and, in serious cases, suspension of related services.



Ebrahem is a Web3 journalist, trader, and content specialist with 9+ years of experience covering crypto, finance, and emerging tech. He previously worked as a lead journalist at Cointelegraph AR, where he reported on regulatory shifts, institutional adoption, and sector-defining events. Focused on bridging the gap between traditional finance and the digital economy, Ebrahem writes with a simple, clear, high-impact style that helps readers see the full picture without the noise.
