China’s latest proposal to tighten controls on artificial intelligence is being framed as a child-protection measure. In reality, it reads like a familiar expansion of state supervision—this time into the most intimate digital spaces people use for companionship, advice and emotional support.
Draft rules released over the weekend by the Cyberspace Administration of China (CAC) would impose sweeping obligations on AI developers, from limiting how long children can use chatbots to requiring guardian consent before offering so-called “emotional companionship” services. Operators would also be required to escalate conversations involving suicide or self-harm to human moderators and to alert guardians or emergency contacts.

The rules go well beyond child safety. AI systems would be barred from generating content deemed to threaten national security, harm China’s “national honour,” or undermine unity—language that echoes long-standing censorship frameworks applied to social media, news and entertainment platforms. The result would effectively extend China’s content controls into AI-driven conversations that are often private, personalised and difficult to monitor at scale.
Beijing says the measures are needed amid a rapid proliferation of chatbots, many of which have attracted millions of users seeking companionship, therapy or advice. But critics are likely to point out that the government’s concern is selective: while it encourages AI applications that promote local culture or support the elderly, it insists on strict oversight whenever AI risks becoming an unsupervised space for expression or emotional reliance.
The timing is also notable. Chinese AI firms have been gaining momentum, with DeepSeek attracting global attention earlier this year after topping app download charts. Startups such as Z.ai and Minimax, which together claim tens of millions of users, have announced plans to list publicly. Tighter regulation could reassure investors wary of reputational or political risk, but it could also prolong regulatory uncertainty in a sector that depends on scale and experimentation.
Globally, the debate over AI safety has intensified, particularly around how chatbots handle sensitive conversations. OpenAI chief executive Sam Altman has described responses to self-harm as among the company’s hardest challenges, and the firm has faced lawsuits and public scrutiny over its safeguards. Beijing appears keen to present itself as acting decisively where Western regulators have hesitated.
Yet China’s approach raises its own questions. Requiring human intervention and guardian notification in sensitive conversations may protect some users, but it also risks deterring people—especially young people—from seeking help at all. And by placing responsibility squarely on companies to police behaviour and ideology, the state avoids addressing broader social pressures that drive people to AI companionship in the first place.
The CAC has invited public feedback on the draft rules, a gesture that suggests openness but rarely alters outcomes in practice. Once finalised, the regulations would mark another step in China’s effort to domesticate fast-moving technologies before they challenge existing power structures.
As with previous crackdowns on tech platforms, the message is clear: innovation is welcome, but only on the state’s terms—and only so long as it remains predictable, controllable and politically safe.