China AI Emotion Regulation 2025: World's First Rules to Curb Psychological Influence
China has proposed the world's first regulations to limit AI's influence on human emotions. New rules mandate security checks for large platforms and human intervention for mental health risks.
AI might be your new favorite companion, but China is ensuring it doesn't pull on your heartstrings too hard. In a landmark move, Beijing has proposed rules to prevent AI from manipulating human emotions, marking a significant shift from policing content to protecting mental health.
China AI Emotion Regulation 2025: From Content to Mental Safety
The Cyberspace Administration of China (CAC) recently released draft regulations targeting "human-like interactive AI services." These measures apply to any public AI product that simulates personality or engages users emotionally through text, audio, or video. The goal is clear: prevent the technology from causing psychological harm.
- AI chatbots are barred from generating content that encourages suicide or self-harm, or that emotionally manipulates users.
- If a user makes a suicide threat, the provider must have a human take over the chat and notify guardians immediately (see the sketch after this list).
- Minors need explicit guardian consent and face strict usage time limits on emotional AI companions.
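To make the intervention rule concrete, here is a minimal, hypothetical sketch of how a provider might wire the human-takeover and guardian-notification requirements into a chat pipeline. Every name in it (the keyword check, the stubbed escalation and notification functions) is an illustrative assumption; the draft rules specify outcomes, not implementations.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class RiskLevel(Enum):
    NONE = auto()
    IMMINENT = auto()  # e.g. an explicit suicide threat


@dataclass
class Session:
    user_id: str
    guardian_contact: Optional[str] = None  # registered for minors


def detect_self_harm_risk(message: str) -> RiskLevel:
    """Toy keyword check standing in for a real risk classifier."""
    keywords = ("suicide", "kill myself", "end my life")
    text = message.lower()
    return RiskLevel.IMMINENT if any(k in text for k in keywords) else RiskLevel.NONE


def escalate_to_human(session: Session) -> None:
    """Hand the conversation to a human operator (stubbed for illustration)."""
    print(f"[escalation] routing session {session.user_id} to a human operator")


def notify_guardian(contact: str) -> None:
    """Alert the registered guardian (stubbed for illustration)."""
    print(f"[notice] alerting guardian at {contact}")


def handle_message(session: Session, message: str) -> str:
    # Draft requirement: on a suicide threat, a human takes over and
    # guardians are notified immediately.
    if detect_self_harm_risk(message) is RiskLevel.IMMINENT:
        escalate_to_human(session)
        if session.guardian_contact:
            notify_guardian(session.guardian_contact)
        return "A human counselor is joining this conversation."
    return "(normal AI companion reply)"
```

A production system would replace the keyword check with a trained risk classifier and route escalations through an on-call operator queue, but the regulatory logic stays the same: detect, hand off, notify.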
Security Thresholds for AI Giants
The rules introduce specific scale-based requirements. Any platform with over 1 million registered users or 100,000 monthly active users (MAU) must undergo mandatory security assessments. Furthermore, services must issue a reminder after 2 hours of continuous interaction to prevent over-reliance.
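In practice, the scale thresholds and the session-length reminder reduce to simple checks, as in the sketch below. The constants mirror the figures reported from the draft; the function names and data shapes are assumptions for illustration, not anything the regulation prescribes.

```python
from datetime import timedelta

# Thresholds as reported in the CAC draft.
REGISTERED_USER_THRESHOLD = 1_000_000
MAU_THRESHOLD = 100_000
CONTINUOUS_USE_LIMIT = timedelta(hours=2)


def requires_security_assessment(registered_users: int, monthly_active_users: int) -> bool:
    """Crossing either scale threshold triggers a mandatory assessment."""
    return (registered_users > REGISTERED_USER_THRESHOLD
            or monthly_active_users > MAU_THRESHOLD)


def needs_overuse_reminder(continuous_session: timedelta) -> bool:
    """A reminder is due after two hours of continuous interaction."""
    return continuous_session >= CONTINUOUS_USE_LIMIT


# Example: a platform at Minimax's reported scale (~20M MAU) clears the bar.
assert requires_security_assessment(registered_users=0, monthly_active_users=20_000_000)
assert needs_overuse_reminder(timedelta(hours=2, minutes=5))
```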
This regulatory hammer falls just as Chinese AI unicorns Minimax and Z.ai (Zhipu) prepare for their IPOs in Hong Kong. Minimax, which boasts over 20 million MAUs on its emotional chat apps, now faces a complex compliance roadmap before hitting the public market.