
South Korea KMCC X Grok AI Safety Measures 2026: Cracking Down on Deepfakes


The KMCC has asked X to implement safety measures for its AI model, Grok, to protect minors from deepfake sexual content, the latest step in South Korea's tightening AI regulations.

Elon Musk's AI ambitions are hitting a regulatory wall in Seoul. According to Reuters and Yonhap on January 14, 2026, the Korea Media and Communications Commission (KMCC) has formally asked X to implement robust measures to protect minors from sexual content generated by its AI model, Grok.

The KMCC's Demands on X and Grok

The watchdog's request stems from growing alarm over deepfake sexual content created with advanced AI platforms. The KMCC emphasized that X must prevent potential illegal activity on Grok and submit a plan to manage or restrict teenagers' access to harmful material. Under South Korean law, social media operators are required to designate a minor-protection officer and report annually on their safety efforts.

KMCC Chairperson Kim Jong-cheol stated that while they support the sound development of new technologies, they won't overlook negative side effects. He noted that creating or circulating non-consensual sexual deepfakes is a criminal offense. The commission plans to revamp policies to ensure AI providers take full responsibility for protecting young users from exploitation.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.
