OpenAI Reports 80x Spike in Child Exploitation Flags in H1 2025, Highlighting AI's Double-Edged Sword
OpenAI reported an 80-fold increase in child exploitation reports sent to NCMEC in the first half of 2025. The spike may reflect improved AI detection rather than just a rise in illegal activity.
A staggering new statistic reveals the growing challenge of policing AI platforms. OpenAI sent 80 times as many child exploitation incident reports to a national watchdog in the first half of 2025 as it did during the same period in 2024. The disclosure, from a recent company update, puts a sharp focus on the escalating issue of platform safety in the age of generative AI.
What's Behind the 80x Surge?
According to OpenAI, the reports were filed with the National Center for Missing & Exploited Children (NCMEC). U.S. law requires companies to report apparent child sexual abuse material (CSAM) and other forms of child exploitation to NCMEC's CyberTipline, which acts as a centralized clearinghouse. NCMEC then reviews the tips and forwards them to the appropriate law enforcement agencies for investigation.
Better Detection or a Growing Threat?
A spike in reports doesn't necessarily mean there's an equivalent rise in nefarious activity, as statistics related to NCMEC reports can be nuanced. An increase can also indicate that a platform's automated moderation has improved or that its internal criteria for making a report have changed. In other words, OpenAI might not be hosting more illegal content—it might just be getting much better at finding it.