OpenAI Reports 80-Fold Spike in Child Exploitation Flags Amid Explosive User Growth
OpenAI's reports of child exploitation to NCMEC surged 80x in the first half of 2025. The company cites user growth and better detection, but the spike highlights the immense content moderation and safety challenges facing the rapidly scaling AI industry.
OpenAI sent 80 times as many child exploitation incident reports to a key U.S. clearinghouse during the first half of 2025 as it did in the same period a year earlier, a dramatic increase the company attributes to platform growth and enhanced detection.
According to a recent company update, OpenAI filed 75,027 reports with the National Center for Missing & Exploited Children (NCMEC) CyberTipline between January and June 2025. That number is a staggering leap from the 947 reports it filed in the first half of 2024. U.S. law mandates that companies report apparent child sexual abuse material (CSAM) to the NCMEC, which then forwards credible reports to law enforcement for investigation.
An OpenAI spokesperson, Gaby Raila, said in a statement that the company made investments late in 2024 “to increase [its] capacity to review and action reports in order to keep pace with current and future user growth.” Raila also cited “the introduction of more product surfaces that allowed image uploads and the growing popularity of our products” as contributing factors. This aligns with an August announcement from ChatGPT head Nick Turley, who said the app’s weekly active users had quadrupled year-over-year.
A spike in reports doesn't automatically mean a spike in illegal activity; it can also reflect changes in a platform's automated moderation or more stringent reporting criteria. In the first half of 2024, OpenAI's 947 reports covered about 3,252 pieces of content; in the first half of 2025, its 75,027 reports corresponded to 74,559 pieces of content. Both metrics rose sharply, though the 2025 reports averaged roughly one piece of content each, versus more than three the year before.
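The figures above can be sanity-checked with simple arithmetic (a minimal sketch using only the numbers cited in the article; the variable names are my own):

```python
# NCMEC CyberTipline figures cited in the article
reports_h1_2024, content_h1_2024 = 947, 3_252
reports_h1_2025, content_h1_2025 = 75_027, 74_559

# Year-over-year multiplier on report volume (~79x, reported as "80-fold")
report_multiplier = reports_h1_2025 / reports_h1_2024

# Pieces of content per report in each period
per_report_2024 = content_h1_2024 / reports_h1_2024  # ~3.43 pieces per report
per_report_2025 = content_h1_2025 / reports_h1_2025  # ~0.99 pieces per report

print(f"{report_multiplier:.1f}x more reports")
print(f"{per_report_2024:.2f} vs {per_report_2025:.2f} pieces per report")
```

The ratio works out to about 79.2x, which the company and coverage round to "80-fold."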
This trend is not isolated to OpenAI. NCMEC has observed a broader surge in reports involving generative AI. The center’s own analysis found a 1,325 percent increase in such reports between 2023 and 2024, highlighting a growing challenge across the industry.
OpenAI’s update arrives as the company and its competitors face intense scrutiny over child safety. This past summer, 44 state attorneys general issued a joint letter to major AI firms, warning they would “use every facet of our authority to protect children.” Both OpenAI and Character.AI are also facing lawsuits from families who allege the chatbots contributed to their children’s deaths. This fall, the U.S. Senate Judiciary Committee held a hearing on the harms of AI, and the Federal Trade Commission launched a market study into AI companion bots, which included questions about mitigating risks to children.
In response, OpenAI has rolled out new safety tools. In September, it launched parental controls for ChatGPT, allowing parents to link accounts with their teens and disable features like image generation and voice mode. Following negotiations with the California Department of Justice, OpenAI agreed in October to continue measures to mitigate risks to teens. The next month, it released a “Teen Safety Blueprint,” reaffirming its commitment to detecting and reporting CSAM.
**PRISM Insight:** This staggering increase isn't just a moderation challenge; it's a new financial and ethical stress test for the entire generative AI industry. As models become more powerful and multimodal, the cost of policing user-generated content at scale could become a defining factor in the race for AGI, forcing a difficult conversation about who bears the ultimate responsibility for AI-generated harm.