AI's Dark Side: Mainstream Chatbots Are Being Used to 'Undress' Women in Photos
Mainstream AI chatbots like Google's Gemini and OpenAI's ChatGPT are being used to create nonconsensual bikini deepfakes of women with simple prompts, bypassing safety features and raising urgent questions about AI ethics and corporate responsibility.
Users of popular AI chatbots, including Google's Gemini and OpenAI's ChatGPT, are generating nonconsensual bikini deepfakes from photos of fully clothed women. According to a WIRED investigation, these users are bypassing built-in safety guardrails with simple, plain-English prompts and are even sharing tips on how to do so in online communities.
From Saris to Bikinis: The Underground 'Jailbreak' Communities
The issue was starkly illustrated in a now-deleted Reddit post where a user uploaded a photo of a woman in a traditional Indian sari, asking for someone to “remove” her clothes and “put a bikini” on her. Another user fulfilled the request with a deepfake image. After WIRED notified Reddit, the company's safety team removed both the request and the resulting image.
“Reddit's sitewide rules prohibit nonconsensual intimate media,” a spokesperson said. The subreddit where the discussion took place, r/ChatGPTJailbreak, had over 75,000 followers before Reddit banned it. As generative AI tools proliferate, so does the harassment of women through nonconsensual deepfake imagery.
Corporate Guardrails vs. On-the-Ground Reality
With the notable exception of xAI's Grok, most mainstream chatbots have guardrails to prevent the generation of NSFW images. Yet users continue to find workarounds. In its own limited tests, WIRED confirmed it was possible to transform images of fully clothed women into bikini deepfakes on both Gemini and ChatGPT using basic prompts.
A Google spokesperson stated the company has "clear policies that prohibit the use of [its] AI tools to generate sexually explicit content." Meanwhile, an OpenAI spokesperson acknowledged the company loosened some guardrails this year for adult bodies in nonsexual situations but stressed that its policy prohibits altering someone's likeness without consent, with penalties including account bans.
The Accountability Question
Corynne McSherry, a legal director at the Electronic Frontier Foundation (EFF), views “abusively sexualized images” as one of the core risks of AI image generators. She argues that the focus must be on how the tools are used, and on “holding people and corporations accountable” when harm is caused.
The dynamic between AI developers implementing safety features and users dedicated to subverting them has become a reactive cat-and-mouse game. This constant 'jailbreaking' highlights a fundamental truth: technical guardrails alone are insufficient. The incident underscores a growing trust deficit and puts pressure on corporations to move beyond policy statements toward more robust, proactive enforcement and accountability.
本コンテンツはAIが原文記事を基に要約・分析したものです。正確性に努めていますが、誤りがある可能性があります。原文の確認をお勧めします。
関連記事
グーグルGeminiやOpenAIのChatGPTといった生成AIを悪用し、同意なく女性の写真をビキニ姿のディープフェイクに加工する問題が深刻化。レディットでの事例や各社の対応、そして技術倫理の課題を解説します。
Google DeepMindのCEOが「恥ずかしい」と評した、OpenAIのGPT-5による「数学の未解決問題解決」騒動。AI業界の誇大広告(ハイプ)の実態と、真の技術的進歩を見極めるための視点を解説します。
OpenAIが、ユーザーの1年間のChatGPT利用状況を可視化する新機能「Your Year with ChatGPT」を発表。統計データやユニークな「称号」、AI生成の詩で2025年を振り返ります。利用方法や対象地域を解説。
日本ではクマの出没が社会問題化する中、AIを活用した早期警戒システム「Bアラート」が導入されています。リアルタイム検知と自動通知で、住民の安全をどう守るのか、その仕組みと背景を分かりやすく解説します。