a16z-Backed Platform Sells Celebrity Deepfake Tools
TechAI Analysis

4 min read

Stanford study reveals Andreessen Horowitz-funded AI marketplace facilitates creation of non-consensual deepfakes, with 90% targeting women celebrities.

A $5 million investment from Silicon Valley's prestigious Andreessen Horowitz helped fund an AI platform where custom instruction files for creating celebrity deepfakes are openly bought and sold—with 90% of requests targeting women.

The Marketplace Behind the Masks

Civitai presents itself as a legitimate marketplace for AI-generated content. But a new Stanford and Indiana University study reveals a darker reality lurking beneath the surface. Analyzing user "bounties"—paid requests for custom content—posted between mid-2023 and late 2024, researchers found that the vast majority of deepfake requests specifically targeted real women.

The platform doesn't just trade finished deepfake images. It sells something more insidious: lightweight fine-tuning files called LoRAs (low-rank adaptations) that can teach mainstream AI models like Stable Diffusion to generate content they weren't originally trained for. Think of them as custom recipes for creating non-consensual intimate imagery. Fully 86% of deepfake requests were specifically for these LoRA files.

Users brazenly requested "high quality" models of influencers like Charli D'Amelio and singer Gracie Abrams, often linking directly to their social media profiles for image scraping. Some specified models that could generate full-body images, accurately capture tattoos, or allow hair color changes. The going rate? Between $0.50 and $5 per request—and 92% of bounties were successfully fulfilled.

Teaching the Dark Arts

Civitai doesn't just provide the infrastructure—it actively educates users on exploitation techniques. The platform hosts detailed tutorials on using external tools to manipulate AI outputs, including explicit guides on generating pornographic content. As researcher Matthew DeVerna from Stanford's Cyber Policy Center notes: "Not only does Civitai provide the infrastructure that facilitates these issues; they also explicitly teach their users how to utilize them."

The company's approach seems calculated. In May 2024, Civitai announced a ban on all deepfake content—but countless pre-ban requests remain live, and winning submissions are still available for purchase. When credit card processors cut ties due to non-consensual content concerns, the platform simply pivoted to gift cards and cryptocurrency payments.

Silicon Valley's Selective Ethics

Andreessen Horowitz invested in Civitai in November 2023, with CEO Justin Maier describing his vision of making AI model sharing "more and more approachable to more and more people." The irony is stark: in 2024, Civitai joined OpenAI and Anthropic in adopting principles against AI-generated child sexual abuse material, a year after a Stanford Internet Observatory report linked the platform to exactly that kind of content.

Yet adult deepfakes receive vastly different treatment. "They are not afraid enough of it. They are overly tolerant of it," says University of Washington law professor Ryan Calo. "Neither law enforcement nor civil courts adequately protect against it. It is night and day."

The double standard extends beyond Civitai. MIT Technology Review previously reported that another a16z portfolio company, Botify AI, hosted AI companions resembling real actors under 18, engaging in sexually charged conversations and describing age-of-consent laws as "arbitrary."

The Venture Capital Blindspot

While tech companies enjoy broad legal protections under Section 230, those protections aren't limitless. "You cannot knowingly facilitate illegal transactions on your website," Calo explains. Yet venture capitalists seem remarkably comfortable funding platforms that skirt these ethical and legal boundaries.

The Civitai case reveals a troubling pattern: prestigious VCs chase returns while outsourcing ethical oversight to reactive content moderation. The platform's current approach—tagging deepfake bounties and offering manual takedown requests—places the burden on victims rather than preventing harm proactively.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.
