The Dark Side of AI: How VCs Are Funding Deepfake Markets
Stanford research reveals that an AI marketplace backed by Andreessen Horowitz enables custom deepfake creation, with 90% of such requests targeting real women.
90% of deepfake requests target women. This isn't happening in some dark corner of the internet—it's the reality inside an AI marketplace backed by one of Silicon Valley's most prestigious venture capital firms.
When Venture Capital Meets Digital Exploitation
Civitai markets itself as a legitimate platform for AI-generated content, complete with backing from Andreessen Horowitz, the venture capital powerhouse behind Facebook, Twitter, and countless other tech giants. On the surface, it appears to be a creative marketplace where users share AI models and tools. Beneath that veneer lies something far more troubling.
Researchers from Stanford and Indiana University spent 18 months analyzing the platform's "bounty" system, where users request custom AI instruction files. While most requests were for animated content, a significant portion targeted real people—and 90% of these deepfake requests focused on women.
These weren't just casual requests. Many were specifically designed to circumvent the platform's own ban on pornographic content, creating what researchers describe as "bespoke" instruction files for generating explicit images of real women, including celebrities and private individuals.
The Economics of Digital Violation
What makes this particularly insidious is the commercialization aspect. Users aren't just creating these tools for personal use—they're selling them. The platform has essentially created a marketplace for digital violation, complete with customer reviews, ratings, and profit margins.
The involvement of Andreessen Horowitz raises uncomfortable questions about due diligence in venture capital. The firm, which has invested in companies worth hundreds of billions of dollars, appears to have overlooked—or ignored—how its investment was being used. This isn't just about one platform; it's about the broader responsibility of investors in the AI ecosystem.
Consider the ripple effects: When prestigious VCs back platforms that enable harmful content, they're not just providing capital—they're providing legitimacy. Other entrepreneurs see this as validation that such business models are acceptable, potentially spawning copycats across the industry.
The Regulation Paradox
Civitai technically prohibits pornographic content, but the research shows how easily these rules are circumvented. Users employ coded language, indirect requests, and technical workarounds to obtain what they want. It's a perfect example of how platform policies often fail to match platform realities.
This highlights a broader challenge in AI governance: How do you regulate tools that can be used for both legitimate and harmful purposes? The same technology that powers Civitai's problematic content could also revolutionize film production, education, or digital art. Banning the technology entirely would stifle innovation; allowing it unchecked enables abuse.
European regulators are already grappling with these questions through the AI Act, while several U.S. states have introduced legislation targeting non-consensual deepfakes. But enforcement remains challenging, especially when platforms operate across international boundaries.
The Human Cost of Innovation
Behind every deepfake request is a real person—often a woman—whose image and identity are being commodified without consent. The psychological impact on victims can be devastating, affecting their personal relationships, professional opportunities, and mental health.
Yet the current system treats this as a technical problem rather than a human rights issue. Platform terms of service, content moderation algorithms, and venture capital due diligence processes all focus on scale and efficiency rather than individual harm.