Wisconsin Said No to Porn Age Checks. Here's Why That's a Harder Call Than It Sounds.
Wisconsin Governor Tony Evers vetoed an age verification bill for adult sites, citing privacy concerns. With 25+ states going the other way, the debate cuts to the heart of online freedom vs. child protection.
Show your government ID to watch porn — or don't watch at all. More than 25 states have already made that the law. Last week, Wisconsin decided it wasn't willing to go there.
What Happened
Wisconsin Governor Tony Evers vetoed AB 105, a bill that would have required any website with more than one-third of its content classified as harmful to minors to implement a "reasonable" age verification method — think government-issued ID — before granting access.
In his letter to the state assembly, Evers didn't mince words: the bill "imposes an intrusive burden on adults who are trying to access constitutionally protected materials." He acknowledged the intent behind the legislation but rejected the mechanism as the wrong tool for the job.
It's a notable break from the national trend — and one that forces a genuinely uncomfortable question about what online child protection actually looks like in practice.
The Law That Backfired Elsewhere
The age verification wave started with Louisiana, whose law took effect in early 2023, and it spread fast. Texas, Florida, and a string of other states followed. The logic was straightforward: if a liquor store has to card you, why shouldn't a porn site?
But the real-world results complicated the narrative. Pornhub — the world's largest adult platform — responded to these laws not by complying, but by blocking access entirely in states that required verification. Users didn't stop looking for content. They migrated to smaller, less regulated sites with no age checks at all, or fired up a VPN. The law designed to protect kids may have pushed them toward darker, less moderated corners of the internet.
This is the inconvenient data point hovering over every state legislature considering similar bills. Evers' veto implicitly acknowledges it: a law that exists on paper isn't the same as protection that works.
The Privacy Problem Nobody Wants to Talk About
Here's what the child-safety framing often leaves out: age verification requires adults to hand over identifying information to private companies. That data — who accessed what, when, from where — sits on servers. Servers get hacked. Data gets sold. Governments issue subpoenas.
The ACLU has sued multiple states over these laws, arguing they create a chilling effect on constitutionally protected speech. If people know their identity is logged every time they access legal adult content, many will simply stop — not because the content is illegal, but because the surveillance cost feels too high. That, civil liberties advocates argue, is a First Amendment problem dressed up as a safety measure.
Privacy researchers have flagged an even longer-term concern: age verification infrastructure, once built, doesn't stay in one lane. The same systems that verify age for adult content can be repurposed. Today it's porn; tomorrow it could be political content, health information, or anything a future legislature decides requires a gatekeeper.
Two Sides, Both With a Point
Supporters of age verification laws aren't wrong that the status quo is broken. The American Academy of Pediatrics and a growing body of research link early, unrestricted exposure to pornography with distorted sexual expectations and relationship difficulties in adolescents. A 13-year-old with a smartphone has, functionally, unrestricted access to content that would have required significant effort to obtain a generation ago. That's a real problem.
Opponents aren't wrong either. The question isn't whether to protect kids — everyone agrees on that goal — but whether this particular mechanism achieves it without creating new harms. A law that drives users to less regulated platforms, while building a surveillance architecture for legal adult behavior, may be solving the visible problem while creating invisible ones.
Big Tech, meanwhile, has stayed conspicuously quiet on this debate. Meta and Google aren't directly targeted by these laws, but the industry is watching. If "harmful to minors" becomes an expansive legal standard, social media platforms, gaming sites, and news outlets with mature content could all find themselves next in line.