Tech Coalition Spends Up to $25K to Fight NY AI Bill That Just Became Law
A tech coalition spent up to $25,000 on an ad campaign against New York's landmark AI safety bill (the RAISE Act), but the bill was signed into law anyway. Here's what it means for the future of AI regulation.
A coalition of tech companies and academic institutions spent tens of thousands of dollars on an ad campaign opposing New York's landmark AI safety bill, only for a version of the bill to be signed into law days later. According to Meta's Ad Library, the campaign cost between $17,000 and $25,000 over the past month and may have reached more than two million people.
The closely watched bill, officially called the Responsible AI Safety and Education Act (RAISE Act), was recently signed by New York Governor Kathy Hochul. The law mandates that companies developing large-scale AI models—such as OpenAI, Anthropic, Meta, and Google—must establish and report on clear safety plans and transparency protocols.
The ad blitz represents a significant, public-facing effort to influence AI legislation. As reported by The Verge, the campaign ran on Meta's platforms and aimed to shape public opinion against the bill. Despite the financial investment and a reach of more than two million users, the effort ultimately failed to stop the bill from becoming law, a win for regulators in a key battleground state.