Flock's AI Camera Network Exposed: Over 60 Live Feeds Found Unsecured on the Web
A major security flaw has exposed live feeds from over 60 of Flock's AI-powered surveillance cameras on the open web, no password required. The discovery highlights the growing privacy risks of rapidly expanding AI surveillance networks used by law enforcement.
A significant security lapse has exposed the live feeds from more than 60 of Flock's AI-powered surveillance cameras, making them viewable on the open web without a username or password. The vulnerability was discovered by tech YouTuber Benn Jordan and first reported by 404 Media, raising serious questions about the security of a network used by thousands of law enforcement agencies across the United States.
No Password Required: The Scope of the Breach
According to the findings, the livestreams were accessible to anyone who had the direct web address, requiring no authentication to view real-time footage from each location. This wasn't a sophisticated hack, but rather a fundamental failure in securing the camera feeds. Flock, a technology company that provides AI-driven vehicle recognition, has rapidly expanded its powerful surveillance network, creating a vast web of interconnected cameras.
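The class of flaw described above, endpoints that serve content to any unauthenticated request, is straightforward to test for. The sketch below is a minimal, hypothetical illustration (it does not reflect Flock's actual endpoints or API): it sends a credential-free HTTP request to a URL and classifies the feed as open, protected, or unreachable based on the response.

```python
import urllib.request
import urllib.error

def check_feed_auth(url: str, timeout: float = 5.0) -> str:
    """Classify whether a feed URL demands authentication.

    Hypothetical check: returns "open" if the server answers 200
    with no credentials supplied, "protected" on 401/403, and
    "unreachable" for anything else.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "open" if resp.status == 200 else "unreachable"
    except urllib.error.HTTPError as e:
        # 401 Unauthorized / 403 Forbidden mean the server asked for credentials.
        return "protected" if e.code in (401, 403) else "unreachable"
    except (urllib.error.URLError, OSError):
        return "unreachable"
```

A properly secured feed should never return "open" to a request that carries no token, cookie, or password; the exposed cameras reportedly did.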
The company's reach was recently extended through a partnership with Ring, Amazon's smart doorbell company. This collaboration gives Flock customers the ability to request footage from users of Ring's Neighbors app, effectively blending a private surveillance network with public law enforcement tools. This breach underscores the potential dangers as these powerful networks grow.
The High Stakes of AI-Powered Surveillance
Flock markets its technology as a tool to help law enforcement solve crimes more efficiently. However, privacy advocates have long warned about the risks of creating such a widespread, privately operated surveillance infrastructure. An incident like this, where live feeds are left open for anyone to see, validates concerns that technical flaws can lead to direct and severe privacy violations for entire communities.
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.