This $1,199 'Anti-Eavesdropping' Orb Has Everyone Talking
Harvard grad's AI-powered microphone jammer promises privacy protection but faces fierce technical skepticism. Why the debate reveals more than the device itself.
$1,199 for a device the size of a softball that promises to silence every microphone in the room. That's the pitch from Deveillance, a startup founded by Harvard graduate Aida Baradari, whose Spectre I audio jammer went viral this week—but not for the reasons she expected.
The reaction split the internet in half. Privacy advocates hailed it as "cyberpunk resistance tech" against always-listening AI wearables. Tech experts called it physics-defying snake oil.
When Physics Meets Wishful Thinking
The Spectre I isn't your grandfather's audio jammer. Instead of drowning out sound with noise, Baradari claims it uses AI-generated cancellation signals specifically designed to fool automatic speech recognition systems. The device combines ultrasonic frequency emitters with machine learning to not just block recordings, but detect and log nearby microphones.
"People should have a choice over what they want to share, especially in conversations," Baradari explains. "If we can't converse anymore without feeling scared of saying something that's potentially taken out of context, then how are we going to build human connection?"
But engineer Dave Jones isn't buying it. "They simply cannot do this," he wrote to WIRED. "They are using the classic trick of using wording to imply that it will detect every type of microphone, when all they are probably doing is scanning for Bluetooth audio devices."
The Always-Listening Anxiety
The viral response reveals something deeper than skepticism about a single gadget. We're living through what researcher John Scott-Railton calls a "Ring-like moment"—a sudden awareness of how pervasive recording devices have become.
From Amazon's Bee AI bracelet to the Friend pendant, AI wearables promise to capture and analyze our every word. Meanwhile, ICE builds surveillance systems around social media and phones, and Ring's Super Bowl ad touting neighborhood camera networks sparked immediate backlash.
"People are kind of waking up to the idea that they may not have privacy at any given time," says musician and privacy advocate Benn Jordan.
The Technical Reality Check
Ultrasonic microphone jammers have been around for years, but they face fundamental physics constraints. Make them powerful enough to work, and they're too bulky to be portable. Make them small, and they lack the juice to disrupt microphones effectively.
Melissa Baese-Berk, a linguistics professor at the University of Chicago, points out another flaw: "There's so much variation in people's voices. It's not the case that there's a specific signal that's like the 'voice signal.'"
The claim that Spectre I can detect nearby microphones via radio frequency emissions drew particular skepticism. "If you could detect and recognize components via RF the way Spectre claims to, it would literally be transformative to technology," Jordan notes. "You'd be able to do radio astronomy in Manhattan."
Beyond the Hype: A Hunger for Control
Baradari acknowledges the criticism but remains undeterred. "I actually appreciate those comments because they're making me think and see more things as well," she says. The device won't ship until late 2026, giving her team time to address technical challenges.
Whether Spectre I works as advertised may be less important than what its viral moment represents. Cooper Quintin from the Electronic Frontier Foundation sees the bigger picture: "It is nice to see a company creating something to protect privacy instead of working on new and creative ways to extract data from us."
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.