US Border Patrol Pays $225K for Access to 60 Billion Scraped Faces
CBP contracts with Clearview AI for facial recognition access to billions of internet-scraped images, raising concerns about routine surveillance infrastructure and civil liberties
60 Billion Faces Under Digital Surveillance
US Customs and Border Protection (CBP) has signed a $225,000 contract with Clearview AI, granting the agency access to a facial recognition system built from more than 60 billion images scraped from the internet without consent.
This isn't just another tech procurement. The deal extends Clearview's reach into CBP's intelligence headquarters and National Targeting Center—units that conduct daily "tactical targeting" and "strategic counter-network analysis." Translation: facial recognition is becoming routine intelligence infrastructure, not an exceptional investigative tool.
The Scraping Machine
Clearview AI's business model sits at the heart of the controversy. The company harvests photos from public websites at massive scale, converting them into biometric templates without the knowledge or consent of the people photographed.
Think about it: every photo you've ever posted publicly, every image a friend tagged you in, every picture from a news article—all potentially feeding this surveillance apparatus.
Beyond the Border
The timing is telling. The Department of Homeland Security faces mounting scrutiny for deploying facial recognition in large-scale operations across US cities, far from any border. American citizens are getting swept up in these digital dragnets.
Last week, Senator Ed Markey introduced legislation to ban ICE and CBP from using facial recognition altogether, citing concerns that "biometric surveillance is being embedded without clear limits, transparency, or public consent."
When AI Gets It Wrong
Recent testing by the National Institute of Standards and Technology reveals a troubling reality. While facial recognition works well on "high-quality visa-like photos," it falters dramatically in real-world conditions. At actual border crossings, error rates often exceed 20 percent.
Here's the kicker: NIST found that when systems are configured to always return candidates for human review, searches for people not in the database will still generate "matches." In those cases, the results are 100 percent wrong—every single time.
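To see why this happens, consider how a rank-based identification search works: the system does not decide whether a person is in the database, it simply returns the closest-looking gallery entries for every query. The sketch below is a toy illustration of that mechanism, using made-up 3-dimensional embeddings and cosine similarity (real systems use high-dimensional templates produced by a neural network; the names and vectors here are purely hypothetical).

```python
import math

# Hypothetical "gallery" of enrolled face embeddings (toy 3-D vectors;
# real biometric templates are high-dimensional neural-network outputs).
gallery = {
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.1, 0.9, 0.0],
    "carol": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(probe, k=2):
    """Rank-based search: ALWAYS returns the top-k closest gallery
    entries, whether or not the probe is actually enrolled."""
    ranked = sorted(gallery.items(),
                    key=lambda kv: cosine(probe, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# A probe for someone who is NOT in the gallery still yields "matches".
# By construction, every candidate returned here is a false positive.
stranger = [0.5, 0.5, 0.5]
candidates = search(stranger)
```

Because the search is configured to always surface candidates for human review, the system has no way to say "no match exists" — which is exactly the failure mode NIST describes.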
The Accountability Gap
CBP's contract doesn't specify what kinds of photos agents will upload, whether searches may include US citizens, or how long data will be retained. The agency states its Traveler Verification System doesn't use "commercial sources or publicly available data"—but that leaves plenty of room for Clearview integration elsewhere in CBP's sprawling surveillance ecosystem.
Civil liberties advocates worry we're witnessing the normalization of mass biometric surveillance without democratic oversight or constitutional safeguards.
The technology exists. The contracts are signed. The only question left is whether we'll demand accountability before it's too late.