US Immigration Agents Use Face Recognition App That Can't Actually Verify Identity

DHS's Mobile Fortify app, used 100,000 times since launch, scans faces of immigrants, citizens, and protesters despite inability to confirm identities. Privacy concerns mount over biometric data collection.

100,000 times. That's how often US immigration agents have used Mobile Fortify, a facial recognition app deployed by the Department of Homeland Security, since its launch last spring. But here's the catch: the app can't actually verify anyone's identity.

Despite DHS repeatedly describing Mobile Fortify as a tool for "determining or verifying" identities during federal operations, records obtained by WIRED reveal the app only generates possible matches—not confirmations. It's a distinction that matters enormously when people's freedom hangs in the balance.
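As a rough illustration of what "possible matches" means in practice, here is a minimal sketch of a one-to-many face search. The gallery, names, threshold, and embeddings are all invented for illustration; this is not Mobile Fortify's code or any vendor's API. The point is that the output is a ranked list of candidates above a similarity cutoff: a set of leads, not a confirmation of anyone's identity.

```python
import numpy as np

# Hypothetical sketch of a one-to-many face search. All names, data, and the
# similarity threshold are invented; this is not DHS or vendor code.

def rank_candidates(probe, gallery, threshold=0.35, top_k=3):
    """Return up to top_k gallery entries whose cosine similarity to the
    probe embedding clears the threshold: a list of *possible* matches
    with scores, never a confirmed identity."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(name, cosine(probe, emb)) for name, emb in gallery.items()]
    hits = [(name, score) for name, score in scored if score >= threshold]
    return sorted(hits, key=lambda x: x[1], reverse=True)[:top_k]

# Example run with random vectors standing in for real face templates.
rng = np.random.default_rng(0)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(100)}
probe = rng.normal(size=128)
print(rank_candidates(probe, gallery))  # possibly empty, possibly several leads
```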

When "Maybe" Becomes "Probable Cause"

The app's limitations played out dramatically in an Oregon courtroom, where an agent testified about scanning a handcuffed woman twice and getting different results each time. The first scan suggested a woman named "Maria"—a match the agent rated as "a maybe." When agents called out "Maria, Maria" and got no response, they took another photo. The second result was merely "possible," the agent admitted.

"It's just an image, your honor," the agent testified. "You have to look at the eyes and the nose and the mouth and the lips." No confidence scores. No clear thresholds. Just human judgment applied to algorithmic suggestions.

"Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive identification," says Nathan Wessler, deputy director of the ACLU's Speech, Privacy, and Technology Project. "It makes mistakes, and it's only for generating leads."

Citizens and Protesters in the Crosshairs

Perhaps more troubling is who else gets scanned. Mobile Fortify has captured the faces not only of "targeted individuals" but also of US citizens and people observing or protesting enforcement activities. Agents have told citizens they were being recorded and that their faces would be added to databases without their consent.

Reports describe agents using accent, perceived ethnicity, or skin color as grounds to initiate encounters—then deploying face scanning as the next step. It's a pattern that transforms routine street encounters into biometric data collection operations, hundreds of miles from any border.

Senator Ed Markey warned this week that DHS has deployed an "arsenal of surveillance technologies" to monitor "both citizens and noncitizens alike," calling it "the stuff of nightmares." Internal directives reportedly instruct agents to collect images and personal information on protesters and bystanders.

A 15-Year Digital Shadow

The data collected through Mobile Fortify doesn't just disappear. It's stored in databases linked through the Automated Targeting System (ATS), where it can persist for up to 15 years—or longer if shared with other agencies.

There's also the mysterious "Fortify the Border Hotlist," a watch list fed by the app's data. The criteria for inclusion, the removal process, and whether US citizens can be added remain undisclosed. It's surveillance infrastructure being built in real time, with little transparency about its scope or safeguards.

While CBP claims photos of US citizens who opt out of biometric identification at ports of entry are deleted within a day, no such protections appear to exist for data collected during street encounters.

The Speed vs. Accuracy Trade-off

The app's design reveals a troubling priority: speed over accuracy. Agents are instructed to photograph subjects for facial recognition before attempting fingerprint matches—even though fingerprints are far more reliable for confirming identity. The sequence prioritizes "ease of collection over positive identification," as internal records put it.

When images are taken in uncontrolled conditions—different lighting, angles, or expressions—the mathematical templates can shift dramatically, reshuffling potential matches. Demanding real-time results requires smaller candidate pools, increasing the likelihood of false matches or missed identifications.
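To make that trade-off concrete, here is a synthetic sketch, again with invented templates, noise levels, and pool sizes rather than anything from DHS's system. It compares a search over a full gallery with one over a small, faster candidate pool: heavy capture noise can push the true identity down the full ranking, and a truncated pool that omits the person entirely will still return its best-scoring, necessarily wrong, candidates.

```python
import numpy as np

# Synthetic illustration only: invented 128-dimensional "templates", noise
# levels, and pool sizes. Not Mobile Fortify's algorithm or data.

rng = np.random.default_rng(1)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(2000)}
true_id = "person_1500"

# A probe captured in uncontrolled conditions: the enrolled template plus
# heavy noise standing in for lighting, angle, and expression changes.
probe = gallery[true_id] + rng.normal(scale=3.0, size=128)

def top_candidates(pool, probe, k=5):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(pool, key=lambda name: cosine(probe, pool[name]), reverse=True)[:k]

# Full gallery versus a small candidate pool that happens to omit the person.
full_ranking = top_candidates(gallery, probe)
small_ranking = top_candidates({n: gallery[n] for n in list(gallery)[:200]}, probe)

# With enough noise the true identity may not top the full ranking, and the
# small pool cannot contain it at all, yet both searches still return
# plausible-looking candidates.
print(true_id, full_ranking, small_ranking)
```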

