When AI Becomes the Government's Gatekeeper

The US Department of Health and Human Services is using Palantir AI to screen grants for DEI and gender ideology content. What happens when algorithms decide who gets federal funding?

$1 billion. That's how much the US government paid Palantir in Trump's first year back in office. But here's what wasn't in the press releases: much of that money went toward building AI systems that scan government documents for words like "female," "inclusion," and "transgender"—then flag them as potentially problematic.

The Secret Screener

Since March 2025, the Department of Health and Human Services has been quietly using Palantir's AI tools to audit grants, applications, and job descriptions for anything that might violate Trump's executive orders targeting "gender ideology" and diversity, equity, and inclusion (DEI) programs. The arrangement came to light only through a recently published inventory of HHS's AI use cases, not through any public announcement.

The screening happens within HHS's Administration for Children and Families, which oversees foster care and adoption systems. Palantir is the sole contractor tasked with identifying "position descriptions that may need adjustment" for compliance with the executive orders. Meanwhile, startup Credal AI—founded by two Palantir alumni—helps audit existing grants and new applications using what the inventory calls an "AI-based" review process.

The system works like this: AI scans documents and generates "initial flags and priorities for discussion." Human staffers then conduct a "final review" of anything the AI deems suspicious. It's currently deployed and actively screening applications across the agency.
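Neither HHS nor Palantir has described how the screening is actually built. Purely to illustrate the two-stage workflow the inventory describes (automated flags and priorities, followed by a human "final review"), here is a minimal sketch; the flagged terms, the priority rule, and every name in it are assumptions made for the example, not details of the real system.

```python
# Illustrative sketch only: a generic "flag, then human review" pipeline.
# The term list, priority rule, and names here are assumptions for the example,
# not details of the HHS/Palantir system, which has not been publicly described.
from dataclasses import dataclass, field


@dataclass
class Flag:
    document_id: str
    matched_terms: set = field(default_factory=set)
    priority: int = 0                 # crude priority: how many terms matched
    human_decision: str = "pending"   # filled in during the human "final review"


# Terms the article reports being flagged across agencies (list is illustrative).
FLAG_TERMS = {"female", "inclusion", "transgender"}


def initial_flags(documents: dict[str, str]) -> list[Flag]:
    """Stage 1: automated scan producing 'initial flags and priorities for discussion'."""
    flags = []
    for doc_id, text in documents.items():
        hits = FLAG_TERMS & set(text.lower().split())
        if hits:
            flags.append(Flag(doc_id, hits, priority=len(hits)))
    # Highest-priority documents go to the top of the reviewers' queue.
    return sorted(flags, key=lambda f: f.priority, reverse=True)


def final_review(flags: list[Flag], reviewer) -> list[Flag]:
    """Stage 2: a human reviewer records a decision for each flagged document."""
    for flag in flags:
        flag.human_decision = reviewer(flag)  # e.g. "approve", "revise", "deny"
    return flags


if __name__ == "__main__":
    docs = {
        "grant-001": "Improving inclusion in rural maternal health programs",
        "grant-002": "Bridge maintenance cost modeling for county roads",
    }
    for f in final_review(initial_flags(docs), reviewer=lambda f: "revise"):
        print(f.document_id, sorted(f.matched_terms), f.human_decision)
```

Even in a toy version like this, the term list and the priority rule are where the policy judgment lives. Whatever the real system does is surely more sophisticated, but the human reviewers still only see what the automated first stage surfaces.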

What makes this particularly striking is the secrecy. Neither Palantir nor HHS announced these specific uses. The $35 million HHS paid Palantir last year? The contract descriptions mention nothing about DEI or "gender ideology" screening. The $750,000 paid to Credal AI for its "Tech Enterprise Generative AI Platform"? Same silence.

The Ripple Effect

The consequences have been swift and sweeping. The National Science Foundation began flagging research containing terms like "female," "systemic," or "underrepresented." The CDC started retracting studies that mentioned "LGBT" or "nonbinary." The 988 Suicide & Crisis Lifeline removed its LGBTQ youth service line entirely.

By year's end, nearly $3 billion in grant funds were frozen or terminated across the NSF and National Institutes of Health. Layoffs hit multiple agencies—Education, Energy, Personnel Management—sometimes affecting workers who had nothing to do with DEI. NASA employees were reassigned from their regular duties to scrub mentions of women, indigenous people, and LGBTQ individuals from the agency website.

In the private sector, over 1,000 nonprofits rewrote their mission statements to avoid losing federal funding. Organizations like the National Center for Missing and Exploited Children and the Rape, Abuse & Incest National Network removed all references to transgender people—despite trans individuals being disproportionately likely to be victims of sexual abuse.

Palantir's Golden Year

While civil rights groups raised alarms, Palantir thrived. The company earned over $1 billion in federal payments during Trump's first year, a 24% increase from the previous year. The Army ($408 million) and Air Force ($148 million) were its biggest customers.

But perhaps most controversially, Palantir's work with Immigration and Customs Enforcement expanded dramatically. ICE payments jumped from $20.4 million to $81 million—a nearly 300% increase. In April, ICE added $30 million to an existing contract for Palantir to build a tool providing "near real-time visibility" on people self-deporting and helping ICE prioritize deportation targets.

The company also maintains ICE's Investigative Case Management System, the FALCON Search & Analysis System (which includes data from ICE's public tip line), and an app called ELITE that creates detailed dossiers on potential suspects, complete with confidence scores for their likely locations.

Even Palantir employees have questioned the company's ICE work on internal Slack channels, pushing for more transparency following the fatal shooting of Minneapolis nurse Alex Pretti during a January ICE operation.

The Algorithm's Judgment

What's particularly unsettling about this AI screening isn't just its scope—it's its opacity. When algorithms make initial determinations about what constitutes "problematic" content, they're essentially encoding policy preferences into code. The AI doesn't just find DEI-related content; it learns patterns about what human reviewers consider objectionable, potentially amplifying biases in ways that are difficult to detect or challenge.

This raises fundamental questions about democratic governance. Should AI systems be making preliminary judgments about which research deserves funding or which job descriptions are "appropriate"? When these systems operate in secret, how can we ensure they're not systematically excluding legitimate research or discriminating against qualified applicants?

The Palantir case also highlights the growing influence of private tech companies in government operations. The same company that helps ICE track immigrants and the military manage operations is now screening academic research and social programs. This convergence of surveillance, enforcement, and policy implementation under one corporate umbrella represents a new form of privatized governance.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.
