The AI That Recommends Who to Bomb First

The Pentagon is exploring the use of generative AI chatbots to rank and prioritize military strike targets. As a US missile strike kills more than 100 children at an Iranian school, questions about AI's role in targeting decisions grow urgent.

More than 100 children died when a missile struck a girls' school in Iran. Multiple outlets reported the missile was American. The Pentagon said it's still investigating. And according to The New York Times, a preliminary review found that outdated targeting data was partly to blame. The question now hanging over Washington: was a generative AI chatbot part of the chain that produced that data?

The Chatbot in the War Room

A Defense Department official, speaking on background with MIT Technology Review, described a scenario that until recently would have sounded like science fiction. A list of potential targets is fed into a generative AI system. A human then asks it to analyze the targets and rank them—factoring in variables like current aircraft positions. Humans review the output and make the final call.
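To make that workflow concrete, here is a minimal, entirely hypothetical sketch of what such a human-in-the-loop ranking query could look like. Every name in it (Target, build_ranking_prompt, query_model, the prompt wording) is invented for illustration; the official described a scenario, not an interface, and no real system is implied.

```python
from dataclasses import dataclass

@dataclass
class Target:
    """Hypothetical record for one candidate target."""
    name: str
    coordinates: tuple[float, float]
    last_verified: str  # date the targeting data was last confirmed

def build_ranking_prompt(targets: list[Target],
                         aircraft_positions: list[tuple[float, float]]) -> str:
    """Assemble the kind of natural-language query the official described:
    hand the model a target list plus current aircraft positions and ask
    for a prioritized ordering."""
    lines = [f"- {t.name} at {t.coordinates}, data verified {t.last_verified}"
             for t in targets]
    return ("Rank the following candidate targets by priority, "
            f"given aircraft currently at {aircraft_positions}:\n"
            + "\n".join(lines))

def query_model(prompt: str) -> str:
    """Placeholder for the generative AI call -- no real endpoint is implied."""
    raise NotImplementedError("stand-in for a generative AI system")

# Per the article, the model's output is only a recommendation:
# humans review it and make the final call.
```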

The official framed this as an illustrative example and declined to confirm or deny whether it reflects current operations. But the architecture is already in place. Anthropic's Claude has been integrated into existing military systems and reportedly used in operations in Iran and Venezuela. OpenAI signed an agreement on February 28 for its models to be used in classified Pentagon settings. Elon Musk's xAI followed with a similar deal for its Grok model.

This is layered on top of Maven, the Pentagon's 2017-era AI initiative built on computer vision—the older, narrower kind of AI that can scan thousands of hours of drone footage and flag potential targets on a battlefield map. Maven was already accelerating targeting timelines. Generative AI, the official said, is now being added as a conversational layer on top of that—a way to query, summarize, and prioritize information faster.
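The layering the official describes can be pictured as a two-stage pipeline: a vision model emits structured detections, and a generative model is then queried over them in natural language. The sketch below is schematic only; the stages, fields, and sample values are assumptions for illustration, not a description of Maven or any real system.

```python
# Stage 1 (older, narrower AI): scan footage and flag candidate objects,
# returning structured records a human can inspect on a map.
def detect_objects(drone_frames):
    return [
        {"label": "vehicle convoy", "lat": 35.69, "lon": 51.39, "confidence": 0.91},
        {"label": "structure",      "lat": 35.70, "lon": 51.41, "confidence": 0.62},
    ]

# Stage 2 (generative layer): turn the structured detections into a prompt
# and ask a language model to query, summarize, or prioritize them.
def summarize_detections(detections, question):
    context = "\n".join(str(d) for d in detections)
    return f"{question}\n\nDetections:\n{context}"  # would go to a generative model

prompt = summarize_detections(detect_objects([]),
                              "Which detections warrant review first, and why?")
```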

Two AIs, Two Very Different Problems

Maven and ChatGPT-style models are fundamentally different technologies, and that distinction matters.

Maven's interface put raw data in front of soldiers on a map. They had to look at it, interpret it, and decide. The friction was intentional—it forced human eyes onto the evidence. Generative AI works differently. It produces fluent, confident-sounding text. It's faster to read and easier to act on. But the reasoning behind its outputs is far harder to audit. When a model says "prioritize Target A," tracing exactly why—what data it weighted, what it ignored—is a challenge even for experts.
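The auditability gap can be stated in code. With a conventional scoring function, every factor and weight behind a ranking is explicit and reproducible; with a generative model, the "reasoning" arrives as a fluent string whose provenance cannot be traced the same way. A simplified illustration, with invented weights and fields:

```python
# Auditable: a reviewer can reproduce and challenge this score line by line.
WEIGHTS = {"threat": 0.5, "proximity": 0.3, "data_freshness": 0.2}  # invented values

def transparent_score(target: dict) -> float:
    return sum(WEIGHTS[k] * target[k] for k in WEIGHTS)

# Opaque: a generative model returns text such as the string below.
# Which inputs it weighted, and which it ignored, is not recoverable
# from the output itself.
llm_output = "Prioritize Target A: highest strategic value given current positioning."
```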

The official acknowledged that generative AI is reducing the time required in the targeting process. But when pressed on how much time is actually saved if humans must still double-check the model's outputs, the official offered no answer.


That gap matters. Speed is the whole point. But speed and verification pull in opposite directions.

The Anthropic Fallout—and What It Signals

The most revealing subplot in this story isn't technical. It's political.

Anthropic tried to place restrictions on how the military could use its AI. The Pentagon responded by designating the company a supply chain risk. President Trump publicly demanded the government stop using Anthropic products within six months. Anthropic is now fighting the designation in court.

OpenAI signed its Pentagon deal with stated limitations—though the company has not clarified what those limits actually prevent. xAI made no such public qualifications.

The market signal is stark: the company that pushed back on military use lost the contract and got labeled a national security risk. The companies that said yes, or said yes with vague caveats, are now embedded in classified operations. For every AI lab watching this dynamic, the lesson is legible.

Whose Accountability Is It, Anyway?

For the military, generative AI solves a real problem. Modern warfare drowns commanders in data. If an AI can do the first pass—sorting, ranking, summarizing—human analysts can focus on higher-order judgment. That's the argument, and it's not without merit.

For AI companies, the ethics are increasingly uncomfortable. Employees at both OpenAI and xAI have objected internally to military contracts; the companies signed anyway. Whether a commercial AI lab can meaningfully constrain how a government uses its technology, once it is deployed in classified settings, remains largely an open question.

For AI ethicists and legal scholars, the targeting chain raises questions about accountability that existing frameworks weren't designed to handle. If an AI recommends a target, a soldier approves it, a missile fires, and a school burns—who bears responsibility? The soldier? The model? The company that trained it? The official who approved the deployment?

For the general public, the Pentagon has been deploying generative AI to millions of service members for non-classified tasks since December 2024 through GenAI.mil. The same underlying technology now sits, in approved form, inside classified targeting workflows. The distance between "AI writes your PowerPoint" and "AI ranks your airstrike targets" is shorter than most people realize.

