US Transportation Dept Plans AI-Written Safety Rules
The Department of Transportation wants Google Gemini to draft airplane, car, and pipeline safety regulations in 30 minutes. What could go wrong with AI hallucinations in safety rules?
30 minutes. That's how long it takes Google Gemini to draft safety regulations for airplanes, cars, and pipelines, according to the US Department of Transportation's top lawyer. Regulatory drafting that used to take weeks or months could soon be compressed into minutes.
A ProPublica investigation revealed Monday that the DOT is planning to become the first federal agency to use artificial intelligence to draft regulations. Gregory Zerzan, the department's chief counsel, isn't concerned about AI's well-documented tendency to hallucinate false information. The point isn't perfection—it's speed.
When AI Hallucinations Meet Safety Rules
Here's the problem: AI doesn't just make mistakes—it makes confident mistakes. Large language models like Google Gemini can fabricate court cases that never existed, cite statistics from thin air, and present fiction as fact with unwavering certainty.
DOT staffers are worried, and rightfully so. If AI errors slip through into actual regulations, the consequences could extend far beyond paperwork problems. Flawed airplane safety standards could lead to crashes. Incorrect car safety requirements could fail to protect passengers. Faulty pipeline regulations could result in environmental disasters.
The stakes couldn't be higher. Transportation regulations literally govern life-and-death decisions for millions of Americans every day. Is 30 minutes of AI drafting really sufficient for rules that protect human lives?
The Speed vs. Safety Dilemma
Zerzan's logic isn't entirely unreasonable. Federal rulemaking is notoriously slow, often taking years to address emerging technologies or safety concerns. In a rapidly evolving world, regulatory agility matters. If AI can accelerate the process while maintaining quality, it could be transformative.
But there's a fundamental tension here. Good regulations require extensive stakeholder input, expert review, and careful consideration of unintended consequences. These processes take time for good reason—they help prevent regulatory failures that could cost lives.
Other federal agencies are watching this experiment closely. If DOT succeeds, expect the FDA, EPA, and others to follow suit. If it fails, the fallout could set back AI adoption in government for years.
The Broader Implications
This isn't just about transportation policy—it's about how we govern in the age of AI. Should algorithms draft the rules that govern our lives? What happens when the tools meant to protect us are created by systems we don't fully understand or control?
The irony is striking: we're using AI to regulate industries where AI itself is becoming increasingly prevalent. Self-driving cars, AI-powered air traffic control, smart infrastructure—all governed by rules potentially written by the same technology they're meant to oversee.
What safeguards would you want in place if AI were drafting the safety rules that protect your daily commute?