Microsoft Copilot Police Report Hallucination Leads to Wrongful Football Fan Ban
West Midlands Police admitted a Microsoft Copilot hallucination led to a non-existent match being cited in an intelligence report, resulting in fan bans. Read about the fallout.
A non-existent football match just caused a major real-world headache for British law enforcement. The West Midlands Police, one of Britain's largest forces, admitted that its intelligence report was compromised by a Microsoft Copilot AI hallucination.
The Microsoft Copilot Police Report Hallucination
According to The Verge, the error surfaced in an intelligence report that led to Israeli football fans being banned from a match in 2025. The report included details about a fixture between West Ham and Maccabi Tel Aviv—a game that simply never took place.
Chief Constable Craig Guildford officially acknowledged the mistake, stating that the erroneous fixture arose from the use of Microsoft's AI assistant. The force included the hallucinated detail in its formal documents without proper fact-checking, highlighting a critical lapse in procedural oversight.
Dangers of Unchecked AI in Law Enforcement
While Microsoft has frequently warned users that Copilot can generate inaccuracies, this incident marks a rare case where AI-generated misinformation directly impacted public policy and individual rights. It raises urgent questions about the ethics of deploying generative AI in sectors where accuracy is paramount for justice.