Microsoft Copilot Police Report Hallucination Leads to Wrongful Football Fan Ban
West Midlands Police admitted a Microsoft Copilot hallucination led to a non-existent match being cited in an intelligence report, resulting in fan bans. Read about the fallout.
A non-existent football match just caused a major real-world headache for British law enforcement. The West Midlands Police, one of Britain's largest forces, admitted that its intelligence report was compromised by a Microsoft Copilot AI hallucination.
The Microsoft Copilot Police Report Hallucination
According to The Verge, the error surfaced in an intelligence report that led to Israeli football fans being banned from a match in 2025. The report included details about a fixture between West Ham and Maccabi Tel Aviv—a game that simply never took place.
Chief Constable Craig Guildford officially admitted the mistake, stating that the fabricated fixture originated from the force's use of Microsoft's Copilot AI assistant. The police included the hallucinated detail in their formal documents without proper fact-checking, highlighting a critical lapse in procedural oversight.
Dangers of Unchecked AI in Law Enforcement
While Microsoft has frequently warned users that Copilot can generate inaccuracies, this incident marks a rare case where AI-generated misinformation directly impacted public policy and individual rights. It raises urgent questions about the ethics of deploying generative AI in sectors where accuracy is paramount for justice.