Half of Workers Use AI Inappropriately—And Companies Are Clueless
Economy · AI Analysis


47% of professionals use AI inappropriately at work while 63% witness colleagues doing the same. Shadow AI usage creates massive risks companies can't detect or manage.

Your colleague just delivered a flawless presentation. The analysis was sharp, the data compelling, the recommendations spot-on. There's just one problem: they can't answer a single follow-up question because they didn't actually write it.

Welcome to the age of "shadow AI," where 47% of professionals admit to using AI inappropriately at work, and 63% have witnessed colleagues doing the same, according to a new University of Melbourne and KPMG study.

The Invisible Productivity Illusion

The real shift isn't that employees suddenly became dishonest—it's that AI makes shortcuts fast, easy, and invisible. "Before AI, hiding poor work was harder," explains Zahra Timsah, CEO of i-GENTIC AI. "Now an employee can generate a polished report in minutes, and managers assume competence."

The numbers paint a sobering picture:

  • 44% of US workers use unauthorized AI tools
  • 46% upload sensitive company information to public AI platforms
  • 57% make mistakes due to unchecked AI use
  • 53% completely conceal their AI usage
  • 64% put less effort into work because they can lean on AI

This week's KPMG Australia scandal perfectly illustrates the irony: 28 employees were caught using AI to cheat on internal exams, including a partner who was fined $10,000 for cheating on an AI ethics exam.

Beyond Cheating: The Real Corporate Threat

"It isn't just that people are passing off AI as their own work; they're also poisoning the corporate well by relying on AI slop," warns Nick Misner, COO of cybersecurity platform Cybrary. "While AI accelerates coding speed, it's introducing more debt and security vulnerabilities."

The threat runs deeper than academic dishonesty. Companies are making strategic decisions based on work nobody truly understands. When an employee presents AI-generated analysis they can't defend, organizations lose what Timsah calls "internal intelligence"—the collective thinking capacity that drives real innovation.

Consider the cascade effect: Gallup reports 79% of the global workforce sits somewhere between "doing the minimum" and "actively disengaged." Hand disengaged workers a powerful tool with no guidance, and they won't use it to become more productive—they'll use it to do the same work with less effort.

The Governance Gap

"We're seeing AI adoption massively outpace governance," Misner notes. This isn't just a technology problem—it's a systemic failure of organizational readiness. Companies that can't detect shadow AI usage face massive risk exposure, from data leakage to compliance violations to skill erosion.

The solution isn't restriction—it's clarity. Joe Schaeppi of AI engagement company Solsten draws parallels to earlier workplace technology adoption: "We saw similar patterns when the internet and search engines first entered the workplace. Whenever a powerful new tool appears, misuse is inevitable."

Building Better Boundaries

Smart companies are getting ahead of the curve with clear, practical policies. Timsah's team implemented a simple rule: employees can use approved AI tools, but cannot input confidential, client, financial, or proprietary information into public AI systems.

"We focused on clarity, not restriction," she explains. "Using AI to rewrite a generic email is fine. Uploading client contracts or presenting AI-generated analysis you don't understand as your own work is not."

The key is human supervision at critical checkpoints. Companies should require employees to explain their reasoning and demonstrate understanding—AI can generate answers, but it cannot replace ownership or accountability.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.
