Liabooks Home|PRISM News
AI Agents Want Your Data: The High Price of 'Free' Automation From Google and OpenAI
TechAI 분석

AI Agents Want Your Data: The High Price of 'Free' Automation From Google and OpenAI

Source

AI agents from Google, OpenAI, and others promise convenience but demand unprecedented access to your emails, files, and more. We analyze the profound threat this poses to data privacy and cybersecurity.

The next wave of AI doesn't just want your questions—it wants the keys to your digital life. To power the next generation of autonomous 'AI agents,' tech giants like Google and OpenAI are asking for unprecedented access to your personal emails, files, and messages, sparking a new war over privacy and security. After spending the last two years getting comfortable with chatbots, it's time to consider what you'll have to give up in return.

AI Agents: The Other Side of Convenience

While there's no strict definition, an AI agent is best understood as a generative AI system that's been given a degree of autonomy. They can book flights, conduct research, and add items to shopping carts on your behalf, sometimes completing tasks with dozens of steps. The catch? To do their job, they need deep access to your personal data.

Harry Farmer, a senior researcher at the Ada Lovelace Institute, warns that for full functionality, agents "often need to access the operating system or the OS level," which could pose a "profound threat" to cybersecurity and privacy. A prime example is Microsoft's controversial Recall feature, which takes screenshots of a user's desktop every few seconds.

A History of Mistrust and Data Misuse

The AI industry doesn't have a strong track record of respecting data rights. After machine learning breakthroughs in the early 2010s showed that more data yielded better results, the race to hoard information intensified. Facial recognition firm Clearview scraped millions of photos from the web, while Google paid people just $5 for facial scans. Today's Large Language Models (LLMs) were similarly built by scraping the web and millions of books, often without permission or payment.

These companies are very promiscuous with data. They have shown to not be very respectful of privacy.

Carissa Véliz, Associate Professor at the University of Oxford

An "Existential Threat" to Application Privacy

The risks posed by AI agents go beyond mere data collection. One study commissioned by European data regulators highlighted numerous privacy risks, including the potential for sensitive data to be leaked, misused, or intercepted. A key problem is that even if a user consents, third parties in their emails and contact lists have not.

So-called 'prompt-injection attacks,' where malicious instructions are fed to an LLM, could lead to data leaks. Meredith Whittaker, president of the foundation behind the encrypted messaging app Signal, told WIRED that agents with OS-level access pose an "existential threat" to application-level privacy like that offered by Signal.

본 콘텐츠는 AI가 원문 기사를 기반으로 요약 및 분석한 것입니다. 정확성을 위해 노력하지만 오류가 있을 수 있으며, 원문 확인을 권장합니다.

OpenAIGooglecybersecurityAI agentsdata privacypersonal data

관련 기사