
Your Meeting Notes Are Less Private Than You Think


Granola's AI meeting app claims notes are "private by default," but anyone with a link can view them—and your data trains their AI unless you opt out. Here's what that means.

"Private by default" — four words that might be costing you more than you realize.

What Granola Actually Does

Granola markets itself as an "AI notepad for people in back-to-back meetings." The pitch is clean: it syncs with your calendar, quietly captures audio from your meetings, and uses AI to generate a bulleted summary of everything discussed. You can edit the notes, bring in collaborators, and query the AI assistant afterward — think of it as a memory prosthetic for your workday.

The privacy promise sounds equally clean. Granola tells users their notes are "private by default." But according to reporting by The Verge, the reality is more complicated. Any note generated in Granola can be viewed by anyone who has the link — no authentication required. And unless users actively hunt down and toggle an opt-out setting, those notes are used to train Granola's internal AI models.

Neither of these facts is buried in a terms-of-service footnote, but neither is prominently surfaced during onboarding either. The result is a gap between what users reasonably believe "private" means and what the product actually delivers.
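What "anyone with the link" means in practice is easy to demonstrate. Below is a minimal sketch, using a placeholder URL and the simplifying assumption that a 200 response equals public access, of how a reviewer might test whether a note link is viewable without logging in.

```python
# Minimal sketch, assuming a hypothetical share URL: check whether a
# note link serves content to a client carrying no cookies or tokens.
# A bare 200 can still be a login page, so a real audit would also
# inspect the response body; this only illustrates the idea.
import requests

def is_publicly_viewable(share_url: str) -> bool:
    """Return True if an unauthenticated GET succeeds for the link."""
    response = requests.get(share_url, allow_redirects=True, timeout=10)
    return response.status_code == 200

if __name__ == "__main__":
    # Placeholder link, not a real Granola note.
    print(is_publicly_viewable("https://example.com/notes/abc123"))
```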

Why This Matters Beyond One App

This isn't a story about Granola alone. The AI meeting-capture market has expanded rapidly — Otter.ai, Fireflies.ai, Microsoft Teams transcription, Zoom AI Companion, and a growing list of competitors all operate in roughly the same space. By 2025, automated meeting transcription has moved from a novelty to a default feature in many enterprise workflows.


The sensitivity of what these tools capture is the crux of the issue. Meeting notes aren't just task lists. They contain unannounced product roadmaps, personnel decisions, contract negotiations, competitive intelligence, and sometimes personal disclosures. When that data flows through a system designed with link-based sharing and opt-out AI training, the exposure isn't theoretical — it's structural.

For enterprise users, the risk compounds quickly. A link shared in a Slack channel, forwarded in an email, or accidentally posted in a public forum doesn't trigger a warning. The note is simply... accessible.
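For teams that want to know how far such links have already spread, even a crude scan of exported chat or email text can help. The URL pattern in the sketch below is illustrative, not Granola's actual link format; it would need to be adapted to the domains your tools really use.

```python
# Hypothetical sketch: find candidate meeting-note links in exported text.
# The domain fragments below are illustrative; match them to your tools.
import re

SHARE_LINK = re.compile(
    r"https?://[\w.-]*(?:granola|otter|fireflies)[\w.-]*/\S+",
    re.IGNORECASE,
)

def find_share_links(text: str) -> list[str]:
    """Return every candidate note-sharing link found in the text."""
    return SHARE_LINK.findall(text)

print(find_share_links("Notes: https://notes.granola.example/s/q1-roadmap"))
```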

Three Ways to Read This

For individual users, the problem is one of assumed defaults. Seeing "private by default" and not checking further is a rational response to how we've been trained to interpret that phrase. Opt-out architectures work precisely because most people won't go looking for a setting they don't know exists.

For enterprise security teams, this surfaces a broader question that predates Granola: do you actually know which AI tools your employees are using, and what data those tools are sending where? (One way to start answering that is sketched after this section.) Shadow IT was already a headache. AI-powered shadow IT, where the tool is actively listening in meetings, is a different category of exposure.

For the company itself, there's a reasonable counterargument. Link-based sharing is the architecture behind Notion, Google Docs, and most modern collaboration software. The design choice isn't unusual. What makes it feel different here is the nature of the underlying data: audio captured automatically from private conversations, not documents a user consciously created and chose to share.
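On the security team's question, one low-cost starting point is the network layer: most resolvers can export query logs, and tallying lookups against a list of known AI meeting-tool domains gives a rough inventory of what is in use. The log format and domain list below are assumptions to adapt, not a ready-made integration.

```python
# Illustrative sketch: tally DNS queries to AI meeting-tool domains.
# Assumes one queried domain per log line, as the last whitespace-
# separated field; adjust the parsing to your resolver's real format.
from collections import Counter

AI_MEETING_DOMAINS = {"otter.ai", "fireflies.ai", "granola.ai"}  # assumed list

def tally_ai_tool_lookups(log_lines):
    """Count queries that hit a known AI-tool domain or a subdomain of one."""
    hits = Counter()
    for line in log_lines:
        fields = line.split()
        if not fields:
            continue
        domain = fields[-1].lower().rstrip(".")
        for tool in AI_MEETING_DOMAINS:
            if domain == tool or domain.endswith("." + tool):
                hits[tool] += 1
    return hits

sample = ["10:02:11 client-42 api.otter.ai", "10:02:15 client-42 app.granola.ai"]
print(tally_ai_tool_lookups(sample))
```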

The Regulatory Angle

In the EU, the GDPR points toward opt-in: using personal data to train AI models generally requires an explicit lawful basis such as consent, rather than a buried opt-out toggle. In the US, there is no equivalent federal standard, though state-level laws such as California's CCPA are beginning to close that gap. For companies operating across jurisdictions, the compliance picture for AI meeting tools is genuinely murky.

Regulators have been slow to catch up with the pace of AI tool adoption in workplaces. That lag creates a window where the burden of protection falls almost entirely on the user — who, in most cases, isn't reading the fine print.

