Signal Founder Challenges OpenAI with Confer AI Privacy Assistant
Signal co-founder Moxie Marlinspike launches Confer AI privacy assistant, featuring E2E encryption and TEE tech to ensure conversations remain private.
Imagine if your therapist was being paid to convince you to buy a product. According to Moxie Marlinspike, co-founder of Signal, that's the current state of ad-driven AI models. To counter this, he launched Confer in December 2025: an AI privacy assistant designed to prioritize user anonymity over data harvesting.
The Tech Behind Confer AI Privacy Assistant
Confer doesn't just promise privacy; it enforces it through its backend architecture. Unlike ChatGPT or Claude, which may use personal information for model training, Confer's host never has access to your conversations. The service employs several layers of protection to enforce this zero-knowledge approach.
- Trusted Execution Environment (TEE): All inference is processed in a secure enclave, preventing unauthorized access even at the server level.
- End-to-End Encryption: Messages are encrypted on the user's device, using keys tied to the WebAuthn passkey system, before they are ever sent.
- Remote Attestation: A verification system that ensures the hardware hasn't been compromised before processing queries.
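To make the attestation layer concrete, here is a minimal, hypothetical sketch of the client-side check it implies: the client pins an expected "measurement" (a hash of the enclave code it trusts) and refuses to send anything to a server whose reported measurement differs. The names (`EXPECTED_MEASUREMENT`, `verify_attestation`) are illustrative, not Confer's actual API, and real TEE attestation (e.g. Intel SGX/TDX or AMD SEV) relies on quotes signed by the CPU vendor rather than a bare hash comparison.

```python
import hashlib
import hmac

# The measurement the client pins: a hash of the enclave build it trusts.
# (Illustrative value; a real client would ship a vendor-verified measurement.)
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-build-v1").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Accept the server only if its reported enclave measurement matches
    the pinned value. Uses a constant-time comparison to avoid leaking
    how many leading characters matched."""
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

# A server running the trusted enclave build reports the expected hash...
honest = hashlib.sha256(b"trusted-enclave-build-v1").hexdigest()
# ...while a tampered build reports a different one.
tampered = hashlib.sha256(b"backdoored-enclave-build").hexdigest()

print(verify_attestation(honest))    # True: safe to send the encrypted query
print(verify_attestation(tampered))  # False: refuse to talk to this server
```

The point of the sketch is the ordering: the hardware check happens before any query leaves the device, which is what lets encryption and attestation together keep the host out of the loop.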
Comparing Confer AI and ChatGPT Plus Pricing
Privacy doesn't come cheap. While Confer offers a free tier, it is limited to 20 messages a day and 5 active chats. For heavy users, the premium plan is significantly more expensive than the industry standard.
| Feature | Confer AI | ChatGPT Plus |
|---|---|---|
| Monthly Price | $35 | $20 |
| Data Privacy | E2E Encrypted / No Training | Training by Default |
| Model Type | Open-weight Foundation Models | Proprietary GPT Models |
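For heavy users weighing the two plans in the table, the gap compounds over a year. A quick calculation using the listed monthly prices:

```python
# Annualize the monthly prices from the comparison table above.
confer_monthly = 35        # Confer AI premium, USD/month
chatgpt_plus_monthly = 20  # ChatGPT Plus, USD/month

confer_yearly = confer_monthly * 12         # 420
chatgpt_yearly = chatgpt_plus_monthly * 12  # 240

# The yearly premium a user pays for Confer's privacy guarantees.
print(confer_yearly - chatgpt_yearly)  # 180
```

In other words, the privacy architecture costs about $180 more per year than the industry-standard subscription.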
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.