The $480M Bet on AI That Actually Plays Well With Others
Humans& raises massive seed round to build AI models designed for collaboration, not just conversation. Can they crack the coordination challenge that's stumping Big Tech?
$480 million for a three-month-old startup with no product. In the AI gold rush of 2026, that's not just ambitious—it's a statement about where the real money thinks AI is heading next.
Humans&, founded by veterans from Anthropic, Meta, OpenAI, xAI, and Google DeepMind, just closed one of the largest seed rounds in history with a bold premise: current AI models are brilliant at answering questions but terrible at the messier work of human collaboration. While everyone else builds better chatbots, they're building what they call a "central nervous system" for the human-plus-AI economy.
The timing couldn't be more critical. As companies transition from chat interfaces to AI agents, a glaring gap has emerged—not in what AI can do, but in how it coordinates with teams of humans who have competing priorities, long-running decisions, and the need to stay aligned over time.
The Coordination Problem Nobody's Solving
Current AI models excel at individual tasks. Ask ChatGPT to write code, and it delivers. Request a document summary from Claude, and you'll get one. But ask any AI to help a team of ten people decide on a company logo—complete with different aesthetic preferences, budget constraints, and approval processes—and you'll quickly discover the limits of today's "helpful assistant" paradigm.
"It feels like we're ending the first paradigm of scaling, where question-answering models were trained to be very smart at particular verticals," explains Andi Peng, co-founder and former Anthropic employee. "Now we're entering what we believe to be the second wave of adoption where the average consumer or user is trying to figure out what to do with all these things."
This isn't just about making AI more social. It's about fundamentally rethinking how foundation models are trained. Instead of optimizing for individual user satisfaction and answer accuracy, Humans& is training models using long-horizon and multi-agent reinforcement learning—techniques designed to help AI systems plan, act, and coordinate across multiple people and extended timeframes.
The approach represents a significant departure from current AI development. While OpenAI and Anthropic focus on making their models smarter, Humans& is making them more socially intelligent.
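To make that distinction concrete, here is a minimal, purely illustrative sketch (in Python, with invented names; it is not Humans&'s training code) of how a long-horizon, multi-agent reward differs from the per-turn satisfaction signal most chatbots are tuned on: the agent only earns the coordination payoff when a simulated group actually converges on a shared decision, and sooner is better.

```python
# Toy sketch (not Humans&'s code): contrasts a per-turn "helpfulness" reward
# with a long-horizon coordination reward that only pays off when a group of
# simulated users converges on one option. All names here are hypothetical.
import random

NUM_USERS = 4
MAX_TURNS = 20

def simulate_episode(seed: int) -> tuple[float, float]:
    """Run one toy episode and return (per_turn_reward, long_horizon_reward)."""
    rng = random.Random(seed)
    # Each simulated user starts with a private preference among three options.
    preferences = [rng.choice(["A", "B", "C"]) for _ in range(NUM_USERS)]
    per_turn_reward = 0.0

    for turn in range(MAX_TURNS):
        # Stand-in "agent action": propose the currently most popular option.
        proposal = max(set(preferences), key=preferences.count)
        # Per-turn reward: how many users are individually satisfied right now.
        per_turn_reward += preferences.count(proposal) / NUM_USERS
        # Users slowly drift toward the proposal, modeling persuasion over time.
        for i in range(NUM_USERS):
            if preferences[i] != proposal and rng.random() < 0.3:
                preferences[i] = proposal
        if len(set(preferences)) == 1:
            # Long-horizon reward: full credit only on unanimous agreement,
            # discounted by how many turns it took to get there.
            return per_turn_reward, 1.0 - turn / MAX_TURNS

    return per_turn_reward, 0.0  # No consensus reached within the horizon.

if __name__ == "__main__":
    immediate, long_horizon = simulate_episode(seed=0)
    print(f"per-turn satisfaction: {immediate:.2f}, coordination payoff: {long_horizon:.2f}")
```

The point of the toy is the reward shape: optimizing the second number pushes an agent toward proposals and questions that move a group toward agreement, not just answers that please whoever asked last.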
Beyond the Chatbot Ceiling
CEO Eric Zelikman, formerly of xAI, describes their vision as replacing multiplayer contexts like Slack or Google Docs with something that understands not just what people are saying, but why they're saying it and how it fits into broader team dynamics.
"We are building a product and a model that is centered on communication and collaboration," Zelikman told TechCrunch. The goal is creating AI that asks questions "in a way that feels like interacting with a friend or a colleague, someone who is trying to get to know you."
Current chatbots ask questions constantly, but without understanding the strategic value of those questions. They've been optimized for immediate user satisfaction rather than long-term relationship building or group coordination.
Co-founder Yuchen He, a former OpenAI researcher, emphasizes the memory component: "The model needs to remember things about itself, about you, and the better its memory, the better its user understanding."
This memory isn't just about conversation history—it's about understanding individual skills, motivations, and needs within the context of group dynamics. The vision is AI that acts as "connective tissue" across organizations, whether that's a 10,000-person business or a family planning vacation.
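As a rough illustration of what memory about people inside group dynamics could look like as a data structure, here is a small hypothetical sketch; the class names and fields (PersonMemory, GroupMemory, who_disagrees) are invented for this article and say nothing about Humans&'s actual design.

```python
# Hypothetical sketch of the kind of memory described above: per-person facts
# stored alongside group-level context, so the system can answer both "what
# does this person prefer?" and "where does the group stand?" from one store.
from dataclasses import dataclass, field

@dataclass
class PersonMemory:
    """Long-lived facts about one person, accumulated across conversations."""
    name: str
    skills: list[str] = field(default_factory=list)
    preferences: dict[str, str] = field(default_factory=dict)   # topic -> stated preference

@dataclass
class GroupMemory:
    """Shared context tying individual memories to ongoing group decisions."""
    members: dict[str, PersonMemory] = field(default_factory=dict)
    open_decisions: dict[str, list[str]] = field(default_factory=dict)  # decision -> candidate options

    def who_disagrees(self, decision: str, chosen: str) -> list[str]:
        """Return members whose stated preference conflicts with a proposed choice."""
        return [
            m.name for m in self.members.values()
            if m.preferences.get(decision) not in (None, chosen)
        ]

if __name__ == "__main__":
    team = GroupMemory()
    team.members["andi"] = PersonMemory("Andi", skills=["design"], preferences={"logo": "minimalist"})
    team.members["eric"] = PersonMemory("Eric", skills=["branding"], preferences={"logo": "retro"})
    team.open_decisions["logo"] = ["minimalist", "retro"]
    print(team.who_disagrees("logo", "minimalist"))  # -> ['Eric']
```

Even this toy shows why such a store differs from raw chat history: it can answer group-level questions ("who still disagrees?") without replaying any conversation.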
The Validation Wave
Humans& isn't operating in a vacuum. The coordination challenge they're addressing is gaining recognition across the industry. LinkedIn founder Reid Hoffman recently argued that companies are implementing AI wrong by treating it like isolated pilots, when the real leverage lies in the coordination layer of work.
"AI lives at the workflow level, and the people closest to the work know where the friction actually is," Hoffman wrote. "They're the ones who will discover what should be automated, compressed, or totally redesigned."
The market is responding accordingly. AI collaboration tools are seeing significant investment—Granola, an AI note-taking app focused on collaborative features, recently raised $43 million at a $250 million valuation. But these are largely applications built on top of existing models, not new model architectures designed for collaboration from the ground up.
The David vs. Goliath Reality
The challenge facing Humans& isn't just technical—it's competitive. They're not just going up against collaboration tools like Notion and Slack. They're taking on the biggest names in AI, all of whom are actively working on collaboration features.
Anthropic has Claude Cowork for work-style collaboration. Google's Gemini is embedded in Workspace for AI-enabled collaboration within existing tools. OpenAI is pitching developers on multi-agent orchestration. Each of these companies has massive resources, established user bases, and years of head start in model development.
The advantage Humans& claims is focus. While the tech giants are adding collaboration features to existing models, Humans& is building collaboration into the foundation model architecture itself. It's the difference between bolting extra seats onto a car and designing a vehicle for group travel from the start.
But focus comes with risk. Training and scaling new foundation models requires enormous capital—the kind that makes even a $480 million seed round look like a down payment. Humans& will be competing with established players for the same scarce compute resources, top-tier talent, and enterprise customers.
The Acquisition Elephant
Perhaps the biggest risk isn't competition—it's acquisition. With companies like Meta, OpenAI, and DeepMind actively hunting for top AI talent, a startup with this much pedigree and funding becomes an obvious target.
Humans& claims they've already turned away interested parties and aren't looking to sell. "We believe this is going to be a generational company," Zelikman insists. But in an industry where talent acquisition often drives M&A more than product strategy, that resolve will be tested.
The question isn't whether Big Tech will try to acquire them—it's whether Humans& can build something defensible enough to remain independent while they're still figuring out what their product actually is.