Lemon Slice Raises $10.5M to Kill the ‘Uncanny Valley’ with Single-Image Video Avatars
AI avatar startup Lemon Slice has raised $10.5 million in seed funding to create realistic, interactive video avatars from a single image, aiming to solve the ‘uncanny valley’ problem with its custom diffusion model.
Beyond Text: The Push for Interactive Video AI
Digital avatar startup Lemon Slice has raised a $10.5 million seed round to add a new layer of interaction to AI agents: live-streamed video. The company announced Tuesday that it has developed a diffusion model that can create expressive digital avatars from a single image, aiming to solve the “creepy” and “stiff” problem that plagues existing avatar technology.
The funding was led by Matrix Partners and Y Combinator, with participation from notable angel investors including Dropbox co-founder Arash Ferdowsi, former Twitch CEO Emmett Shear, and the music duo The Chainsmokers. Founded in 2024, Lemon Slice is betting that overcoming the uncanny valley is the key to unlocking the true potential of AI agents in everything from customer service to mental health support.
“The existing avatar solutions I’ve seen to date add negative value to the product,” co-founder Lina Colucci said in a statement. “They are creepy, and they are stiff. As soon as you start interacting with them, it feels very uncanny... The thing that has prevented avatars from really taking off is that they haven’t been good enough.”
A General-Purpose Model as the Differentiator
At the core of the startup's technology is “Lemon Slice-2”, a 20-billion-parameter diffusion model that, according to the company, can live-stream video at 20 frames per second on a single GPU. The model is available via an API and an embeddable widget, allowing companies to integrate it with a single line of code. For voice, Lemon Slice uses technology from ElevenLabs.
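The company has not published its integration details in the source article, so the sketch below is purely illustrative: it shows roughly what creating a streaming avatar session from a single image via a REST API might look like. The endpoint URL, field names, and response shape are all assumptions, not Lemon Slice's actual interface.

```typescript
// Hypothetical sketch only: endpoint, request fields, and response shape are
// assumptions for illustration, not Lemon Slice's documented API.
async function createAvatarSession(imageUrl: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.lemonslice.example/v1/avatars", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    // A single reference image plus a voice selection (voice synthesis is
    // reportedly handled by ElevenLabs).
    body: JSON.stringify({ image_url: imageUrl, voice_id: "default" }),
  });
  if (!res.ok) {
    throw new Error(`Avatar creation failed: ${res.status}`);
  }
  // The returned URL would point at the live video stream (roughly 20 fps,
  // served from a single GPU, per the company's claims).
  const { stream_url } = await res.json();
  return stream_url;
}
```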
The company faces a crowded market with competitors like D-ID, HeyGen, and Synthesia, but investors believe its foundational model gives it an edge. Y Combinator partner Jared Friedman argued that because Lemon Slice trains a general-purpose video diffusion transformer, similar to Google's Veo 3 or OpenAI's Sora, "it has no ceiling on how good it can get; the others top out below photorealistic."
Funding the Future of Interaction
The eight-person startup plans to use the new capital to hire engineering and go-to-market staff and to cover the significant compute costs required for training its models. The company states it has guardrails to prevent unauthorized face or voice cloning and uses LLMs for content moderation.
While Lemon Slice declined to name specific clients, it said its model is already being used in education, e-commerce, and corporate training. The goal is clear: to become the go-to platform for any application that needs a believable, interactive face.