Tavus Raises $40M to Build the Next Frontier of Intelligence: Human Computing

Tavus is bringing sci-fi to life with PALs and the models that power them—emotionally intelligent AI humans that can see, hear, act, and even look like us.

SAN FRANCISCO--(BUSINESS WIRE)--Today, Tavus announced $40 million in Series B funding to build the future of human computing, led by CRV with participation from Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital. This vision takes shape with the launch of PALs: AI humans built by Tavus with emotional intelligence, agentic capabilities, and true multimodality across text, voice, and face-to-face video.


Human-computer interfaces haven't fundamentally evolved since the 1980s. We moved from command-line interfaces to graphical user interfaces—from typing commands to clicking buttons. Today's AI chatbots feel like a return to the command-line era: text-based interfaces where humans must spell out every action and instruction. For decades, science fiction promised us something better—Star Trek, Her—computers that could see and hear us, but also look like us, respond with emotion, and feel alive. Tavus is fulfilling this promise by creating AI that makes conversations with computers feel like second nature, just like talking to a friend.

“We've spent decades forcing humans to learn to speak the language of machines,” said Hassaan Raza, CEO of Tavus. “With PALs, we're finally teaching machines to think like humans—to see, hear, respond, and look like we do. To understand emotion, context, and all the messy, beautiful stuff that makes us who we are. It's not about more intelligent AI; it's about AI that actually meets you where you are.”

Meet the PALs

Tavus launched PALs (Personal Affective Links): agentic AI humans that see, hear, evolve, remember, and act, just like humans do. Powered by foundational models for rendering, conversational intelligence, and perception, PALs represent the next era of human computing.

PALs are built to communicate the way people do. They maintain a lifelike visual presence, read expressions and gestures, and understand emotion and timing in real time. They remember context, pick up on subtle social cues, and move fluidly between video, voice, and text, so interaction always feels natural. And like humans, they have agency—taking initiative, reaching out, and acting on your behalf to manage calendars, send emails, and follow through without supervision.

For years, computers made us speak their language. PALs finally speak ours, forming genuine connections by learning individual habits, adapting to personality, and improving with every interaction.

The Models Powering PALs

Behind every PAL is a suite of foundational models that teach machines to see, feel, and act the way people do. These proprietary, state-of-the-art systems were built entirely in-house by the Tavus research team to understand and simulate human behavior with unprecedented depth. Each model sets a new standard for realism and intelligence, expanding the boundary of what “human-like” AI can become.

  • Phoenix-4 — A SoTA rendering model that drives lifelike expression, head-pose control, and emotion generation at conversational latency.
  • Sparrow-1 — An audio-understanding model that combines deep conversational intelligence with audio- and semantics-based emotional understanding, managing timing, tone, and intent in real time to know not just what to say, but when to say it.
  • Raven-1 — A contextual perception model that interprets context, people, environments, emotions, expressions, and gestures, giving PALs a sense of presence and enabling them to see and understand like humans do.

These models, paired with a SoTA orchestration and memory-management system, bring face-to-face video, speech, text, and agentic capabilities to life, enabling the world’s first AI humans. What makes them powerful isn’t just how they look or talk; it’s that they understand, remember, and act, just as a human would. This is the beginning of computers that finally feel alive.

Get started for free at https://www.tavus.io/

About Tavus

Tavus is a San Francisco-based AI research lab pioneering human computing: the art of teaching machines to be human. Backed by CRV, Scale Venture Partners, Sequoia Capital, Y Combinator, HubSpot Ventures, and Flex Capital, Tavus builds foundational models that teach machines to see, hear, respond, and act like people do, powering AI humans. The company’s research team brings experience from leading universities and top AI labs, led by researchers specializing in rendering, perception, and affective computing, including Professor Ioannis Patras and Dr. Maja Pantic. Over one hundred thousand developers and enterprises use Tavus to deploy AI for recruiting, sales, education, and customer service.

Contacts

Leigh Disher
GMK Communications for Tavus
leigh@gmkcommunications.com
