sync-3

Studio-grade AI lip sync and visual dubbing

sync-3 is a 16B-parameter AI lip sync model that doesn't just move lips; it understands performances. Built on a global understanding of a person across an entire shot, it generates all frames at once instead of stitching isolated snippets. It handles what breaks every other model: close-ups, occlusions, extreme angles, low lighting - all while preserving the emotion of the original performance across 95+ languages in full 4K. Try it out at sync.so, via API, or in Adobe Premiere.

Top comment

Hey Product Hunt! Kalyan here, head of content and marketing at sync.

We've been building AI lipsync for a while now, and today we're launching sync-3, our most advanced model release ever.

Here's the short version: previous lipsync models (including our own) processed video in small, isolated chunks and stitched them together. sync-3 takes a fundamentally different approach. It builds a global understanding of a person across an entire shot and generates all frames at once. The result is consistency and realism that closes the gap between real footage and dubbed footage.

A few things sync-3 handles that nothing else does well:

- Close-ups and partial faces (the full face doesn't need to be visible)
- Extreme angles including side profiles, over-the-shoulder, non-frontal
- Obstructions like hands, mics, scarves - detected and handled automatically
- Speaker style and emotion are preserved, not flattened
- Low lighting and varied lighting scenarios

It's 40x larger than our previous model (16B vs 400M parameters), supports 95+ languages, and outputs in 4K.

You can use it right now at sync.so, through our Adobe Premiere plugin, or via API.
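For the API route, a minimal sketch of what assembling a dubbing request might look like. The field names, options, and payload shape here are illustrative assumptions, not sync's documented API; check the docs at sync.so for the real schema and endpoint.

```python
import json

def build_dub_request(video_url: str, audio_url: str, model: str = "sync-3") -> str:
    """Assemble a JSON body pairing a source video with dubbed audio.

    Hypothetical schema for illustration only -- the actual sync API
    may use different field names, nesting, and required parameters.
    """
    payload = {
        "model": model,
        "input": [
            {"type": "video", "url": video_url},
            {"type": "audio", "url": audio_url},
        ],
    }
    return json.dumps(payload)

# Example: pair an original shot with a Spanish dub track.
body = build_dub_request(
    "https://example.com/shot.mp4",
    "https://example.com/dub_es.wav",
)
```

The body would then be POSTed to the generation endpoint with an API key; consult the official API reference for authentication and response handling.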

We think of this as the leap from perfecting lip sync to unlocking facial reanimation: the model doesn't just match mouths; it understands performances.

Would love for you to try it and let us know what you think. We're here all day answering questions.