HunyuanVideo-Avatar

Dynamic, multi-character AI animation driven by audio

HunyuanVideo-Avatar by Tencent creates dynamic, emotion-controllable, multi-character talking avatar videos from audio. It is open source, maintains character consistency, and the code and models have been released.

Top comment

Hi everyone! The Tencent Hunyuan team has open-sourced HunyuanVideo-Avatar today, and it does something pretty magical with just an image and an audio track. You give it a picture of a character and some audio, and the model infers the context: the character's surroundings from the image and the emotion from the audio. It then animates the character to speak or sing naturally, producing a video with expressions, lip-sync, and even body movements. For example, given an image of someone holding a guitar on a beach and some mellow music, it can generate a video of them playing and singing in that setting. The ability to produce context-aware, audio-driven animation like this, especially with the full system open-sourced, is a really neat step for creating dynamic avatars.