
Qwen2.5-Omni

The end-to-end model powering multimodal chat

Open Source · Artificial Intelligence · GitHub · Audio

Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud. It understands text, images, audio, and video, and generates both text and natural streaming speech.

Top comment

Hi everyone!

You can now use Voice and Video Chat directly in Qwen Chat! Powering these new multimodal interactions is Qwen's latest open-source model: Qwen2.5-Omni.

This "omni" model is a single system that understands text, audio, images, and video, and outputs both text and natural-sounding audio.

Key aspects:

🔄 End-to-End Multimodal: A single "Thinker-Talker" architecture designed for seamless input/output across modalities.
💬 Real-Time Interaction: Built for streaming, enabling smooth voice and video chat experiences.
🗣️ Natural Speech Output: The team reports strong results on speech generation quality.
💪 Strong Across Modalities: Performs well on benchmarks for vision, audio, and text tasks.
🔓 Openly Available under the Apache 2.0 license: Released on Hugging Face, ModelScope, and GitHub, with API access via DashScope (see the local-inference sketch after this list).
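
Since the weights are openly available, here's a minimal sketch of running it locally. This assumes a recent transformers build with Qwen2.5-Omni support plus the companion qwen-omni-utils helper package; the exact class names, the generate() return signature, and the question.wav input file are illustrative and may differ from your installed versions.

```python
# Minimal local-inference sketch for Qwen2.5-Omni (assumptions noted above;
# "question.wav" is a hypothetical local audio file).
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info

MODEL_ID = "Qwen/Qwen2.5-Omni-7B"
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained(MODEL_ID)

# A mixed audio + text turn; images and video follow the same content schema.
# The model card pins a specific system prompt to enable speech output.
conversation = [
    {"role": "system", "content": [{"type": "text", "text": (
        "You are Qwen, a virtual human developed by the Qwen Team, Alibaba "
        "Group, capable of perceiving auditory and visual inputs, as well as "
        "generating text and speech."
    )}]},
    {"role": "user", "content": [
        {"type": "audio", "audio": "question.wav"},
        {"type": "text", "text": "Please answer the question asked in the clip."},
    ]},
]

text = processor.apply_chat_template(
    conversation, add_generation_prompt=True, tokenize=False
)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=True)
inputs = processor(
    text=text, audio=audios, images=images, videos=videos,
    return_tensors="pt", padding=True,
).to(model.device)

# The "Thinker" produces text tokens while the "Talker" produces a waveform,
# so generate() returns both in the reference integration.
text_ids, audio = model.generate(**inputs, use_audio_in_video=True)
print(processor.batch_decode(text_ids, skip_special_tokens=True)[0])
sf.write("reply.wav", audio.reshape(-1).detach().cpu().numpy(), samplerate=24000)
```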

The Qwen team believes this type of omni model is key for the future of AI agents. While this is still just the 7B version, it's impressive to see this level of multimodality in an open model.

Head over to Qwen Chat, toggle the new voice & video chat button, and experience it!
