DeepSeek-VL2 is a family of open-source vision-language models with strong multimodal understanding, powered by an efficient MoE architecture. You can easily test them out with the new Hugging Face demo.
DeepSeek made waves with its R1 language model, but its multimodal capabilities (especially image understanding) have lagged behind.
That is changing fast. DeepSeek-VL2, their new open-source family of Mixture-of-Experts (MoE) vision-language models, is a big step forward: the MoE design lets it reach strong performance while activating only a fraction of its total parameters for each token.
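To make the "activated parameters" idea concrete, here is a minimal top-k routing sketch in PyTorch. This is purely illustrative, not DeepSeek-VL2's actual implementation (the class name, sizes, and routing details are made up): it just shows why only a small subset of the weights runs per token.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy MoE layer: a router picks top_k of num_experts per token,
    so only those experts' parameters are 'activated' for that token."""

    def __init__(self, dim: int = 64, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Score every expert, keep only the top_k per token.
        scores = self.router(x)                         # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # (tokens, top_k)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

tokens = torch.randn(16, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([16, 64])
```

With 8 experts and top-2 routing, each token touches roughly a quarter of the expert weights, which is the basic trick behind MoE's small activated parameter count.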
And the exciting news: there is a new Hugging Face Spaces demo, so you can try these models without deploying them yourself (running them locally normally takes more than 80GB of GPU memory, which is out of reach for most of us).
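If you'd rather poke at the demo from code instead of the browser, Gradio Spaces can usually be queried with the gradio_client package. A minimal sketch, where the Space id below is an assumption (check the actual Space page for the real id, and use view_api() to discover its endpoints):

```python
from gradio_client import Client

# Hypothetical Space id; replace with the id shown on the demo's page.
client = Client("deepseek-ai/deepseek-vl2-small")

# Prints the demo's actual endpoints and their parameters,
# so you know what to pass to client.predict().
client.view_api()
```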
So check it out, and see what DeepSeek does next to surprise everyone :)