Qwen3 is here! It's the latest family of open-weight large language models just released by the Alibaba Qwen team. This is a significant drop, including six dense models (0.6B to 32B) and two MoE models (30B & 235B).
A really interesting feature across these models is the Hybrid Thinking Mode: you can either let the model respond quickly, or have it work through a deeper, step-by-step reasoning process before answering, giving you flexibility between speed and thoroughness.
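If you want to try the toggle yourself, here's a minimal sketch using Hugging Face transformers, following the usage pattern from the Qwen3 model cards. The `enable_thinking` flag and the `Qwen/Qwen3-4B` checkpoint name are taken from those cards; the prompt is just an example.

```python
# Minimal sketch: toggling Qwen3's thinking mode via transformers.
# Assumes the Qwen/Qwen3-4B checkpoint; other Qwen3 sizes work the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many primes are there below 100?"}]

# enable_thinking=True -> the model emits a <think>...</think> reasoning block
# before its answer; enable_thinking=False -> it answers directly, trading
# depth for speed.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```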
Performance looks very competitive. The flagship 235B MoE is positioned against top models like DeepSeek-R1 and o3-mini, while even the smaller dense models show strong results, with the 4B apparently rivaling their previous 72B Instruct model.
They've focused on improving coding, math, and agent capabilities across the board, along with broader multilingual support.
You can try them directly in Qwen Chat (web and app) or run them locally via tools like Ollama.
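For the local route, here's a quick sketch using the `ollama` Python client. It assumes the Ollama server is running and that you've already pulled a Qwen3 tag (the `qwen3:8b` tag here is an assumption; check the Ollama library for the sizes actually available):

```python
# Minimal sketch: chatting with a locally pulled Qwen3 model via Ollama.
# Assumes `ollama serve` is running and `qwen3:8b` has been pulled.
import ollama

response = ollama.chat(
    model="qwen3:8b",
    messages=[{"role": "user", "content": "Write a haiku about mixture-of-experts."}],
)
print(response["message"]["content"])
```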