Phi-4-Reasoning-Vision-15B is a compact open-weight multimodal model built on a mid-fusion architecture. By balancing fast direct perception with deep chain-of-thought, it makes building capable computer-use agents and solving complex math problems far more efficient.
Phi-4-Reasoning-Vision-15B is Microsoft's new 15B open-weight model that makes multimodal reasoning feel much more efficient.
It was trained on 200B multimodal tokens, handles high-res screens well, and stays direct on simpler tasks while switching into deeper reasoning when needed.
Looks especially strong for math, science, and computer-use agents. Weights on HF.
The GUI agent angle is what makes this really compelling. A 15B model that can handle high-res screens well enough for computer-use tasks is a big deal for anyone building browser automation or testing tools. The adaptive reasoning depth -- going direct on simple perception but switching to chain-of-thought for harder problems -- seems like the right tradeoff for latency-sensitive agent loops. Have you seen benchmarks on how it compares to larger models specifically on GUI grounding tasks?
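That adaptive-depth tradeoff can be sketched as a tiny routing layer in the agent loop. This is a hypothetical illustration, not the model's actual mechanism: the `choose_mode` heuristic and prompt wording are assumptions, and the real model presumably decides depth internally.

```python
# Hypothetical sketch of an adaptive-depth agent loop: send simple
# perception queries straight through, and only pay the latency cost of
# chain-of-thought prompting when the task looks hard. The marker list
# and prompt templates are illustrative assumptions, not the model's API.

def choose_mode(task: str) -> str:
    """Crude heuristic router: flag tasks that mention math or
    multi-step goals as needing chain-of-thought."""
    hard_markers = ("prove", "solve", "plan", "multi-step", "why")
    return "cot" if any(m in task.lower() for m in hard_markers) else "direct"

def build_prompt(task: str) -> str:
    """Wrap the task in a depth-appropriate instruction."""
    if choose_mode(task) == "cot":
        return f"Think step by step, then answer.\n\nTask: {task}"
    return f"Answer concisely.\n\nTask: {task}"
```

For a latency-sensitive browser-automation loop, the point is that most steps ("click the OK button") stay on the fast path, and only the occasional planning or math step triggers the slower reasoning mode.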