Aya Vision

Multilingual, Multimodal AI from Cohere

Aya Vision, from Cohere For AI, is a family of open-weights, multilingual, multimodal models (8B & 32B). It outperforms larger models on multilingual vision tasks and is available on Hugging Face and Kaggle.

Top comment

Hi everyone!

Check out Aya Vision, a new set of open-weights models from Cohere For AI and a significant step towards making AI truly global! Most vision-language models are heavily biased towards English. Aya Vision tackles this head-on by supporting 23 languages spoken by over half the world's population.

Here's why it's important:

🌍 Multilingual by Design: Excels at understanding and generating text across a wide range of languages.
🖼️ Multimodal: Handles both images/videos and text.
🚀 Outperforms Larger Models: Cohere claims the Aya Vision models (8B and 32B) outperform models many times their size (like Llama 3 90B!) on multilingual multimodal tasks.
🔓 Open Weights: Available on Hugging Face and Kaggle (see the loading sketch after this list).
📱 Free on WhatsApp: You can even try Aya for free on WhatsApp!
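
Since the weights are open, loading them should follow the standard Hugging Face transformers image-text-to-text flow. Below is a quick, unofficial sketch: the repo id "CohereForAI/aya-vision-8b", the placeholder image URL, and the exact processor calls are my assumptions, so check the model card for the confirmed snippet.

```python
# Sketch: loading Aya Vision 8B via Hugging Face transformers (a recent
# transformers version is assumed). The repo id and chat-template details
# below are assumptions; verify against the official model card.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "CohereForAI/aya-vision-8b"  # assumed repo id

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

# One multilingual, multimodal turn: an image plus a non-English question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/street-sign.jpg"},  # placeholder
            {"type": "text", "text": "¿Qué dice este letrero?"},  # "What does this sign say?"
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(processor.tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```

Swapping in the 32B repo id should be the only change needed for the larger model.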

They're also releasing the Aya Vision Benchmark, a new benchmark built specifically for evaluating multilingual multimodal performance. The goal is to build AI that understands the nuances of different cultures and languages, not just to add more of them.