
Selene 1

Evaluate your AI app with the most accurate LLM Judge

API
Developer Tools
Artificial Intelligence

Selene 1 is an LLM-as-a-Judge that evaluates AI responses with human-like precision. Get eval scores and actionable feedback via our API to boost your AI's reliability. Measure what matters to you by building custom evals in our Alignment Platform.
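
As a rough sketch of that workflow, a scoring request to an LLM-judge API could look like the example below. The endpoint URL, payload fields, and ATLA_API_KEY environment variable are illustrative assumptions for demonstration, not the documented Atla API; check the official docs for the real interface.

```python
# Illustrative sketch only: the endpoint URL, payload fields, and environment
# variable below are assumptions for demonstration, not Atla's documented API.
import os

import requests

API_URL = "https://api.example.com/v1/eval"  # hypothetical endpoint
API_KEY = os.environ["ATLA_API_KEY"]         # hypothetical env var name


def score_response(user_input: str, model_output: str, criteria: str) -> dict:
    """Ask an LLM judge to score one model output against your own criteria."""
    payload = {
        "input": user_input,      # what the user asked
        "output": model_output,   # what your AI app answered
        "criteria": criteria,     # what "good" means for your use case
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"score": 4, "critique": "..."}


result = score_response(
    user_input="What is the capital of France?",
    model_output="The capital of France is Paris.",
    criteria="Score 1-5 for factual accuracy and completeness.",
)
print(result)
```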

Top comment

Hey Product Hunt! Maurice here, CEO and co-founder of Atla. 


At Atla, we’re a team of researchers and engineers dedicated to training models and building tools that monitor AI performance. 


If you’re building with AI, you know that good evals are critical to ensuring your AI apps perform as intended.

Turns out, getting accurate evals that assess what matters for your use case is challenging. Human evaluations don’t scale and general-purpose LLMs are inconsistent evaluators. We’ve also heard that default eval metrics aren’t precise enough for most use cases, and prompt engineering custom evals from scratch is a lot of work. 

🌖 Our solution

  • Selene 1: an LLM Judge trained specifically for evals. Selene outperforms all frontier models (OpenAI’s o-series, Claude 3.5 Sonnet, DeepSeek R1, etc.) across 11 benchmarks for scoring, classifying, and pairwise comparisons (see the pairwise sketch after this list).

  • Alignment Platform: a tool that helps users automatically generate, test, and refine custom evaluation metrics with just a description of their task, little-to-no prompt engineering required.
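
As referenced above, here is a minimal sketch of what a pairwise comparison request could look like, following the same illustrative pattern as the scoring example earlier. The endpoint, payload fields, and ATLA_API_KEY variable are assumptions, not the documented API.

```python
# Illustrative sketch of a pairwise comparison; the endpoint and payload
# fields are assumptions for demonstration, not Atla's documented API.
import os

import requests

API_URL = "https://api.example.com/v1/eval/pairwise"  # hypothetical endpoint
API_KEY = os.environ["ATLA_API_KEY"]                   # hypothetical env var

payload = {
    "input": "Summarize this support ticket in one sentence.",
    "output_a": "Customer cannot log in after yesterday's app update.",
    "output_b": "The customer is unhappy.",
    "criteria": "Prefer the summary that is more complete and faithful.",
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"winner": "A", "critique": "..."}
```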


🛠️ Who is it for?
Builders of GenAI apps who need accurate and customizable evals—whether you’re fine-tuning LLMs, comparing outputs, or monitoring performance in production. Evaluate your GenAI products with Selene and ship with confidence.

You can start with our API for free. Our Alignment Platform is available for all users.

We’d love your feedback in the comments! What challenges have you faced with evals?

Comment highlights

Love the idea of an AI judge that outperforms leading frontier models. Consistency in AI evals has been a major gap, and this seems like a much-needed solution.

🚀 Congrats on launching Selene 1 on Product Hunt! 🎉 This looks like a game-changer for anyone building with AI—finally, an evaluation tool that’s both accurate and scalable.

I love how Selene 1 tackles the inconsistency of general-purpose LLMs as evaluators. The fact that it outperforms models like GPT-4o and Claude 3.5 Sonnet across multiple benchmarks is super impressive! 👏

One question: Since evaluations can be very domain-specific, have you considered allowing users to fine-tune Selene 1 itself for their niche use cases? That could add another layer of customization for teams working in highly specialized fields.

Excited to see how this evolves! 🚀

A new idea, I must say: conclusions with a more human touch instead of generic default replies. All the best for the launch.

Reliable AI evaluation is a huge challenge, and Selene 1 looks like a major step forward in making AI performance more measurable and scalable.

Too many AI tools, outputs, and constant tweaks can be a lot, especially when you're racing to launch. Having precise evaluations without handcrafting endless prompts sounds like a dream. I like the idea of freeing myself up to actually focus on strategy instead of chasing down inconsistencies. Super intrigued!

Keeping AI performance in check is no small task, and having an evaluator specifically trained for this sounds like a game-changer! How does Selene handle nuanced tasks where context is key—does it adapt based on different use cases?

Hey Product Hunt, Kyle here from the Atla team.


We created Selene 1 + the Alignment Platform so AI dev teams can quickly and accurately make informed decisions about system changes, such as updating your base model, your retriever, or your prompts. Your applications are designed to serve real users, and effective evals should represent their preferences.


For those who want to dive straight into the code, we've set up tutorial notebooks covering the most popular use cases we've seen. These run directly on public datasets with human annotations for demonstration, but feel free to swap in your own data.
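
As a rough illustration of the kind of check those notebooks walk through, the sketch below compares judge scores with human annotations on a small labeled dataset and reports their agreement. The example records, field names, and the specific metrics are assumptions for demonstration, not taken from the actual notebooks.

```python
# Illustrative sketch of checking a judge against human annotations.
# The example data and field names are made up for demonstration.
from statistics import correlation  # Python 3.10+: Pearson correlation

# Each record pairs a human-annotated score with the score the judge returned.
records = [
    {"human_score": 5, "judge_score": 5},
    {"human_score": 2, "judge_score": 3},
    {"human_score": 4, "judge_score": 4},
    {"human_score": 1, "judge_score": 1},
    {"human_score": 3, "judge_score": 4},
]

human = [r["human_score"] for r in records]
judge = [r["judge_score"] for r in records]

# Exact agreement: how often the judge matches the human label exactly.
exact = sum(h == j for h, j in zip(human, judge)) / len(records)

# Pearson correlation: do judge scores move with human scores overall?
pearson = correlation(human, judge)

print(f"Exact agreement: {exact:.2f}")
print(f"Pearson correlation: {pearson:.2f}")
```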


You can also find our full docs here. Happy building!

Love the thought and effort behind this. Hope it finds its audience and makes an impact!