
Snowglobe

Simulate real users to test your AI before launch

Artificial Intelligence
Security

Snowglobe is a simulation environment for LLM teams to test how their applications respond to real-world user behavior. Run full workflows through realistic scenarios, catch edge cases early, and confidently improve before deploying to production.

Top comment

🫡 Hi Hunters, I’m Shreya, co-founder of Snowglobe (by Guardrails)!

If you’ve built AI agents, you know how challenging it is to test them. How do you even begin formulating a test plan for a technology whose input space is infinite?

Most teams fall back on a small ‘golden’ dataset, maybe 50 to 100 hand-picked examples.

It takes ages to put together, and even then, it only covers the happy paths, missing the messy reality of real users.

That’s how you end up with an agent that’s perfect in development, but starts hallucinating, going off-topic, or breaking policies as soon as it meets real-world scenarios.

🔮 Snowglobe fixes this with a high-fidelity simulation engine for conversational AI agents!
We build realistic personas and run them through thousands of simulated conversations with your AI agent BEFORE you go to production.
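
Conceptually, the loop looks something like the rough sketch below. This is just an illustration of the idea, not our actual API: the persona prompt, the model name, and the agent_reply hook are all placeholders. A persona model plays the user, your agent answers, and the full transcript is captured for review.

```python
# Rough sketch of a persona-driven simulation loop (illustrative only --
# the persona prompt, model name, and agent_reply hook are placeholders).
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Dana, a frustrated customer who was double-charged. "
    "You write short, impatient messages and drift off-topic when ignored."
)

def simulated_user_turn(transcript: list[dict]) -> str:
    """Generate the persona's next message, given the conversation so far."""
    # Flip roles so the persona model "speaks" as the user and sees the
    # agent's replies as incoming messages.
    flipped = [
        {"role": "user" if m["role"] == "assistant" else "assistant",
         "content": m["content"]}
        for m in transcript
    ]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[{"role": "system", "content": PERSONA}, *flipped],
    )
    return resp.choices[0].message.content

def agent_reply(transcript: list[dict]) -> str:
    """Placeholder for the agent under test -- wire in your own chatbot here."""
    raise NotImplementedError

def run_simulation(max_turns: int = 6) -> list[dict]:
    transcript: list[dict] = []
    for _ in range(max_turns):
        transcript.append({"role": "user", "content": simulated_user_turn(transcript)})
        transcript.append({"role": "assistant", "content": agent_reply(transcript)})
    return transcript  # review the transcript or feed it into your evals
```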

Our customers are already generating tens of thousands of simulated conversations with Snowglobe, compressing what used to take weeks of manual scenario hand-crafting and catching potential issues before their users do.

🦾 Why we built this

Before Snowglobe, I spent years building self-driving cars at Apple. The challenge with LLMs is surprisingly similar to self-driving cars: a huge input space, and high stakes when something fails.

In that world, we used high-fidelity simulation engines to test cars in even the riskiest and rarest scenarios.

Waymo, for example, logged 20+ million miles on real roads, but 20+ BILLION miles in simulation before launch.

Our goal was simple: bring that same simulation-first mindset to AI agents, starting with AI chatbots.

💅 How we’re different

→ We perform rich persona modeling to ensure a high degree of realism and diversity in the scenarios we generate. This is not the same as asking ChatGPT to generate synthetic data for you, where everything ends up sounding like it was written in the same ChatGPT voice.

→ We simulate full conversations, not just one-off prompts.

→ We ground scenarios in your agent’s context so they’re relevant to your use case.

→ Unlike conventional red-teaming tools, we test normal-user behaviors as well as edge cases.

→ You can export scenarios straight to 🤗 Hugging Face or your favorite eval & tracing tool.
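
To make that export point concrete, here's a minimal sketch of pushing simulated conversations to the Hugging Face Hub with the datasets library. The record schema and repo name are illustrative, not our exact export format:

```python
# Minimal sketch: publish simulated conversations as a Hugging Face dataset.
# Schema and repo name are illustrative, not Snowglobe's exact export format.
from datasets import Dataset

records = [
    {
        "persona": "Frustrated customer who was double-charged; short, impatient messages",
        "scenario": "Refund request that escalates after a policy misunderstanding",
        "messages": [
            {"role": "user", "content": "why was I charged twice??"},
            {"role": "assistant", "content": "Sorry about that! Let me look into your account."},
        ],
    },
]

ds = Dataset.from_list(records)
ds.push_to_hub("your-org/simulated-conversations")  # requires a logged-in HF token
```

From there, the same dataset can be loaded into whatever eval or tracing tool you already use.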

🫶 Who is this for?

If you’re building a conversational AI agent and:

→ You’re stuck testing with a tiny dataset,

→ You want to create a dataset for finetuning your LLM,

→ You’re spending too much time creating test sets manually, or

→ You want to run QA or pentesting before launch,

you should give Snowglobe a try.

💸 Snowglobe.so is live and ready for use!

Product Hunt fam gets $25 worth of free simulated conversation generation with the code PH25 (in addition to the $25 in free credits you get at the start).

We’ve poured a lot of engineering, research, and, most importantly, love into this project.

We’re excited to see how we can help you test your chatbots better! 🙌

Comment highlights

Congrats @shreya_rajpal and team!!!

I found Snowglobe really helpful for testing my AI project. The simulated user interactions felt real, and it helped me improve my product before launch. Highly recommended!

Strong positioning! Simulating real users pre-launch is a big gap in AI QA/security workflows.

The PQL would be - create first scenario → run first simulation job → review results in dashboard, right?

If you build out usage-based plans, I have a couple of suggestions, if you don't mind:

• Run 1–2 shadow-billing cycles before go-live so sim job counts and minutes align with what customers expect to see on invoices.

• Add a gentle in-app meter with 50/80/100% alerts and a one-click ‘lock spend’—protects trust while encouraging healthy upgrades.

Let me know what you think!

Amazing job with the product! Looks like exactly what I would want for pen testing agents before pushing to prod :)

Wow, Snowglobe looks like a game-changer for LLM testing! I'm super excited to try it out and simulate real-world user scenarios to catch those pesky edge cases before launch. This could save so much time and boost confidence in our deployments. Can't wait to dive in!

This is an awesome idea. Lets you check how the app might perform without the reputation risk of dealing with real people. It gives you a chance to work out the kinks before you launch.

We've been struggling with scaling our testing beyond manual scenarios. This is a day-one install for our team. Let's gooo!

Looks great — we need to test the LLM to make its output more controllable.

Congrats on the launch — Snowglobe's persona-first approach sounds promising

Really impressed by how Snowglobe makes AI agent testing so much easier and more accurate. Great launch!

That sounds quite useful for basic agent scenarios, congrats on the launch!

I guess it's not suited for complex agents running on a custom backend with multiple tools? @shreya_rajpal

Interesting. It's like custom evals for stress testing. Nice work and congrats on the launch!

This is incredible! I've worked on similar tools for DeepMind & YouTube.

How much control do users have over personas? Can I effectively write 'system prompts' to configure different personas & then have them test AI agents?
For example, I want to make my agent safe to deploy to teens – want to create a number of teen & teen-risk related personas to red team the agent. From the website/demo this doesn't seem to be configurable at the moment.

Love this launch & direction!

Congratulations on your launch. This solution is absolutely needed when developing AI applications.

Congrats on the launch! What information is needed to create the correct user personas for testing? Can you get started with relatively limited data, or is it better with a larger user set?

I remember that hilarious Chevrolet chatbot that sold a car for $1 lol

This is dope!

Congrats on the launch! This will make catching edge cases way less painful.