As AI agents grow more complex (reasoning, using tools, and making decisions), traditional evals fall short. LangWatch Scenario simulates real-world interactions to test agent behavior. It's like unit testing, but for AI agents.
Love this shift: treating agents like software just makes sense. Do teams use it more pre- or post-deploy?
Congrats @manouk_dr & team, this is a huge step forward for reliable agent development. Feels like the missing test layer for AI. 👏
Congrats for the launch 🚀
Scenario testing seems like a game changer for the non-deterministic nature of AI. It's very cool to see testing and quality tools finally emerging for this new wave of agent-based systems.
I've known @r0bertp3rry since 2016, and he's always been an enthusiast of the ML field. I remember a chatbot demo of his from a while back, when it wasn't even something everyone talked about. So it's awesome to see him building in this space now.
Huge congrats to the team! 👏
Evals and quick testing of agents are much needed. Will give this product a go. Congrats on the launch!
Hello everyone! 👋
I'm Rogerio, founder of LangWatch. I've been developing software for 15+ years, and my career really changed once I mastered unit tests, TDD, and so on: not only delivering mission-critical software with zero bugs, but also having a much more pleasant experience doing so.
So I couldn't be more excited about the Agent Simulations solution we're bringing to the world today. It feels like the missing piece in delivering agents, finally bringing much stronger craftsmanship to agent development.
I'll be your technical guide here, ask me anything!
About LangWatch Scenario - Agent Simulations on Product Hunt
“Agentic testing for agentic codebases”
LangWatch Scenario - Agent Simulations launched on Product Hunt on June 26th, 2025 and earned 229 upvotes and 21 comments, placing #10 on the daily leaderboard.
LangWatch Scenario - Agent Simulations was featured in Open Source (68.3k followers), Artificial Intelligence (466.2k followers) and Development (5.8k followers) on Product Hunt. Together, these topics include over 100.6k products, making this a competitive space to launch in.
Who hunted LangWatch Scenario - Agent Simulations?
LangWatch Scenario - Agent Simulations was hunted by Manouk Draisma. A "hunter" on Product Hunt is the community member who submits a product to the platform, uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Reviews
LangWatch Scenario - Agent Simulations has received 4 reviews on Product Hunt with an average rating of 5.00/5. Read all reviews on Product Hunt.
Hey Product Hunt! 👋
We're excited to be launching LangWatch Scenario, the first and only testing platform that lets you test agents in simulated realities, with confidence and alongside domain experts.
The problem we've found: teams are building increasingly complex agents, but testing them is still manual, time-consuming, and unreliable. You tweak a prompt, manually chat with your agent, hope it works better... and repeat. It's like shipping software without unit tests.
Our solution: agent simulations that automatically test your AI agents across multiple scenarios. Think of it as a test suite for agents: catch regressions before they hit production, simulate edge cases collaboratively with domain experts, and ship with confidence (see the code sketch below the list).
What makes us different:
🧠 Agent simulations that act as unit tests for AI agents
🧪 Simulate multi-turn, edge-case scenarios
🧑‍💻 Code-first, no lock-in, framework-agnostic
👩‍⚕️ Built for domain experts, not just devs
🔍 Catch failures before users see them
✅ Trust your agent in production, not just in evals
🏗️ Works with any agent framework (LangGraph, CrewAI, etc.)
LangWatch Scenario is our latest breakthrough, letting teams ship agents with confidence, not crossed fingers.
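To give a feel for what "unit tests for agents" means in practice, here is a minimal sketch of a simulated scenario written as a pytest test. The API names used below (scenario.run, scenario.AgentAdapter, scenario.UserSimulatorAgent, scenario.JudgeAgent) are assumptions drawn from the project's README at the time of writing, and my_support_agent is a hypothetical stand-in for your own agent; check the GitHub repo for the current interface.

```python
# Hedged sketch of an agent simulation test. API names are assumptions based
# on the project's README; see https://github.com/langwatch/scenario for the
# current interface. Requires pytest and pytest-asyncio.
import pytest
import scenario

# Hypothetical stand-in for your own agent implementation.
from my_app import my_support_agent


class SupportAgent(scenario.AgentAdapter):
    """Adapts your agent so the simulator can call it turn by turn."""

    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        return my_support_agent(input.messages)


@pytest.mark.asyncio
async def test_refund_request_edge_case():
    # Run a multi-turn simulation: a simulated user plays out the scenario
    # against your agent, and a judge agent checks the criteria at the end.
    result = await scenario.run(
        name="refund request",
        description="An angry user demands a refund for an order that was already delivered.",
        agents=[
            SupportAgent(),
            scenario.UserSimulatorAgent(),
            scenario.JudgeAgent(
                criteria=[
                    "Agent stays polite under pressure",
                    "Agent explains the refund policy before escalating",
                    "Agent does not promise a refund it cannot grant",
                ]
            ),
        ],
    )

    # Fails like any unit test if the judge rejects the transcript.
    assert result.success
```

Because the whole thing runs under pytest, scenarios like this drop straight into CI, which is how regressions get caught before they hit production.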
Get started today:
⭐ GitHub: https://github.com/langwatch/scenario
📖 Docs: https://docs.langwatch.ai/
🎮 Try Agent Simulations: https://langwatch.ai/
If you're building and testing AI agents, we'd love to hear what you're working on and how we can help.
A big thanks to the PH community for all your feedback and support.
We're here all day and can't wait to hear your thoughts, questions, and feedback!