Yooo Product Hunters!
I built Papercuts because I think most testing scripts are blind. They check the DOM, but they don't actually see if the UI is broken for the user.
Modern apps are way too complex for brittle selectors. I believe the only way to catch what users actually hit is to test in production, with AI agents that perceive and navigate the page the way a human would.
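To make the "blind tests" point concrete, here's a toy sketch (hypothetical HTML, not Papercuts code): a DOM-level assertion stays green even though the button is invisible to a real user.

```python
# Toy illustration: a DOM check can pass while the rendered UI is broken.
html = '<button id="checkout" style="opacity:0">Buy now</button>'

# A selector-based test only asks: "is the element in the DOM?"
assert 'id="checkout"' in html  # passes -- CI is green

# But a user sees nothing: the button is rendered fully transparent.
def is_visible(style: str) -> bool:
    """Crude visibility check; a real renderer considers far more than this."""
    return "opacity:0" not in style and "display:none" not in style

assert not is_visible("opacity:0")  # the check a DOM-only test never makes
```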
Let me know what you think!
Testing with real user flows is where hidden state and edge cases finally show up. While building GTWY, we've seen how valuable that signal becomes once agents interact with production systems.
Love the vision-based approach here! Traditional DOM selectors are indeed brittle and miss the actual user experience. The fact that your agents can handle dynamic forms and conditional logic without hardcoded selectors is exactly what production testing needs.
@sayuj_suresh Your point about tests being "blind" resonates strongly. We've seen this pattern where CI is green but users hit real issues in production. Vision-based agents that can adapt to UI changes are the future of testing.
One question: How do you handle authentication flows and state management across test runs? For complex SaaS apps with multi-tenant architectures, maintaining proper test isolation while simulating real user sessions can be tricky.
Most teams already have some mix of Playwright/Cypress tests plus APM/RUM—what’s the clearest line you draw between those and Papercuts, and what’s the switching trigger that makes it worth adding (or replacing) another layer?
Congrats on the launch! This looks super useful.
As the founder of Dashform, I know that complex, multi-step forms are often the hardest part to test reliably.
Small question: how does your agent handle dynamic form fields or conditional logic (e.g., fields that only appear after a specific selection)? Does it adapt well if the DOM changes slightly?