Product Thumbnail

theORQL

Cursor for frontend. Build and debug in Chrome and VS Code.

Software Engineering
Developer Tools
GitHub

theORQL is vision-enabled frontend AI. It takes UI screenshots, maps UI → code, triggers real browser interactions, and visually verifies the fix in Chrome before shipping a reviewable diff — so UI fixes land right the first time. 1200+ downloads to date. Download free for VS Code and Cursor.

Top comment

Hey Product Hunt!!!

We built theORQL because most AI coding tools are blind: they generate code that looks right in text, but renders wrong in the browser.

theORQL closes the loop between your UI and your codebase:

  • takes screenshots of the UI (full page + elements)

  • reads DOM + computed styles + network + console

  • maps a UI element to the owning component (via source maps)

  • applies a change, visually verifies it in the browser, then gives you a reviewable diff (no auto-commit)
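For intuition, the loop the bullets above describe can be sketched in a few lines. This is a hedged illustration with stubbed stand-ins (`Evidence`, `render`, `apply_fix`, `looks_correct` are all hypothetical names, not theORQL's actual API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Evidence:
    screenshot: bytes          # rendered pixels (stubbed here)
    console_errors: list[str]  # runtime signals captured from the browser

def verify_loop(render: Callable[[], Evidence],
                apply_fix: Callable[[], None],
                looks_correct: Callable[[Evidence], bool],
                max_attempts: int = 3) -> bool:
    """Apply a candidate fix, re-render, and visually confirm it landed."""
    for _ in range(max_attempts):
        before = render()
        if looks_correct(before):
            return True            # nothing to fix
        apply_fix()                # propose a change (reviewable diff, no commit)
        after = render()           # re-render the UI
        if looks_correct(after) and not after.console_errors:
            return True            # fix visually confirmed
    return False                   # give up and hand back to the developer
```

The key design point is that success is judged on the rendered output and runtime signals, not on whether the generated code merely looks plausible as text.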

If you try it, what should we focus on next: layout/CSS issues, state bugs, or flaky/hard-to-repro bugs?


And what’s one workflow you’d pay to never do manually again?

Comment highlights

Wow we're so humbled by all the outreach and support! Thank you to all our users, commenters, and special thanks to @fmerian for hunting theORQL!

AI made coding faster, but debugging is still stuck in the past. After 10+ years as a software engineer, one thing hasn’t changed: debugging is where most of the real time is lost.

The ability to capture runtime errors directly from Chrome:

• stack traces with real values

• DOM & component state

• network failures

• user interactions

is impressive. Highly recommend this tool!
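The runtime signals listed above could be bundled into a single evidence record, roughly like this. The field names are illustrative assumptions, not theORQL's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class RuntimeEvidence:
    """Illustrative bundle of runtime signals captured at failure time
    (field names are assumptions, not theORQL's real data model)."""
    stack_trace: list[str]        # frames with real values, not just symbols
    dom_snapshot: str             # serialized DOM at the moment of failure
    component_state: dict         # e.g. React props/state for the component
    network_failures: list[dict] = field(default_factory=list)  # failed requests
    interactions: list[str] = field(default_factory=list)       # clicks, inputs

    def summary(self) -> str:
        return (f"{len(self.stack_trace)} frames, "
                f"{len(self.network_failures)} failed requests, "
                f"{len(self.interactions)} user actions")
```

Grouping these signals together is what makes a bug report reproducible: the stack trace says where it broke, and the interactions and network failures say how to get there again.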

The vision-based verification loop is what makes this stand out. I spend way too much time on the "tweak CSS, refresh, check, repeat" cycle — having something that can actually see the rendered output and confirm the fix landed correctly before I commit sounds like it'd save me hours every week. Curious how it handles responsive layout bugs across breakpoints.

This is exactly what frontend debugging needs. Being able to see the UI context while coding eliminates so much back-and-forth. How are you handling component state inspection — can it show React state/props in real-time alongside the visual?

This is such a unique take on frontend dev, especially for backend developers like me.

Such a helpful project for developers. Really like using it.

Congratulations on the launch 🎉

I can't think of a better debugging tool than this: you simply stay in your browser and the tool does the debugging.

Been using it for a while now and really appreciate the good work from the team.

I'm very keen to try this. Do you think it would have a problem with more complex UI flows that use gestures (click and hold, etc.)? I've been working with React Flow for a node interface, and debugging problems with that library is such a pain, especially when it comes to adding features like drag and drop. Would love to hear anyone's experience with this.

The problem isn’t “AI can’t code frontend.” It’s that most AI is blind. It can only guess from text and patterns, then hope the UI renders the way you meant.

I've been using theORQL for the last couple of months. I've actually written some articles and created some videos about it as well, but now I'm very impressed with 2 of the new features:

  1. Vision: theORQL can actually see the UI (screenshots) and verify changes in Chrome

  2. Auto Repro → Fix → Verify loop for the really tough bugs (theORQL will actually click buttons, resize the page, fill forms, etc., to reproduce bugs and fix them)

Debugging is the proof case. If you can reproduce a bug, you can fix it; the hard part is getting to a stable repro and the right evidence.

theORQL runs an Auto Repro → Fix → Verify loop: trigger the UI flow (clicks, fills, resizes), capture evidence (screenshots + runtime signals), propose a fix, then re-run and visually confirm it’s gone.
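The Auto Repro → Fix → Verify loop described above can be sketched against a fake page object. This is a minimal illustration under stated assumptions: `FakePage`, `Step`, `bug_present`, and `apply_fix` are hypothetical stand-ins, not theORQL's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class FakePage:
    """Minimal stand-in for a live browser page."""
    width: int = 1280
    fields: dict = field(default_factory=dict)
    clicked: list = field(default_factory=list)

    def click(self, selector: str):
        self.clicked.append(selector)

    def fill(self, selector: str, value: str):
        self.fields[selector] = value

    def resize(self, width: int):
        self.width = width

@dataclass
class Step:
    action: str   # "click" | "fill" | "resize"
    target: str
    value: str = ""

def reproduce(page: FakePage, steps: list[Step]) -> None:
    """Replay the recorded UI flow to trigger the bug deterministically."""
    for s in steps:
        if s.action == "click":
            page.click(s.target)
        elif s.action == "fill":
            page.fill(s.target, s.value)
        elif s.action == "resize":
            page.resize(int(s.value))

def repro_fix_verify(steps, bug_present, apply_fix, max_attempts=3) -> bool:
    """Reproduce on a fresh page, check the bug, patch, and re-verify."""
    for _ in range(max_attempts):
        page = FakePage()
        reproduce(page, steps)       # 1. reproduce: clicks, fills, resizes
        if not bug_present(page):
            return True              # 3. verify: the bug is gone
        apply_fix()                  # 2. propose a fix (reviewable diff)
    return False
```

Replaying the flow on a fresh page each attempt is what makes the repro stable: the verification step sees exactly the state the original bug report produced, plus the candidate fix.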

It’s not autonomous chaos. It ships a reviewable diff and never auto-commits. Developers stay in control.

In conclusion:

⚠️ What makes this different from Copilot/Cursor: they’re great at text-in/text-out. theORQL is UI-in/code-out, because it can actually see what rendered.

🔑 What this unlocks: faster frontend iteration, fewer “tweak → refresh” loops, and more trust that the change actually worked before you merge it.

🤝 The bet: the next step for AI dev tools isn’t bigger models. It’s closing the verification loop with vision, interaction, and real runtime evidence.

This is one of the greatest products I have ever seen on Product Hunt. Very helpful for developers like me.