Design In The Browser lets you point at any element on your website and tell AI what to change. Click a button, a heading, or select text — describe your edit in plain language, and it sends the instruction (with a screenshot) directly to Claude Code, Cursor, or Gemini CLI running in the built-in terminal. No more copying selectors or describing layouts in chat. You see it, you change it, and AI does it. Supports multi-edit queuing, responsive viewports, and your preferred code editor.
Hi everyone! I built Design In The Browser because I was frustrated with the back-and-forth of describing UI changes to AI coding tools.
I'd be staring at a button that needed to be bigger, but then I'd have to switch to the terminal and describe which button, where it was, and what component it was in, and half the time the AI would change the wrong thing. So I built a tool where you just click the element and type what you want. It sends a screenshot and selector directly to Claude Code, Cursor, or Gemini CLI running in a built-in terminal. The AI sees exactly what you see.
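For anyone curious how the "click an element, get a selector" part can work: here's a minimal sketch of building a CSS selector path from a clicked element by walking up to the nearest `id`. This is an illustration, not the tool's actual code; `buildSelector` is a hypothetical name, and the mock elements use plain arrays for `children` so it runs outside a browser.

```javascript
// Build a CSS selector path for an element: walk up parents,
// stopping at the first ancestor with an id (ids are unique).
// Assumes element-like objects with tagName, id, parentElement, children.
function buildSelector(el) {
  const parts = [];
  let node = el;
  while (node) {
    if (node.id) {
      parts.unshift(`#${node.id}`); // anchor the path at a unique id
      break;
    }
    let part = node.tagName.toLowerCase();
    const parent = node.parentElement;
    if (parent) {
      // Disambiguate among same-tag siblings with :nth-of-type.
      const sameTag = parent.children.filter((c) => c.tagName === node.tagName);
      if (sameTag.length > 1) {
        part += `:nth-of-type(${sameTag.indexOf(node) + 1})`;
      }
    }
    parts.unshift(part);
    node = parent;
  }
  return parts.join(" > ");
}

// Mock DOM: a container with two buttons, the second one clicked.
const root = { tagName: "MAIN", id: "app", parentElement: null, children: [] };
const btn1 = { tagName: "BUTTON", id: "", parentElement: root, children: [] };
const btn2 = { tagName: "BUTTON", id: "", parentElement: root, children: [] };
root.children = [btn1, btn2];

console.log(buildSelector(btn2)); // "#app > button:nth-of-type(2)"
```

In the real flow you'd pair a selector like this with a screenshot crop of the element, so the AI gets both the exact code location and the visual context.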
Let me know what you think!
Do you have, or are you planning, a browser extension?
Would be awesome for getting references on the fly. Sometimes you see a reference that perfectly fits something you have in mind for a project, and being able to capture it (in code) would be beautiful.
Just watched the demo video and it looks quite intuitive to use. Congrats on the launch!
I'm guessing it's mainly for HTML/CSS, right?
While watching the video, I was imagining myself using this instead of Figma to get Flutter code as output. I already do this with Claude, sharing images and instructions to get some UIs prototyped quickly. It would be nice to have a tool like yours to make that process faster.
Congrats on the launch! Love how Design In The Browser turns "I see it, I change it" into real AI-powered frontend edits: a perfect bridge between live UI context and coding agents.
Oh..!
Congrats! 🎉
Less explaining = fewer tokens.
Love it.
Looks awesome! Does it work with more sophisticated requests like creating icons and illustrations? Can it learn from brand style guidelines / a design system?
Can we select 5-10 different UI tweaks and have the AI process them in a single batch to save tokens?