Product Thumbnail

git-lrc

Free, unlimited AI code reviews that run on commit

Developer Tools
Artificial Intelligence
GitHub
Development

GenAI is like a race car without brakes. It accelerates fast — you describe something, and large blocks of code appear instantly. But AI agents silently break things. They remove logic. Relax constraints. Introduce expensive cloud calls. Leak credentials. Change behavior without telling you. git-lrc is your braking system. It hooks into git commit and runs an AI review on every diff before it lands.
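To make the commit-time gate concrete, here is a minimal sketch of the general pattern a `.git/hooks/pre-commit` script follows — not git-lrc's actual implementation. In a real hook the diff would come from `git diff --cached` and go to an AI reviewer; here `review_diff` is a hypothetical stand-in that just flags added lines resembling hard-coded credentials, and a non-zero exit from the hook is what aborts the commit.

```shell
#!/bin/sh
# Hypothetical sketch of a commit-time review gate (not git-lrc's code).
# A real pre-commit hook would capture the staged diff with:
#   diff=$(git diff --cached)
# and send it to an AI reviewer. "review_diff" stands in for that call,
# flagging added lines that look like hard-coded credentials.
review_diff() {
  if printf '%s\n' "$1" | grep -Eiq '^\+.*(api[_-]?key|secret|password)'; then
    echo "review: possible credential in diff" >&2
    return 1
  fi
  return 0
}

# Example staged diff a hook might see.
sample_diff='+API_KEY = "sk-live-123"'

if ! review_diff "$sample_diff"; then
  echo "commit blocked"   # in a hook: exit 1 aborts the commit
fi
```

Because git runs the hook before the commit object is created, a failing review stops risky AI-generated changes from ever landing in history.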

Top comment

git-lrc started from a practical observation within my own team.

As our usage of AI coding tools like Copilot, Cursor, etc., increased, our velocity seemingly went up—but careful checking of the AI-generated code went down.

Engineers were committing code they hadn’t truly examined.

Reviews were happening later, sometimes too late, and often superficially (because AI generates tons of code).

This led to obscure bugs and long debugging sessions in production.

Clearly, we needed a solution.

I didn’t want another dashboard. I wanted a strong nudge to review code at the right place—exactly where responsibility is bound to exist: git commit.

I prototyped git-lrc such that AI helps the developer work through diffs faster, acquire an understanding of what's going on, and fix issues on a commit-by-commit basis.

git-lrc was built with the idea that review shouldn’t be an afterthought. It should be structurally encouraged while putting the developer in control.

So in git-lrc, while a review is triggered automatically, the dev can still consciously skip the review.

Or they can manually review and "vouch" for the change they are making.

All these micro review decisions get recorded in git log for future analysis, so that the team can operate at higher engineering standards.
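One standard way to keep per-commit decisions like "skipped" or "vouched" queryable in git history is commit-message trailers. The sketch below is illustrative only — the `Review-Decision` trailer name is a hypothetical example, not git-lrc's actual format — and uses git's own `git interpret-trailers` plumbing to stamp a message:

```shell
#!/bin/sh
# Hypothetical sketch: recording a review decision as a commit trailer.
# The "Review-Decision" trailer name is an example, not git-lrc's format.
msg='Fix pagination bug'
stamped=$(printf '%s\n' "$msg" | \
  git interpret-trailers --trailer 'Review-Decision: vouched')
printf '%s\n' "$stamped"
```

Trailers recorded this way can later be filtered with, e.g., `git log --grep='Review-Decision: vouched'`, which is what makes commit-level review decisions analyzable after the fact.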

git-lrc takes 60 seconds to set up and is completely free for any number of reviews, thanks to Google Gemini's free tier.

I encourage you to give git-lrc a try and see the difference in the quality of your code as well as concrete outcomes such as reduced production bugs.

GitHub: https://github.com/HexmosTech/git-lrc

Landing Page: https://hexmos.com/livereview/git-lrc/

Comment highlights

This looks like a very interesting and timely tool for SWEs. Now that AI-generated code is becoming the norm, having a way to quickly pinpoint failures and streamline reviews is a huge time-saver. Great job on solving a modern pain point!

Congrats on the launch! I love the 'race car without brakes' analogy. Do you or will you support all kinds of linting standards?

git-lrc hits a real pain point for anyone shipping with Copilot/Cursor: it puts the “brakes” right where accountability lives—at git commit—so AI speed doesn’t turn into unreviewed code and late-stage prod firefights. Quick question: how customizable is the review policy per repo/team (e.g., different rule sets by service), and how does it stay fast and low-noise when the diff is large or the changes are mostly refactors?

Hi @shrsv ,

Loved the “race car without brakes” → “braking system” framing for git-lrc. Very intuitive.

One thought: as AI-assisted coding scales, positioning git-lrc as a CI-level guardrail or AI governance layer for teams might unlock a stronger B2B narrative beyond individual dev safety.

Curious how you’re thinking about team adoption vs solo developers.

How is this different from pre-commit checks and the optional LLM reviews that IDEs already provide? Or is this specific to terminal-based use cases? I don't really know of any terminal-based review checks other than depending on a terminal LLM utility directly. So that seems unique.

Love the idea! Curious — do you track how much time teams save per commit using AI reviews, or focus mainly on code quality improvements?

So it will review and won't change any code or try to fix it, right? It will provide git logs which we can work from if we feel the need? I am a vibe coder with zero knowledge of coding; hope the question isn't a very obvious one 🙂

So do I understand it correctly that it's sending my commits to your backend, and you provide code reviews on those?

Congratulations, very good idea. There are a lot of tools for AI programming, but testing and review are really rare. Thank you for your awareness of this need; I've really worried about the safety of code from Cursor.

Congratulations on the launch, super cool idea! Pre-commit code review sounds like the perfect time to catch the bugs from AI-gen code.

A few questions that come to mind:
- How configurable are the checks? (Can teams introduce specific domains to handle in a certain way?)
- Do you have any metrics yet on false positive rates?
- And what happens when teams ship huge diffs?

Really great product! This might be the savior for thousands who ship apps without having a clue how things work in the background. Congrats on your launch 🚀🚀🚀