cubic 2.0

Code reviews for the AI era

Software Engineering
Developer Tools
Artificial Intelligence

Over the past few months, we've completely rebuilt cubic's AI review engine. Today we're excited to announce cubic 2.0, the most accurate AI code reviewer available. cubic helps teams read, trust, and merge AI-generated code in real repos. It's optimized for accuracy and low noise, and it goes beyond PR comments with a CLI, AI docs, and PR description updates. Used by 100+ orgs, including Cal.com, n8n, Granola, and Linux Foundation projects.

Top comment

Hey Hunters, I’m Paul, the founder of cubic.

If you’ve tried AI code review tools before, you’ve probably seen both failure modes:

1. they miss the important stuff

2. they comment so much that you stop reading

We built cubic because review is now the bottleneck. AI made it easy to produce code. It did not make it easy to trust a big diff in a complex repo.

Over the last few months we’ve been iterating hard on the engine, and the change is big enough that we’re calling it cubic 2.0. It’s faster, more accurate, and noticeably less noisy than it was a few months ago.

The other thing we learned is that “a GitHub bot that comments on PRs” is not enough anymore. Review is a workflow, not a feature, so we built the pieces around it too:

- incremental checks on every push

- PR descriptions that stay accurate

- wiki docs that stay in sync

- `cubic.yaml` for config-as-code (see the sketch after this list)

- and a CLI so you can run review before you push
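To make the config-as-code idea concrete, here's a minimal sketch of what a `cubic.yaml` could look like. The key names below are illustrative assumptions, not cubic's published schema, so treat this as flavor rather than reference:

```yaml
# Hypothetical cubic.yaml sketch -- key names are illustrative
# assumptions, not cubic's documented schema.
review:
  # Suppress low-signal nitpicks; only surface likely bugs.
  severity_threshold: high
ignore:
  # Skip generated and vendored code entirely.
  - "**/*.generated.ts"
  - "vendor/**"
docs:
  # Keep the AI-maintained wiki docs in sync on every merge.
  auto_update: true
```

The point of keeping this in the repo is that review policy gets versioned and reviewed like any other code change, instead of living in a dashboard.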

If you try it, I’d love blunt feedback:

- What did it catch that you actually cared about?

- What should it stop commenting on?

I’ll be here in the comments!

Comment highlights

Wow, cubic looks amazing! The updated AI review engine sounds like a game changer. Specifically, how does it handle reviewing auto-generated code to avoid reinforcing potential biases? Super keen to try this out!

The focus on accuracy over noise makes sense—most AI reviewers I've seen lean too far in one direction. I'm curious how cubic handles codebases with mixed AI-generated and human-written code. Does it adjust review depth based on the origin of the code, or treat all changes uniformly?

Framing review as a workflow, not just a PR bot, really resonates. Curious which piece ends up being most valuable in practice: the incremental checks, the CLI, or the config-as-code?

Upvoted! We face the same struggle at Dashform—the real pain isn't syntax, but those subtle logic hallucinations that look correct at a glance.

A small question: does cubic specifically target those 'confident but wrong' errors, beyond just style checks?

Rooting for you guys! Happy to support fellow teams pushing the boundaries of AI dev tools.