Continue (Mission Control)

Quality control for your software factory

Software Engineering
Developer Tools
Artificial Intelligence

Hunted by Garry Tan

AI agents multiplied code output. Review didn't scale with it. Tests still pass, but conventions erode, security patterns slip, and your codebase starts feeling like it was written by ten different people. Continue is quality control for your software factory: source-controlled AI checks on every pull request. Describe a standard in plain English, commit it as a markdown file, and it runs as an AI agent on every PR. Catches what you told it to. Passes silently when everything's fine.

Top comment

We built this because we had the problem ourselves. AI agents write most of our code now, and our small team couldn't review everything at the level we wanted. So we started encoding our standards as markdown files that run on every PR, which we call checks. You can think of checks like running skills on every pull request, where each check looks for one particular thing you care about and blocks the PR with a suggestion if it finds an issue. Would love to hear what standards matter most to your team! https://docs.continue.dev/
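
A minimal sketch of what one of these check files might look like. The file path, heading convention, and logger name here are illustrative assumptions, not Continue's documented format — see https://docs.continue.dev/ for the real thing:

```markdown
<!-- .continue/checks/no-stray-console-logs.md — hypothetical path and layout -->
# No stray console.log statements

Flag any `console.log` calls added in this PR outside of test files.
Debug output should go through our structured logger instead.

If a violation is found, block the PR and suggest replacing the call
with the equivalent `logger.debug(...)` invocation.
```

The point of the plain-English format is that the standard itself is reviewable: tightening a check's scope is just another diff on a markdown file.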

Comment highlights

Love that this stops our code from becoming a mess before it even hits the repo. Can we "refactor" the price a little bit for a new team? A cheeky discount would make this a total no-brainer for us today. What do you say? ;-)

It's been very surprising how quickly we're able to ship code once checks are in place. For the last few years the dominating conversation has (rightfully) been about scaling the writing of code, but with that problem feeling largely solved, for us it's become a question of how to make sure all code meets our standards. Checks in Mission Control have made this pretty easy to do, especially since we can just ask Claude to write checks for us. With checks doing the heavy lifting, most PRs don't take more than a few seconds of human review.

My favorite check on the Continue team is our "Next.js best practices" check, based on this skill. It runs on every PR and catches something subtle almost every time!

Even though we have this same skill for the agent to use locally, agents still make mistakes as context windows grow, so running it in CI gives us assurance that we aren't letting slop make it into production.

The markdown-file-as-check approach is smart because it keeps standards reviewable and diffs visible, same as any other code change. One thing I'd want to know: how do you handle checks that are too broad and start flagging everything? That noise problem killed a couple internal lint rules on our team before we got the scope right.

This is exactly the kind of tooling that's been missing in the AI-assisted development workflow. We use 4 different AI providers at TubeSpark (OpenAI, Anthropic, Groq, Gemini) for content generation, and the quality variance between models is real — what passes review from one provider often needs manual fixes from another.

The idea of encoding quality standards as source-controlled markdown files that run on every PR is brilliant. Right now we rely on manual code review to catch AI-generated inconsistencies, which doesn't scale.

Curious about the feedback loop — when Mission Control flags an issue, does the developer fix it manually or can it suggest/apply fixes automatically?