AskCodi

Custom LLMs, without training. Use via an OpenAI-compatible API

We provide an OpenAI-compatible orchestration layer that lets teams compose their own “virtual models” on top of any LLM, combining prompts, reasoning, review, and guardrails, and use them everywhere from your IDE to your backend.

Key features
- 🔌 One OpenAI-compatible API for many LLMs
- 🧱 Custom models you name & reuse
- 🧠 Reasoning mode on demand
- ✅ Built-in review mode
- 🛡️ Guardrails & PII masking
- 🧑‍💻 IDE & CLI integrations
- 📊 Analytics & cost controls

Top comment

Hey Product Hunt! 👋 I'm Shreyans, cofounder of AskCodi.

After years of building AskCodi as a frontend layer for the constant stream of AI releases, we are moving away from consumer tech and focusing on our expertise: bringing research to consumers. The first step is an LLM orchestration layer that lets you build your own custom coding models on top of any LLM, accessible through an OpenAI-compatible API.

Instead of calling gpt-5.1 directly, you call things like:
- code-generator-model
- bugfix-with-review
- secure-code-review

Each “custom model” is a recipe: prompts + reasoning + review + guardrails.
- Stack prompts, tools, reasoning, review, and guardrails into a single model name you can call anywhere.
- Enable reasoning and self-check flows for complex coding tasks, even on non-reasoning base models.
- Add an automatic review pass: generate, then review for bugs, security, and style in one call.
- Mask sensitive data and enforce organization policies at the model layer.

Our goal is simple: make small and open-source models as effective as frontier LLMs at a fraction of the cost.

If you're experimenting with AI in your dev workflow, or already rely on tools like Codex, CC, Copilot, or Cursor, I'd love your feedback and support.
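Because the layer is described as OpenAI-compatible, calling a custom model should look exactly like a standard chat-completions request, with only the model name swapped. A minimal sketch of what that request body would look like, assuming the `bugfix-with-review` model name from the comment above (the endpoint URL is a placeholder, not a documented value):

```python
import json

# Placeholder endpoint: the real base URL would come from AskCodi's docs.
BASE_URL = "https://example.askcodi.invalid/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat-completions payload.

    With an OpenAI-compatible API, the body is identical to any
    chat/completions call; only the "model" field changes from a
    base model name (e.g. "gpt-5.1") to a custom recipe name.
    """
    return {
        "model": model,  # e.g. "bugfix-with-review" instead of "gpt-5.1"
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request(
    "bugfix-with-review",
    "Fix the off-by-one error in this loop: for i in range(len(xs) + 1): ...",
)
print(json.dumps(payload, indent=2))
```

Any OpenAI SDK that accepts a custom `base_url` could then POST this payload to `{BASE_URL}/chat/completions` unchanged, which is what makes the custom model callable from IDE plugins and backends alike.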