TensorBlock Forge

One API for all AI models

Open Source
Artificial Intelligence
Tech

Forge is the fast, secure way to connect and run AI models across providers—no more fragmented tools or infrastructure headaches. Just 3 lines of code to switch. OpenAI-compatible. Privacy-first.
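To illustrate the "3 lines of code to switch" claim: with an OpenAI-compatible gateway, the request shape stays identical across providers and only the base URL, key, and model name change. This is a minimal sketch of that pattern — the Forge endpoint URL and model identifiers below are placeholders, not taken from Forge's docs.

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat completion request (not sent here)."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# The only values that change when swapping providers:
openai_req = build_chat_request(
    "https://api.openai.com/v1", "sk-...", "gpt-4o-mini", "Hello")
forge_req = build_chat_request(
    "https://forge.example/v1", "forge-key", "claude-3-5-sonnet", "Hello")

# Everything else -- header layout, body shape -- is identical,
# which is what makes a one-line base_url/model swap possible.
```

Because the schema is shared, any OpenAI SDK client can point at such a gateway by overriding its base URL.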

Top comment

Hey ProductHunt!
We're so excited to announce our newest product.
🚀 Introducing TensorBlock Forge – the unified AI API layer for the AI agent era.

At TensorBlock, we’re rebuilding AI infrastructure from the ground up. Today’s developers juggle dozens of model APIs, rate limits, fragile toolchains, and vendor lock-in — just to get something working. We believe AI should be programmable, composable, and open — not gated behind proprietary walls.

Forge is our answer to that.

🔗 One API, all providers – Connect to OpenAI, Anthropic, Google, Mistral, Cohere, and more.

🛡️ Security built in – All API keys are encrypted at rest, isolated per user, and never shared across requests.

⚙️ Infra for the agent-native stack – Whether you're building LLM agents, copilots, or multi-model chains, Forge gives you full-stack orchestration without the glue code.

💻 And yes — we’re open source.

We believe critical AI infrastructure should be transparent, extensible, and owned by the community. Fork us, build with us, or self-host if you want full control.

We’re just getting started. Come help us shape the future of AI agent infra.

Let us know how you would use Forge to simplify your AI agent or workflow!

Comment highlights

Huge congrats on the launch! Forge makes juggling AI models across providers feel effortless. Just a few lines of code and you’re set — no messy infrastructure or tool-hopping.

I think you can do this with OpenRouter. Is there any difference or advantage?

The unified API and open-source approach of TensorBlock Forge are total game-changers for simplifying AI dev workflows! For teams integrating complex multi-model AI systems, how does Forge handle seamless coordination and performance optimization between different models from various providers?

Congrats on the launch! The "one API for all AI models" approach is compelling. How do you handle latency differences between providers when users switch models with just 3 lines of code?

This looks awesome. Just wondering: does Forge manage rate limiting or quota balancing when multiple users contribute their own API keys?

Making AI infrastructure programmable, composable, and open is exactly what the community needs right now. Looking forward to trying Forge out.

Congratulations Dennis and team. Looks really interesting. One quick question. How should I differentiate between Forge and say LiteLLM?

I get the product idea and I like it. How convenient this AI product will be!!

Congrats to the team, you launched a very impressive tool! Just one small note: it would be even more helpful with multilingual manuals down the road.

Does Forge support fine-tuned private models (e.g. custom LLMs trained on internal data), or is it mainly optimized for major public models like OpenAI, Anthropic, etc.?

Also curious: how do you handle latency and provider fallback if one model endpoint fails?

This kind of thing would be very helpful. Would love to see how it can reduce our current implementation (separate ones for every provider 🤦).

Congrats on the launch! 🚀🎉 Very cool tool!


Just wanted to bring to your attention: the Self-host guide link in your docs returns 404 😥

I can't quite figure out what this does, is it an OpenRouter competitor or something else?

I'm currently using Bedrock for a chatbot, would I be able to swap it out for this to use Gemini for example?

Congratulations on the launch! A unified API is a smart solution to the growing complexity of working with different AI models. Looking forward to seeing how this helps simplify AI adoption.

Really impressive work on this. Feels like you're making something that respects developers’ time without forcing another layer of complexity.