Helicone.ai

The open-source AI gateway for AI-native startups

Open Source · Analytics · Developer Tools · GitHub

The open-source AI gateway with built-in observability, automatic failover, and a one-line integration. Add credits and get instant access to 100+ models through one API key. OpenAI-compatible, zero markup, and trusted by teams like DeepAI, PodPitch, and Sunrun.

Top comment

Hey everyone 👋

I’m Cole, Co-Founder of Helicone.

We build open-source tools that help AI startups ship faster and break less.
Today, we’re launching the Helicone AI Gateway — one API key for every model, with observability and automatic failover built in.

The Why
Over 90% of AI products today use five or more LLMs.

Every AI engineer I talk to is struggling with:
- Writing custom failover logic to handle provider outages (see the sketch after this list)
- Hitting constant 429s and waiting weeks for limit increases
- Managing multiple APIs, keys, and auth flows
- Paying 5–10% markup fees just to use a gateway
- Having no visibility into routing or performance
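
To make that first point concrete, here's a hypothetical sketch of the failover wrapper teams end up hand-rolling. Every key name and payload shape here is an illustrative placeholder, not Helicone code:

```typescript
// DIY failover: try providers in order, retrying on 429s and 5xx errors.
type Provider = { name: string; url: string; key: string };

const providers: Provider[] = [
  { name: "openai", url: "https://api.openai.com/v1/chat/completions", key: process.env.OPENAI_KEY! },
  { name: "anthropic", url: "https://api.anthropic.com/v1/messages", key: process.env.ANTHROPIC_KEY! },
];

async function completeWithFailover(body: unknown): Promise<Response> {
  for (const provider of providers) {
    for (let attempt = 0; attempt < 3; attempt++) {
      const res = await fetch(provider.url, {
        method: "POST",
        // Real code also has to handle per-provider auth schemes and payload
        // shapes -- OpenAI and Anthropic don't even share a request format.
        headers: { "Content-Type": "application/json", Authorization: `Bearer ${provider.key}` },
        body: JSON.stringify(body),
      });
      if (res.ok) return res;
      if (res.status === 429 || res.status >= 500) {
        await new Promise((r) => setTimeout(r, 2 ** attempt * 500)); // exponential backoff
        continue;
      }
      break; // non-retryable error; move on to the next provider
    }
  }
  throw new Error("All providers failed");
}
```

Multiply that by payload translation, streaming, and key rotation, and it's clear why we think this belongs in a gateway rather than in every codebase.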

The How
The Helicone AI Gateway fixes that. It’s open source, transparent, and simple to use.

🔑 1 API key, 100+ models — add credits and get instant access to every major provider
🎯 0% markup fees — you pay exactly what the provider charges
📊 Observability included — logs, latency, costs, and traces built in
🔄 Reliable by design — automatic failover, caching, and routing that avoids provider rate limits entirely
⚙️ Custom rate limits — define your own per-user or per-segment caps right in the gateway (example after this list)
🔓 Fully open source — MIT licensed, self-host or contribute, no lock-in
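
On the custom rate limits point, here's a hedged sketch of what a per-user policy can look like. The header names, policy syntax, and gateway URL are what Helicone's docs have described; whether the new gateway uses the exact same headers is an assumption here, so verify against the current docs:

```typescript
import OpenAI from "openai";

// Assumed header names, policy syntax, and gateway URL -- check
// https://docs.helicone.ai before relying on them.
const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
  defaultHeaders: {
    // e.g. "at most 1000 requests per 3600-second window, counted per user"
    "Helicone-RateLimit-Policy": "1000;w=3600;s=user",
    // identifies which user each request counts against
    "Helicone-User-Id": "user-1234",
  },
});
```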

The What

✅ OpenAI SDK-compatible (change the baseURL, access 100+ models; snippet after this list)
✅ Supports all major providers (OpenAI, Anthropic, Gemini, TogetherAI, and more)
✅ Real-time dashboards and analytics
✅ Built-in caching and request deduplication
✅ Automatic failover and retry logic
✅ Custom per-user rate limits
✅ 0% markup fees, pay provider pricing
✅ Fully open source
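
For the SDK-compatibility point above, a minimal sketch of the one-line switch. The gateway URL and the provider/model naming below are assumptions based on the docs at launch; double-check both:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  // The only change from a stock OpenAI setup: point the SDK at the gateway.
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY, // one key for every provider
});

async function demo() {
  // Same client, different providers: only the model string changes.
  const gpt = await client.chat.completions.create({
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: "Hello from the gateway" }],
  });
  const claude = await client.chat.completions.create({
    model: "anthropic/claude-3-5-sonnet",
    messages: [{ role: "user", content: "Hello from the gateway" }],
  });
  console.log(gpt.choices[0].message.content, claude.choices[0].message.content);
}
```

If it works as described above, failover, caching, and logging ride along on those same calls with no further code changes.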

Traction

Already processing billions of tokens monthly for teams at Sunrun, DeepAI, and PodPitch.

We’ve been building this in the open for six months, shaped by feedback from hundreds of developers.

Try it now and tell us what you think: https://www.helicone.ai/signup
GitHub: https://github.com/Helicone/heli...
Docs: https://docs.helicone.ai/gateway...

Would love your feedback!

Comment highlights

We’re also building an AI startup, but we currently have just one LLM. We might add a second one soon, so I’ll keep this in my notes for the future.

Congrats team. This with your existing observability stack will be awesome.

Also looking forward to how this can integrate with memory and cache handling in the future.

This is seriously impressive. Does Helicone handle token usage tracking per user across multiple providers automatically?

We've been using Helicone for the past few months. For us the benefits are:
- not having to maintain our own proxy translation layer between models
- latency, cost, and usage metrics are really helpful
- easy debugging of AI failures and their causes
- support for complex API usage like streaming, rich media, etc.
- minimal latency impact
- friendly pricing (unlike competitors who sometimes take a cut of the model inference itself, which is bonkers)

What it lacks (unless this has changed):
- no authentication layer. We still have to proxy every request to handle authentication ourselves, which incurs extra infra and compute cost and is an additional failure point.
- model support rollout lags badly: GPT-5 took 2-3 months to become available on Helicone. I understand this was a major API change on OpenAI's part (shame on them), but that pace will be unacceptably slow for many companies, given OpenAI is a non-negotiable provider to support.

Overall Helicone is an excellent product and I'm excited for what the future brings.

The no-markup promise alone makes this worth trying. Too many gateways add hidden fees that pile up quickly. I appreciate the clear pricing and the effort to make everything transparent.

The open-source angle adds a lot of trust. It’s good to see something that can be self-hosted and extended instead of being tied to one vendor. This is the kind of flexibility I wish more tools offered.

My favorite detail is how it works with OpenAI SDKs directly. That means there’s no need to rebuild existing integrations from scratch. It shows real thought for developer convenience.

I’ve dealt with enough API keys and rate limit errors to know how valuable this kind of setup can be. The 0% markup and full visibility part really stands out for me. It’s rare to see that level of honesty in infrastructure tools.

Congrats on the launch!

How do you handle observability for streaming responses compared to traditional request-response patterns?