
Jentic Mini

Give your AI agents safe access to 10,000+ APIs

Building agents that call real APIs is painful. You end up hardcoding auth, juggling secrets in prompts, and writing glue code for every service. Jentic Mini is a self-hosted, open-source API execution layer that sits between your agent and the outside world. Your agent says what it wants to do. Jentic Mini finds the right API from a catalog of 10,000+, injects credentials at runtime, and brokers the request. Secrets never touch the agent.

Top comment

Hey Product Hunt 👋

A message from Jentic’s CEO:

I'm Sean, co-founder of Jentic. We've spent the last 18 months working on a problem that anyone building AI agents has hit: how do you let an agent call real APIs without leaking credentials or losing control?

Here's the typical pattern today: you hardcode API keys into prompts, write bespoke wrapper functions for every service, and hope nothing gets logged, cached, or hallucinated back. It works for demos. It breaks in production.

What Jentic Mini actually does

It's an API execution layer — a FastAPI server you self-host in Docker — that sits between your agent and every API it needs to call. The architecture is straightforward:

Search: Your agent queries a BM25 full-text index across 10,000+ API specs and 380 Arazzo workflow sources from our public catalog. It finds the right operation without you writing a single wrapper.
Execute: Jentic Mini brokers the request. Credentials are stored in a Fernet-encrypted local vault and injected at runtime; the agent never sees them, and they're never returned via the API.
Toolkits: Each agent gets a scoped toolkit key (tk_xxx) with its own credential bundle and access policy. One key per agent, individually revocable. If something goes wrong, you kill the key. Done.
Observe: Full execution traces and audit logs. You can see exactly what your agent called, when, and what came back.
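From the agent's side, the search-then-execute flow above can be sketched roughly as follows. The X-Jentic-API-Key header and the tk_xxx key format come from the description here; the endpoint paths, payload shapes, and key value are illustrative assumptions, not the documented API.

```python
import json
import urllib.request

# Assumed routes for illustration only -- the real paths may differ.
SEARCH_PATH = "/api/search"    # assumption
EXECUTE_PATH = "/api/execute"  # assumption

def jentic_request(base_url: str, toolkit_key: str,
                   path: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated request. The agent only ever holds the
    scoped toolkit key (tk_xxx) -- never the underlying API credentials,
    which stay in Jentic Mini's encrypted vault."""
    return urllib.request.Request(
        base_url + path,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "X-Jentic-API-Key": toolkit_key,  # per-agent, revocable
        },
        method="POST",
    )

# Typical loop: search the catalog for an operation, then execute it.
base = "http://localhost:8900"
key = "tk_example"  # hypothetical key; issued per agent
search = jentic_request(base, key, SEARCH_PATH,
                        {"query": "send a slack message"})
# resp = urllib.request.urlopen(search)  # returns matching operations
# execute = jentic_request(base, key, EXECUTE_PATH,
#                          {"operation_id": "...", "inputs": {}})
```

The point of the shape: if a key leaks or an agent misbehaves, revoking that one tk_xxx key cuts it off without rotating any upstream API credentials.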

Why we built it

We're already running a hosted Jentic platform (with semantic search, Lambda-based brokering, SOC 2-grade security), and we're a verified connector in Claude. But we kept hearing the same thing from developers: "I want to run this myself." So we built Mini: the same API surface, self-hosted, Apache 2.0 licensed.

Getting started is one command:

$ docker run -d --name jentic-mini -p 8900:8900 -v jentic-mini-data:/app/data jentic/jentic-mini
Add your API credentials through the UI at localhost:8900, and specs are auto-imported from the public catalog. Your agent authenticates with a toolkit key via the X-Jentic-API-Key header and starts searching and executing immediately.
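One way to wire that header in once, so individual tool functions never touch the key, is a shared opener. This is a minimal sketch assuming a stdlib-only client; the key value is a placeholder and only the X-Jentic-API-Key header name comes from the description above.

```python
import urllib.request

# Hypothetical toolkit key, issued per agent via the UI at localhost:8900.
TOOLKIT_KEY = "tk_example"

# Attach the toolkit key to every request made through this opener,
# so no tool function handles credentials directly.
opener = urllib.request.build_opener()
opener.addheaders += [("X-Jentic-API-Key", TOOLKIT_KEY)]

# All calls through `opener` are now authenticated; revoking this key
# server-side cuts this one agent off without affecting others.
# with opener.open("http://localhost:8900/...") as resp: ...
```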

What's next

This is early access. There will be rough edges. We're sharing it now because we want the community building with agents (OpenClaw, NemoClaw, LangChain, CrewAI, whatever your stack is) to test it, break it, and tell us what's missing.

Would love to hear what APIs and workflows you'd want to connect first.