AI agents are powerful, but one wrong action can be catastrophic. Preloop is an agentic automation platform with a built-in human approval layer. AI agents automate routine work across your systems, and when they attempt risky actions (deployments, refunds, data changes), Preloop intercepts them and routes them for approval via mobile, Slack, or Teams before execution. You can use Preloop for automation only, approval gates only, or both together, depending on your needs.
Preloop as an MCP proxy for approval gates makes sense... agents with tool access need human checkpoints. The mobile and watch notifications are clever for async approval. Curious how you handle request state TTL when someone takes a few minutes to approve from their phone.
The manual approval flow makes total sense for getting started, but knowing you, you're probably already thinking about scale.
Do you have plans to introduce automated AI approvals for teams whose volume is too high for manual review? E.g., having a smaller model audit the agent's requests?
What does adoption look like for a team that already has MCP clients and servers running—what’s the smallest integration that delivers value in days, and what are the common organizational hurdles (security/compliance, ownership of approvals, on-call impact) you see during rollout?
This is really amazing. So how can we integrate this with our custom MCP apps?
Hey Product Hunt,
Hunter & CTO here! Super excited to share Preloop with you all today.
While Yannis touched on the "Responsibility Gap," I wanted to share a bit about the technical architecture choice we made.
When building this, we had a choice: Build an SDK (that you have to import into your code) or build a Proxy.
We chose the MCP Proxy approach because:
Zero Code Changes: You shouldn't have to rewrite your agent just to make it safe. You just change the connection string.
Runtime Agnostic: It works whether you are using Claude Desktop, Cursor, or your own Python/LangChain scripts.
State Management: We capture and hold the tool call request state. This allows for "human-speed" approvals (via mobile/watch) without losing the context of what the agent was trying to do.
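To make the hold-and-approve pattern concrete, here is a minimal sketch of what an MCP proxy with an approval gate might look like. This is purely illustrative — the class and function names are hypothetical, not Preloop's actual API — but it shows the core idea: the proxy captures the tool call's state, parks it behind a future, and only forwards it to the real server once a human decides.

```python
import asyncio
import uuid

# Illustrative sketch only: all names here are hypothetical,
# not Preloop's actual implementation or API.

class ApprovalGate:
    """Holds intercepted tool calls until a human approves or rejects them."""

    def __init__(self):
        # request_id -> (captured tool call, future resolved on decision)
        self.pending = {}

    def intercept(self, tool_name, arguments):
        """Capture a tool call; return an id and a future awaiting the decision."""
        request_id = str(uuid.uuid4())
        future = asyncio.get_running_loop().create_future()
        self.pending[request_id] = ({"tool": tool_name, "args": arguments}, future)
        # In a real system, this is where the push notification /
        # Slack / Teams message would go out.
        return request_id, future

    def decide(self, request_id, approved):
        """Called when the human responds (e.g. from phone or watch)."""
        tool_call, future = self.pending.pop(request_id)
        future.set_result((approved, tool_call))

async def proxy_tool_call(gate, tool_name, arguments, execute):
    """Forward the call to the upstream MCP server only after approval."""
    request_id, decision = gate.intercept(tool_name, arguments)
    # This await can take minutes -- "human-speed" -- without losing
    # the context of what the agent was trying to do.
    approved, tool_call = await decision
    if not approved:
        return {"error": f"rejected by human reviewer (request {request_id})"}
    return execute(tool_call["tool"], tool_call["args"])
```

Because the captured request state lives server-side behind the future, the agent's intent survives however long the reviewer takes — the approval just resolves the pending call whenever it arrives.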
I'm hanging out in the comments all day. Hit me with your hardest technical questions about our MCP implementation or the approval flow!
Hey Product Hunt!
I'm Yannis, co-founder of Preloop.
We built Preloop because we kept seeing the same problem: AI agents are incredibly powerful at automating work, but they can't take legal or moral responsibility for their actions.
The core insight: When your AI agent is about to deploy code, process a refund, or modify customer data, someone needs to approve it. Not after the fact - before execution.
What makes Preloop different:
- Built on MCP protocol from day one (no adapters needed)
- Approve critical actions from your phone, Slack, or Teams
- Use it for automation only, approval gates only, or both together
- Works as an MCP proxy alongside your existing tools
Question for the community: What is the one agentic automation that you would love to have but are afraid of launching due to lack of oversight or potential consequences?