Breadcrumb is the Plausible of LLM tracing: self-hosted, open source, and built for developers who just want to understand what their agents are actually doing, without the enterprise bloat of LangFuse or LangSmith. Three lines to get your app traced. An LLM watches every trace and automatically flags issues (wrong tool calls, looping agents, oversized models, cost spikes) before you even know something's wrong. Ask questions about your traces in plain English and get charts back.
Hey everyone! AI agents are surprisingly easy to build. Understanding what they're doing is another story.
I recently had a complex coding agent where subagents silently stopped passing responses to each other. Something errored somewhere in the chain, but instead of failing loudly, the agents just worked around it. The output looked almost right; I only found the problem by accident, after hours of debugging. With many, many tool calls and nested agents, you're mostly blind. You can't fix what you can't see.
Breadcrumb gives you visibility into what your agents are actually doing. An LLM watches every trace and automatically surfaces issues like this (silent failures, agent loops, wrong tool calls, cost spikes) before you spend hours hunting them down. There's also an Explore tab where you can ask questions about your traces in plain English and get real charts back. The open beta is live today: one-click Railway deploy, fully self-hosted and open source. A hosted version is planned (sign up here: https://breadcrumb.sh/docs/setup...).
Give it a try: http://demo.breadcrumb.sh/
Would love to hear what you're building and what you're struggling with when debugging your agents!