OpenMolt lets you build programmatic AI agents in Node.js that think, plan, and act using tools, integrations, and memory — directly from your codebase.
I started building it because most AI agent tools I tried were designed primarily as chat assistants. That works well for personal workflows, but it makes them hard to embed inside real applications.
For example, imagine a SaaS backend receiving a request like:
POST /generate-report
Instead of running a fixed pipeline, an agent could decide how to complete the task:
gather data
call APIs
generate outputs
update systems
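In Node.js terms, that hand-off might look like the sketch below. This is purely illustrative: `planSteps`, `runAgentTask`, and the step names are hypothetical stand-ins, not OpenMolt's actual API, and the planner is stubbed where a real agent would consult the model.

```javascript
// A stub planner: a real agent would ask the LLM which steps the task needs.
function planSteps(task) {
  if (task === "generate-report") {
    return ["gather-data", "call-apis", "generate-outputs", "update-systems"];
  }
  return [];
}

// Execute each planned step with a matching tool from the registry.
async function runAgentTask(task, tools) {
  const results = [];
  for (const step of planSteps(task)) {
    const tool = tools[step];
    if (!tool) throw new Error(`no tool registered for step: ${step}`);
    results.push(await tool());
  }
  return results;
}

// Trivial stub tools standing in for real data sources and integrations.
const tools = {
  "gather-data": async () => "rows",
  "call-apis": async () => "api-ok",
  "generate-outputs": async () => "report.pdf",
  "update-systems": async () => "synced",
};

// runAgentTask("generate-report", tools) resolves to
// ["rows", "api-ok", "report.pdf", "synced"]
```

The point is that the endpoint stays dumb: it forwards the task, and the agent decides the steps at run time instead of a hard-coded pipeline deciding them at build time.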
That’s the idea behind OpenMolt.
It’s an open-source framework for programmatic AI agents in Node.js, where agents are defined directly in code with:
instructions
tools
integrations
memory
When triggered, the agent runs a planning + execution loop, deciding which tools to use to complete the task.
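A code-first definition along those lines could be sketched like this. The `defineAgent` helper and the field shapes are assumptions for illustration only; check OpenMolt's docs for the real API.

```javascript
// Hypothetical sketch of a code-first agent definition: instructions, tools,
// integrations, and memory all live in the codebase, so they can be reviewed
// and version-controlled like any other module.
function defineAgent(spec) {
  // Freeze the spec so the definition is immutable once created.
  return Object.freeze({
    instructions: spec.instructions,
    tools: spec.tools ?? {},
    integrations: spec.integrations ?? [],
    memory: spec.memory ?? { shortTerm: [], longTerm: [] },
  });
}

const reportAgent = defineAgent({
  instructions: "Generate a weekly usage report and store it.",
  tools: {
    fetchUsage: async () => [{ user: "a", events: 12 }],
    renderPdf: async (rows) => `pdf(${rows.length} rows)`,
  },
  integrations: ["postgres", "s3"],
});
```

Defining agents this way means a code review of the agent is a code review of its capabilities, which matters once agents touch production systems.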
Some current features:
tool and API integrations
short-term and long-term memory
scheduling
CLI runner
capability-based permissions (agents only access the tools you explicitly allow)
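The permission model could work something like the sketch below: the agent only ever sees the tools it was explicitly granted, and reaching for anything else fails loudly. `scopeTools` and the tool names are hypothetical; this is one possible shape, not OpenMolt's implementation.

```javascript
// Hypothetical sketch of capability-based tool scoping.
function scopeTools(allTools, allowed) {
  const scoped = {};
  for (const name of allowed) {
    if (!(name in allTools)) throw new Error(`unknown tool: ${name}`);
    scoped[name] = allTools[name];
  }
  // Any attempt to reach an ungranted tool throws instead of silently
  // returning undefined.
  return new Proxy(scoped, {
    get(target, prop) {
      if (!(prop in target)) {
        throw new Error(`tool not permitted: ${String(prop)}`);
      }
      return target[prop];
    },
  });
}

const allTools = {
  readDb: () => "rows",
  writeDb: () => "written",
  deleteDb: () => "deleted",
};

// This agent may read and write, but never delete.
const agentTools = scopeTools(allTools, ["readDb", "writeDb"]);
```

Deny-by-default scoping like this is what makes it safe to point an agent at a real database: the blast radius is whatever you granted, nothing more.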
The goal is to make AI agents behave more like software systems than prompt scripts.
OpenMolt is still early, and I’m really interested in hearing from developers:
Would you use agents like this inside a backend or SaaS product?
What integrations or capabilities would you expect?
Happy to answer any questions or dive deeper into the architecture.
@ybouane please add support for Ollama, or something similar; many devs like myself don't want to send our data to ChatGPT/Anthropic. It would be just awesome if we could have that.
Awesome thing by the way.
Really glad someone built this as open source. I've been stitching together LangGraph + FastAPI for agent-powered endpoints and it's messy. The permission scoping per agent is a nice touch — I learned the hard way that giving an agent full filesystem access "just for testing" is a terrible idea lol. One thing I'd want to know: how does memory persistence work between restarts? Like if I deploy this on a basic VPS, does the agent remember previous runs or do I need to wire up my own storage?
Code-first agent definition in Node.js is the right call. Most agent frameworks add abstraction layers that make simple things easy but complex things impossible. Being able to define tools, memory, and permissions directly in your codebase means you can version control your agents the same way you version control everything else.
The capability-based permissions model is what separates this from "just call an LLM with tools." Giving an agent access to your database without scoping exactly what it can touch is a non-starter in production.
Question: for the planning + execution loop, how does it handle cases where the plan becomes invalid mid-execution? For example, if step 2 of a 4-step plan fails and the fallback changes what step 3 should be, does the agent re-plan from scratch or patch the existing plan?
Treating AI agents as backend services triggered by API endpoints rather than chat interfaces is the right abstraction for production use — most real-world automation needs to run headless without a human in the loop. The capability-based permissions model is a smart safety default — does OpenMolt support scoping agent permissions dynamically per request, or are they fixed at agent definition time?
Hi everyone 👋
I'm @ybouane, the creator of OpenMolt, the 4th project I'm launching in 2026! (Feel free to follow my build-in-public journey on X)
Thanks for checking it out 🙏