OpenMolt lets you build programmatic AI agents in Node.js that think, plan, and act using tools, integrations, and memory — directly from your codebase.
I started building it because most AI agent tools I tried were designed primarily as chat assistants. That works well for personal workflows, but they become harder to use inside real applications.
For example, imagine a SaaS backend receiving a request like:
POST /generate-report
Instead of running a fixed pipeline, an agent could decide how to complete the task:
gather data
call APIs
generate outputs
update systems
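In Node.js terms, the difference is that the endpoint hands the whole task to an agent instead of running a hard-coded sequence. A minimal sketch of that idea — note that `runAgent` and `handleGenerateReport` are hypothetical stand-ins for illustration, not OpenMolt's actual API:

```javascript
// Hypothetical stand-in for an agent run: in a real agent this would be a
// plan/act loop that gathers data, calls APIs, generates outputs, and
// updates systems. Not OpenMolt's actual API.
async function runAgent(task) {
  return { task, status: 'completed' };
}

// A framework-agnostic POST /generate-report handler: instead of executing a
// fixed pipeline, it delegates the task to the agent and returns its result.
async function handleGenerateReport(requestBody) {
  const result = await runAgent('generate-report');
  return { statusCode: 200, body: JSON.stringify(result) };
}
```

The point is the shape of the handler: the route stays a thin entry point, and the decision about *how* to fulfill the request moves into the agent.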
That’s the idea behind OpenMolt.
It’s an open-source framework for programmatic AI agents in Node.js, where agents are defined directly in code with:
instructions
tools
integrations
memory
When triggered, the agent runs a planning + execution loop, deciding which tools to use to complete the task.
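That planning + execution loop can be sketched in a few lines. Everything here — the tool registry and the `plan` function standing in for a model call — is illustrative, not OpenMolt's actual implementation:

```javascript
// Illustrative tool registry: each tool is just an async function over state.
const tools = {
  gatherData: async (state) => ({ rows: [1, 2, 3] }),
  generateReport: async (state) => ({ report: `report over ${state.rows.length} rows` }),
};

// Stub planner: a real agent would ask the model which tool to call next,
// based on its instructions and the state accumulated so far.
function plan(state) {
  if (!state.rows) return 'gatherData';
  if (!state.report) return 'generateReport';
  return null; // nothing left to do
}

// The plan + execution loop: decide on a tool, run it, fold the result
// back into state, repeat until the planner says the task is done.
async function runLoop(task) {
  let state = { task };
  for (let next; (next = plan(state)); ) {
    state = { ...state, ...(await tools[next](state)) };
  }
  return state;
}
```

Swapping the stub `plan` for a model call is what turns this from a fixed pipeline into an agent: the order of steps is chosen at runtime rather than written into the code.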
Some current features:
tool and API integrations
short-term and long-term memory
scheduling
CLI runner
capability-based permissions (agents only access the tools you explicitly allow)
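Capability-based permissions can be thought of as an allow-list applied before the agent ever sees the tool registry. A sketch of the concept — the names and `scopeTools` helper are illustrative assumptions, not OpenMolt's API:

```javascript
// Full tool registry available in the host application.
const allTools = {
  readDatabase: async () => 'rows',
  sendEmail: async () => 'sent',
  dropTables: async () => 'dropped',
};

// Build the view of the registry a given agent is allowed to use. Tools that
// were not explicitly granted are simply never exposed to the agent.
function scopeTools(registry, granted) {
  const scoped = {};
  for (const name of granted) {
    if (!(name in registry)) throw new Error(`unknown tool: ${name}`);
    scoped[name] = registry[name];
  }
  return scoped;
}

const agentTools = scopeTools(allTools, ['readDatabase', 'sendEmail']);
// 'dropTables' is absent from agentTools: the agent cannot even name it.
```

The design choice is that safety lives in what the agent can see, not in filtering what it asks for after the fact.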
The goal is to make AI agents behave more like software systems than prompt scripts.
OpenMolt is still early, and I’m really interested in hearing from developers:
Would you use agents like this inside a backend or SaaS product?
What integrations or capabilities would you expect?
Happy to answer any questions or dive deeper into the architecture.
The tension between "code-first" and "no-code" positioning is fascinating; it feels like you're carving out a middle ground for technical users who want control without rebuilding from scratch. Open-source in the agent space is still rare enough to be differentiating, but I'm curious how you're thinking about commercial sustainability. Are you targeting self-hosted enterprise deployments or building managed services on top? MENA markets especially struggle with vendor lock-in on Western platforms, so portability could be a huge unlock.
Treating AI agents as backend services triggered by API endpoints rather than chat interfaces is the right abstraction for production use — most real-world automation needs to run headless without a human in the loop. The capability-based permissions model is a smart safety default — does OpenMolt support scoping agent permissions dynamically per request, or are they fixed at agent definition time?
Hi everyone 👋
I'm @ybouane, the creator of OpenMolt, the 4th project I'm launching in 2026! (Feel free to follow my build-in-public journey on X.)
Thanks for checking it out 🙏