Give your LLMs persistent memory. Mnexium stores, scores, and recalls long-term context with a simple API — no vector DBs, no pipelines, no retrieval logic. Add a single mnx object to your AI requests and get chat history, semantic recall, and durable user memory out of the box.
Hey Product Hunters -
While building AI apps, I kept running into the same problem: the model was great, but it couldn't remember anything. I had to wire up vector DBs, embeddings, retrieval pipelines, and chat storage. It felt like rebuilding the same thing every time I started a new project.
Mnexium fixes that.
With one API call, your app gets conversation history, long-term memory, and semantic recall — automatically.
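For illustration, here is a minimal sketch of what the "single mnx object" pattern from the description could look like in Python. The endpoint URL and the mnx field names (subject_id, recall) are assumptions made up for this example, not Mnexium's documented API:

```python
# Hypothetical sketch: the endpoint and mnx field names are illustrative
# assumptions for this example, not Mnexium's documented API.
import requests

response = requests.post(
    "https://api.mnexium.example/v1/chat",  # placeholder endpoint
    json={
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": "What did I tell you about my project?"}
        ],
        # The one extra object that opts this request into persistent memory:
        "mnx": {
            "subject_id": "user-123",  # which user's memory to read and write
            "recall": True,            # inject relevant long-term memories
        },
    },
)
print(response.json())
```

The idea is that everything else (storage, embedding, retrieval) happens behind that one object, so the request looks almost identical to a plain LLM call.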
I also published a “Getting Started” guide and a working ChatGPT clone example. Something fundamentally changed when ChatGPT released memory - I want to make that possible for every AI app & agent.
I’d love any feedback, especially from folks building agents, copilots, or AI-powered products. What would you like Mnexium to support next?
Appreciate it.
Congrats on the launch, @marius_ndini!! I'd love to see VSCode integration to build persistent memory without having to roll something custom. This would be a huge time saver and solve a massive pain point. Any plans for an extension? OpenAI only, or will you be adding support for other LLMs? What about a containerized version you could run locally?
The single-API approach to persistent memory is appealing - managing vector DBs and retrieval logic can get complicated quickly. For agents that need to handle multiple users or different memory contexts, how does the subject_id isolation work? Is there a way to share certain memories across subjects while keeping others private?
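To make that question concrete, here is a hypothetical sketch of how per-subject isolation plus an opt-in shared scope might be expressed. The shared_scopes field is invented for illustration; neither it nor this exact shape of the mnx object is a confirmed Mnexium parameter:

```python
# Hypothetical illustration of the isolation question above. These field
# names are invented for the example, not confirmed Mnexium parameters.
def mnx_payload(subject_id: str, shared_scopes: list[str] | None = None) -> dict:
    """Build the memory object for one end user (subject).

    Memories written under one subject_id stay invisible to other
    subjects; shared_scopes sketches how opt-in cross-subject memory
    (e.g. a team-wide knowledge scope) could be expressed.
    """
    return {
        "subject_id": subject_id,              # private, per-user memory
        "shared_scopes": shared_scopes or [],  # memory visible across subjects
    }

# Two users of the same agent: private memories stay separate,
# while both could read a shared "team-acme" scope.
alice = mnx_payload("user-alice", shared_scopes=["team-acme"])
bob = mnx_payload("user-bob", shared_scopes=["team-acme"])
print(alice, bob, sep="\n")
```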