Molebie AI is a fully self-hosted AI assistant you run on your own hardware. It supports voice conversations with wake-word activation, image understanding, RAG document memory (PDF, DOCX, TXT), and web search, all with zero cloud dependency. It works with your choice of backend: MLX, Ollama, vLLM, llama.cpp, or the OpenAI API. It currently runs as an all-in-one localhost setup; a multi-machine split is coming soon. I'm still actively building and testing, so you're welcome to try it out and share feedback! Private. Fast. Yours.
Hey Product Hunt! 👋
I'm Jimmy, the maker of Molebie AI. This is my first open-source project, and I'm really excited to share it with you all.
I built this because I was tired of sending my conversations, documents, and voice recordings to cloud AI services I don't control. I wanted a full-featured AI assistant — voice, vision, document memory, web search — that runs entirely on my own machine.
Molebie AI is open-source (MIT) and designed so you can install it with a single command and start chatting in minutes. It auto-detects your hardware, picks the right model, and just works.
Some highlights:
Three inference modes (instant, thinking, deep thinking)
Voice with wake-word, speaker verification, and TTS
RAG document memory with hybrid vector + BM25 search
Self-hosted web search via SearXNG
Works with MLX, Ollama, vLLM, llama.cpp, or OpenAI API
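For anyone curious how the "hybrid vector + BM25 search" highlight can work in principle, here is a minimal sketch: keyword (BM25) rankings and embedding-similarity rankings fused with Reciprocal Rank Fusion. This is an illustration under my own assumptions, not Molebie AI's actual implementation; the toy documents, the hard-coded "embedding" scores, and the RRF parameters are all made up for the example.

```python
# Hypothetical sketch of hybrid retrieval: BM25 keyword scores fused with
# vector-similarity scores via Reciprocal Rank Fusion (RRF).
# Not Molebie AI's actual code; names and parameters are illustrative.
import math

docs = [
    "Molebie runs entirely on local hardware",
    "Wake-word voice activation with speaker verification",
    "RAG memory indexes PDF DOCX and TXT documents",
]

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each document in `corpus` against `query` with Okapi BM25."""
    tokenized = [d.lower().split() for d in corpus]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(corpus)
    scores = []
    for toks in tokenized:
        score = 0.0
        for term in query.lower().split():
            df = sum(term in t for t in tokenized)  # document frequency
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            tf = toks.count(term)
            score += idf * tf * (k1 + 1) / (
                tf + k1 * (1 - b + b * len(toks) / avgdl)
            )
        scores.append(score)
    return scores

def rrf(rankings, k=60):
    """Fuse ranked lists: each doc accumulates 1 / (k + rank) per list."""
    fused = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)

# Pretend these similarities came from an embedding model for the query
# "document memory"; in a real system you'd compute them from vectors.
vec_scores = [0.2, 0.1, 0.9]
bm = bm25_scores("document memory", docs)
bm_rank = sorted(range(len(docs)), key=lambda i: bm[i], reverse=True)
vec_rank = sorted(range(len(docs)), key=lambda i: vec_scores[i], reverse=True)
print(rrf([bm_rank, vec_rank])[0])  # index of the top fused document
```

The nice property of RRF is that it fuses rankings rather than raw scores, so you never have to normalize BM25 scores against cosine similarities, which live on completely different scales.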
I'd genuinely love your feedback — what features would make this more useful for you? What's missing? What would make you switch from a cloud AI assistant to a self-hosted one?
🎥 Demo: https://youtu.be/HVdlEx26HGc
🐦 Follow for updates: https://x.com/Jimmy6929
💻 GitHub: https://github.com/Jimmy6929/Mol...
Self-hosted web search via SearXNG is a great addition. It's the one thing usually missing from local assistants that forces me back to the cloud. Great work @jimmy_chu1 @Molebie AI
Good product for privacy lovers. One question: is there speaker verification for voice conversations?