Atomic is a self-hosted, AI-native knowledge base. Write notes, get a semantic graph. Ask questions, get cited answers from your own content. Auto-generates wiki articles as your knowledge grows. MCP server built-in for Claude/Cursor. Local-first. Open source. Everything you know, connected.
Hey PH! I'm Ken, the maker of Atomic 👋
I built this because every note-taking tool I tried either buried my ideas in folders or gave me AI features that felt bolted on. I wanted something where the AI was baked into the structure itself, not a chatbot sitting on top of my notes.
The feature I'm most proud of is wiki synthesis: Atomic reads all your atoms under a tag and generates a cited wiki article. Every claim links back to the source note. It's like having your own research assistant.
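To make the citation part concrete, here's a rough Python sketch of the idea (not Atomic's actual code — `build_wiki_prompt`, the atom IDs, and the tag are invented for illustration): each atom under the tag gets a numbered source marker in the prompt, so every claim in the generated article can point back to a specific note.

```python
def build_wiki_prompt(tag, atoms):
    """atoms: list of (atom_id, text) pairs collected under `tag`.
    Numbering the sources lets the model cite them as [1], [2], ..."""
    sources = "\n".join(
        f"[{i}] (atom {atom_id}) {text}"
        for i, (atom_id, text) in enumerate(atoms, start=1)
    )
    return (
        f"Write a wiki article about '{tag}' using ONLY the numbered "
        f"sources below. Cite every claim as [n].\n\nSources:\n{sources}"
    )

# Hypothetical atoms tagged "rust":
prompt = build_wiki_prompt("rust", [
    (42, "Ownership moves values by default."),
    (57, "Borrows must not outlive the owner."),
])
```

The citation markers in the model's output can then be mapped back to atom IDs, which is what makes every claim linkable to its source note.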
A few fun facts about Atomic:
- It's built in Rust + SQLite — the whole thing, including vector embeddings, lives in a single file
- There's a built-in MCP server so Claude, Cursor, and other AI tools can query and write to your KB directly
- It runs fully local with Ollama or any other OpenAI-compatible provider (LM Studio, LiteLLM, etc.). No data leaves your machine
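If you're curious what "embeddings in a single file" looks like in practice, here's a toy Python sketch of the general pattern (not Atomic's real schema — Atomic itself is Rust, and the table/column names and 3-dim vectors here are invented): each atom row stores its vector as a BLOB alongside the text, and semantic search is just cosine similarity over the unpacked vectors.

```python
import math
import sqlite3
import struct

def pack(vec):
    """Serialize a float vector into a BLOB-friendly byte string."""
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob):
    """Deserialize a BLOB back into a list of floats (4 bytes per float)."""
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")  # in the single-file case, this is one .db on disk
db.execute("CREATE TABLE atoms (id INTEGER PRIMARY KEY, body TEXT, embedding BLOB)")

# Toy 3-dim vectors stand in for real embedding-model output.
notes = [
    ("Rust ownership rules", [0.9, 0.1, 0.0]),
    ("Borrow checker basics", [0.5, 0.5, 0.2]),
    ("Sourdough starter log", [0.0, 0.1, 0.9]),
]
for body, vec in notes:
    db.execute("INSERT INTO atoms (body, embedding) VALUES (?, ?)", (body, pack(vec)))

# Rank all atoms by similarity to a query embedding.
query = [0.85, 0.15, 0.05]
ranked = sorted(
    db.execute("SELECT body, embedding FROM atoms"),
    key=lambda row: cosine(query, unpack(row[1])),
    reverse=True,
)
print(ranked[0][0])  # -> "Rust ownership rules"
```

A real setup would index rather than scan, but the point stands: text, graph, and vectors can all live in one SQLite file with zero extra infrastructure.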
Still early days, but the core loop is solid. Happy to answer anything — architecture questions, roadmap, weird use cases, all fair game. 🙏
I would love to know how the knowledge graph behaves once you have a large volume of notes. Does it stay useful or start to feel noisy?
love that you went self-hosted AND local-first. so many knowledge tools force you into their cloud. the auto-generated wiki articles sound interesting - does it actually synthesize new content from your notes or just organize existing stuff? could see this being huge for technical documentation.
the MCP server integration caught my eye immediately - we've been building MCP servers for our open source projects and it's such a game changer for Claude workflows. curious how you handle the semantic graph generation? are you using embeddings for the connections or something more sophisticated?
The local-first + no data leaves your machine angle is underrated. We're building an AI that reads Google Drive files to organize them, and "who sees my content?" is the first question every user asks. Having the model run locally removes that friction entirely. Curious, does Atomic work well with existing large note collections, or is it better started fresh?
The wiki synthesis feature is the killer differentiator here imo. Every note tool I've used just gives you a folder of disconnected stuff. Having AI that actually reads across your notes and generates cited articles from them is something I haven't seen before.
Built in Rust + SQLite in a single file is also really smart for local-first. No Docker, no Postgres, just works. How big can the graph get before performance starts degrading? Asking because my notes tend to spiral into thousands of entries pretty fast lol.
Nice one @kenforthewin92, the problem couldn't resonate more. Though I'd love my Slack knowledge to be ingested as well.
Coolest launch of the day fs! Btw do you see Atomic as a note-taking tool, a personal knowledge OS, or something closer to a local-first AI assistant? Also, are you using it yourself? If so, is it making an impact on your daily tasks?
Congrats on the launch! Really love the focus on local-first and keeping everything in a single SQLite/Rust file. How does the performance hold up once the knowledge graph gets significantly large (e.g., thousands of atoms/notes)?