Every AI coding session starts from scratch. You re-debug the same bugs and re-explain decisions you already made. Your agent forgets everything. ContextPool gives your agent persistent memory. It scans your past Cursor and Claude Code sessions, extracts engineering insights (bugs, fixes, design decisions, gotchas), and loads relevant context via MCP at session start. No prompting needed. Works with Claude Code, Cursor, Windsurf, and Kiro. Free and open source - team sync available for $7.99/mo.
And what if I had multiple projects in Claude Code? How do you handle that?
What I've used so far that works very well for me is the compound part of Compound Engineering. The problem I see with CE is that it's per repo; ContextPool looks amazing since all my repos can share these eng learnings!
Great work!
This solves a genuine pain point. I run a small agency and every time I spin up a Claude Code session on a client project, I spend the first 10 minutes re-explaining the stack, the deployment quirks, and why we made certain architectural choices. The idea of capturing that as structured, searchable memory rather than just dumping everything into CLAUDE.md is a much cleaner approach. Curious about one thing: for the team sync at $7.99/mo, is there a way to scope shared memory per project or repo? In an agency setting, you definitely don't want client A's context leaking into client B's sessions.
Interesting concept, but “exhaustive scanning” sounds expensive at scale. Curious how it performs with large document sets in real production use.
Built something similar for a different layer: persistent memory across business workflows, not just coding sessions. The "docs graveyard" concern from the comments is real. What helped us was making memory write-on-use, not write-on-save. If an agent references a piece of context during a task, that context gets reinforced. If nothing ever pulls it, it decays. Curious how you handle relevance scoring when the pool grows past a few thousand entries.
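The write-on-use idea above can be sketched in a few lines: each entry's score is its reinforcement count multiplied by an exponential decay over time since last use. This is a minimal illustration of the commenter's approach, not ContextPool's implementation; the 30-day half-life is an arbitrary assumption.

```python
import time

class MemoryEntry:
    """A context entry whose relevance is reinforced on use and decays otherwise."""

    def __init__(self, text, half_life_days=30.0):
        self.text = text
        self.half_life = half_life_days * 86400  # seconds until the score halves
        self.last_used = time.time()
        self.uses = 0

    def touch(self):
        # Write-on-use: referencing the entry during a task reinforces it.
        self.uses += 1
        self.last_used = time.time()

    def score(self, now=None):
        now = time.time() if now is None else now
        age = now - self.last_used
        decay = 0.5 ** (age / self.half_life)  # exponential time decay
        return (1 + self.uses) * decay
```

Ranking retrieval candidates by `score()` lets never-referenced entries sink toward zero without an explicit delete step.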
Vibe-coder here. I maintain a claude.md file and update it manually at the end of every session. It's manual, but it works. For a solo builder (no team) what does ContextPool give me that a well-maintained claude.md doesn't?
This is really cool. Does the agent have persistent memory on only your work or also the work your team is working on?
The brilliant colleague with amnesia framing is exactly how it feels: you spend half the session rebuilding context instead of actually building. The team memory angle is where this gets really interesting, though.
Does it handle conflicts when two teammates have solved the same problem in completely different ways, or does it just load both and let the agent decide?
This is solving a real problem. I've been building a full SaaS in Claude Code for the past year: 13 AI agents, FastAPI backend, Next.js frontend — and the context loss between sessions is genuinely the biggest friction point.
The thing I keep re-explaining: project architecture decisions. Why certain agents are split the way they are, why the credit system works a specific way, which database tables relate to what. Every new session I'm pasting the same CLAUDE.md context block to get the agent back up to speed.
Curious about one thing, how does it handle multi-stack projects? My repo has TypeScript frontend and Python backend with very different patterns and gotchas in each. Does it extract insights per-language/per-directory, or is it all one pool?
Going to try this today.
I don’t quite understand how you handle control and cleanup of memory from bad, incorrect, or outdated solutions. I often reset the context on purpose so the agent forgets everything and we can start from a clean slate — otherwise past mistakes can compound into even worse decisions over time. I’m really curious how this is managed in your approach.
Really cool! Btw, how does ContextPool handle codebase evolution, like when old decisions become invalid? Also, how are you structuring extracted insights: are these embeddings, structured schemas, or something hybrid? And is all of it stored locally?
Hey Product Hunt 👋
We built ContextPool because we kept hitting the same wall: every time we started a new Claude Code or Cursor session, our agent had zero memory of what we'd already figured out together. Same bugs re-discovered. Same architectural decisions re-explained. Same gotchas re-learned.
It felt like working with a brilliant colleague who gets amnesia every morning.
So we built a persistent memory layer specifically for AI coding agents. Here's how it works:
1. Install with one curl command (30 seconds, single binary, no dependencies)
2. Run `cxp init` - it scans your past sessions and extracts engineering insights using an LLM
3. Your agent automatically loads relevant context via MCP at session start
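Step 3 works through the Model Context Protocol: the agent discovers the memory server from its MCP configuration. As a rough sketch of what wiring this into Claude Code might look like (the server name and the `serve` argument are assumptions for illustration, not ContextPool's documented setup), a `.mcp.json` entry could be:

```json
{
  "mcpServers": {
    "contextpool": {
      "command": "cxp",
      "args": ["serve"]
    }
  }
}
```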
What it remembers isn't conversation summaries - it's actionable engineering knowledge:
→ Bugs & root causes ("tokio panics on block_on in async context")
→ Fixes & solutions ("Use #[tokio::main] instead of manual Runtime::new()")
→ Design decisions ("Chose libsql over rusqlite for Turso compatibility")
→ Gotchas ("macOS keychain blocks in MCP subprocess context")
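To make the "structured insight" idea above concrete, here is a hypothetical record shape in Python using the four categories and the example insights from the list; this is purely illustrative, not ContextPool's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    BUG = "bug"
    FIX = "fix"
    DECISION = "decision"
    GOTCHA = "gotcha"

@dataclass
class Insight:
    """One extracted piece of engineering knowledge (hypothetical schema)."""
    kind: Kind
    summary: str
    project: str  # which repo/session the insight came from

insights = [
    Insight(Kind.BUG, "tokio panics on block_on in async context", "api"),
    Insight(Kind.FIX, "Use #[tokio::main] instead of manual Runtime::new()", "api"),
    Insight(Kind.DECISION, "Chose libsql over rusqlite for Turso compatibility", "api"),
    Insight(Kind.GOTCHA, "macOS keychain blocks in MCP subprocess context", "cli"),
]

# Typed records make retrieval filterable, unlike a free-form summary blob.
bugs = [i.summary for i in insights if i.kind is Kind.BUG]
```

The point of typed records over conversation summaries is that an agent can query by category ("show me known gotchas for this repo") rather than re-reading prose.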
It works with Claude Code (zero config), Cursor, Windsurf, and Kiro. Local-first and privacy-first - raw transcripts never leave your machine, only extracted insights sync when you opt in.
The team memory feature is what we are most excited about: push insights to a shared pool, and everyone on the team pulls the collective knowledge. Your teammate debugged something last week? Your agent already knows.
Free and open source for local use. $7.99/mo for team sync.
We'd love to hear: what's the most frustrating thing you keep re-explaining to your AI coding agent? And if you try it - what insights does it extract from your sessions?
GitHub: https://github.com/syv-labs/cxp