Most agents are either amnesiacs or "hoarders" that choke on stale context and break their own reasoning. YourMemory brings biological logic to the workflow. Using the Ebbinghaus curve, it prunes the junk so only the important stuff sticks. -84% Token Waste: Leaner context, sharper reasoning. 52% Recall@5 (LoCoMo-benchmarked). v1.3.0 Graph Engine: Finds what you forgot to ask for. 100% Local.
Hey everyone, I’m Sachit. I built YourMemory because I hit a wall with my own coding workflow. My agents were brilliant, but their memory was a mess. They either forgot my architectural 'gotchas' by lunch, or they got so bogged down in stale bug fixes from last week that they started hallucinating.
I realized we don't need a digital filing cabinet for our agents; we need a filter. YourMemory treats context as a living thing. It uses 'biological decay' to let transient noise fade away while reinforcing the patterns and facts you actually use.
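The post doesn't publish the exact decay formula, so here is a minimal sketch of the textbook Ebbinghaus curve it alludes to, R = exp(-t/S), where reuse inflates the stability S so well-worn facts flatten their own forgetting curve. The names (`retention`, `reinforce`, `PRUNE_THRESHOLD`) are hypothetical, not YourMemory's API:

```python
import math
import time

PRUNE_THRESHOLD = 0.05  # hypothetical cutoff: below this, a memory is pruned

def retention(last_access_ts: float, strength: float, now: float | None = None) -> float:
    """Ebbinghaus-style retention, R = exp(-t / S).

    t: seconds since the memory was last touched.
    S: stability; grows each time the memory is reused, so frequently
       used facts decay slowly while transient noise fades fast.
    """
    now = time.time() if now is None else now
    t = max(0.0, now - last_access_ts)
    return math.exp(-t / strength)

def reinforce(strength: float, boost: float = 1.5) -> float:
    """Each retrieval multiplies stability, flattening the curve."""
    return strength * boost
```

Under this scheme, anything whose retention slips below the threshold between sessions is a pruning candidate, while anything the agent actually reuses earns a slower decay.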
For the skeptics: I’ve provided the full benchmarking scripts and the LoCoMo dataset on GitHub. We’re hitting 52% Recall@5, which nearly doubles the industry average. Why? Because our v1.3.0 Graph Engine doesn't just do keyword matching; it pulls in related architectural 'neighbors' that standard vector search completely misses.
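To make the 'neighbors' claim concrete, here is a rough sketch of graph-expanded retrieval under assumed data structures (the `Memory` class and its edge list are illustrative, not the v1.3.0 engine's internals): run a plain top-k similarity search, then walk graph edges to pull in linked memories the similarity ranking missed.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    id: int
    text: str
    vec: list[float]
    edges: list[int] = field(default_factory=list)  # ids of linked memories

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def graph_expanded_recall(query_vec, memories: dict[int, Memory],
                          k: int = 5, hops: int = 1) -> list[Memory]:
    # Step 1: ordinary vector search, top-k by cosine similarity.
    ranked = sorted(memories.values(),
                    key=lambda m: cosine(query_vec, m.vec), reverse=True)
    hits = ranked[:k]
    seen = {m.id for m in hits}
    # Step 2: walk graph edges to pull in related memories that pure
    # similarity ranking would have missed.
    frontier = list(hits)
    for _ in range(hops):
        nxt = []
        for m in frontier:
            for eid in m.edges:
                if eid in memories and eid not in seen:
                    seen.add(eid)
                    nxt.append(memories[eid])
        hits.extend(nxt)
        frontier = nxt
    return hits
```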
It’s 100% local-first (DuckDB), zero infra, and it’s finally stopped me from repeating myself to my terminal.
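A local-first DuckDB store really can be this small: one file, no server. The schema below is purely illustrative (YourMemory's actual tables aren't documented in this post):

```python
import duckdb

# One local file, no server, no infra.
con = duckdb.connect("yourmemory.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id BIGINT PRIMARY KEY,
        content TEXT,
        strength DOUBLE,        -- stability S from the decay sketch above
        last_access TIMESTAMP
    )
""")
con.execute("INSERT INTO memories VALUES "
            "(1, 'This repo uses pnpm, not npm.', 1.0, now())")
recent = con.execute(
    "SELECT content FROM memories ORDER BY last_access DESC LIMIT 5"
).fetchall()
```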
I’d love to hear how you’re all handling context amnesia right now. Tell me what your agent keeps forgetting that drives you the craziest!
Self-pruning memory is the right instinct — stale context is what makes most "personalized" AI apps lose their edge over time. I ran into this when building DishRoll (https://dishroll.netlify.app/), a weekly AI meal planner — old preferences (the chicken recipe you loved three months ago that's now boring) kept contaminating suggestions until we added explicit decay and recency weighting. MCP-level memory hygiene is a much cleaner place to solve this than at the app layer. What signals do you use to decide what gets pruned vs. kept?
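For reference, the 'explicit decay and recency weighting' this commenter describes can be as simple as an exponential half-life on each preference's last use. The 30-day constant and function names below are made up for illustration, not DishRoll's actual code:

```python
import time

HALF_LIFE_DAYS = 30.0  # assumed: a preference's weight halves each month

def recency_weight(last_used_ts: float, now: float | None = None) -> float:
    """Exponential recency weighting: weight = 0.5 ** (age / half-life)."""
    now = time.time() if now is None else now
    age_days = max(0.0, now - last_used_ts) / 86_400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def rank(candidates, now=None):
    """candidates: (item, base_score, last_used_ts) tuples."""
    return sorted(candidates,
                  key=lambda c: c[1] * recency_weight(c[2], now),
                  reverse=True)
```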
Using the Ebbinghaus curve for context decay is a genuinely clever framing. The biggest failure mode I have seen with agent memory is not forgetting too much but forgetting the wrong things. Architectural decisions that were made months ago and rarely referenced can still be load-bearing. Does the graph engine help protect those kinds of low-frequency but high-importance memories from decay?
Ebbinghaus decay plus a graph engine is a clever combination, but the edge case I keep running into is old architectural decisions that are still load-bearing. A design choice made six months ago that constrains current code is rarely in active use, so the forgetting curve would drop it, but the graph edges to it are exactly what new code needs. How does the engine decide when decay wins versus when the graph pulls something back into scope?
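The post doesn't say how YourMemory resolves this tension. One plausible reconciliation, purely an assumption and not the documented behavior, is to let graph in-degree raise a memory's decay floor, so heavily referenced decisions survive pruning no matter how stale they are:

```python
def effective_retention(base_retention: float, in_degree: int,
                        protection_per_edge: float = 0.15) -> float:
    """Hypothetical guardrail: each inbound graph edge raises the decay
    floor, so a rarely-touched but heavily-referenced decision keeps a
    minimum retention score and survives pruning."""
    floor = min(0.9, protection_per_edge * in_degree)
    return max(base_retention, floor)

# A six-month-old decision with raw retention 0.02 but four inbound
# edges keeps an effective retention of 0.6 and stays in scope.
```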
About YourMemory on Product Hunt
“Cut token waste by 84% with self-pruning MCP memory”
YourMemory launched on Product Hunt on April 21st, 2026 and earned 89 upvotes and 7 comments, placing #20 on the daily leaderboard.
YourMemory was featured in Open Source (68.4k followers), Storage (7.2k followers), Artificial Intelligence (467.3k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 122.1k products, making this a competitive space to launch in.
Who hunted YourMemory?
YourMemory was hunted by sachit mishra. A “hunter” on Product Hunt is the community member who submits a product to the platform, uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.