Observational Memory is a SoTA memory system for AI agents - scoring 95% on LongMemEval, the highest ever recorded. It works like human memory: two background agents act as your agent's subconscious, one observing and compressing conversations, the other reflecting and reorganizing long-term memory. It extracts what matters and lets the rest fade - just like you do. Available in Mastra today - with adapters for LangChain, Vercel AI SDK, OpenCode and others coming soon.
Alex from Mastra here (OSS TypeScript AI framework), excited to announce Observational Memory! 🎉
How to get started:
Mastra - Releasing today with full support. Give your Mastra agent human-like memory here.
Other agent frameworks - Adapters for LangChain, Vercel AI SDK, and more coming soon.
Your coding agent - A plugin for OpenCode (PR) and others are in the works.
What problem does Observational Memory solve?
If you've built AI agents, you know the memory problem:
RAG retrieves context every turn - but it invalidates your prompt cache, adds latency, and costs add up fast.
Compaction summarizes when context gets long - but it's lossy and irreversible. Critical details vanish mid-task. Your agent forgets what file it was working on.
Long context seems like the answer - until you see the bill and notice performance degrading at the extremes.
Every option forces a tradeoff: memory vs. cost vs. coherence - pick two!
We built Observational Memory to break that tradeoff.
The idea is deceptively simple: agent memory should work like human memory. You don't remember every character of every file you read. You remember what happened, what you learned, and what mattered. Details fade. Important things stick.
OM implements this with two background agents that run as your agent's subconscious. The Observer watches conversations and compresses them into dense, timestamped observations (6-40x token reduction). The Reflector periodically reorganizes long-term memory - combining related items, dropping what's no longer relevant.
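The Observer/Reflector split can be pictured with a toy sketch. To be clear, this is not Mastra's API: the `Observer` class, `Observation` type, `flushAt` threshold, and `reflect` helper below are illustrative names only, and in the real system an LLM writes the dense observations rather than the string concatenation used here.

```typescript
// Toy illustration of the observe/compress pattern: raw chat turns are
// folded into short, timestamped observation strings, and only the
// observations (not the raw turns) stay in the agent's context.

interface Message {
  role: "user" | "assistant";
  content: string;
  timestamp: string; // ISO 8601
}

interface Observation {
  timestamp: string;
  text: string; // dense summary of what happened / what was learned
}

class Observer {
  private buffer: Message[] = [];
  readonly observations: Observation[] = [];

  // Accumulate raw turns; once the buffer crosses a threshold, compress it.
  ingest(msg: Message, flushAt = 4): void {
    this.buffer.push(msg);
    if (this.buffer.length >= flushAt) this.flush();
  }

  // Stand-in for the LLM Observer: collapse the buffered turns into one
  // timestamped observation and clear the raw turns.
  flush(): void {
    if (this.buffer.length === 0) return;
    const text = this.buffer
      .map((m) => `${m.role}: ${m.content.slice(0, 40)}`)
      .join("; ");
    this.observations.push({ timestamp: this.buffer[0].timestamp, text });
    this.buffer = [];
  }
}

// Trivial stand-in for the Reflector: drop stale entries (ISO timestamps
// compare lexicographically). The real Reflector also merges related
// observations rather than only filtering.
function reflect(obs: Observation[], keepAfter: string): Observation[] {
  return obs.filter((o) => o.timestamp >= keepAfter);
}
```

The point of the shape is that the agent's prompt only ever sees `observations`, which grow far more slowly than the raw transcript.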
The result: a stable, prompt-cacheable context window that scores 95% on LongMemEval - the highest ever recorded. 😱 In our research paper, we outline how we beat the "oracle" (a model given only the conversations containing the answers) - TL;DR: dense observations outperform raw context.
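The prompt-cache point can be made concrete. Providers that cache prompts match on an exact prefix, so a context that only appends (system prompt, then a growing observation log, then the new turn) keeps its prefix identical across turns, while per-turn retrieval rewrites the middle of the prompt and busts the cache. A toy comparison (the function names here are illustrative, not Mastra's):

```typescript
// Append-only context: the prefix (system + observations) is byte-identical
// to the previous turn's prefix, so a prefix cache can reuse it.
function buildObservationalPrompt(system: string, observations: string[], turn: string): string {
  return [system, ...observations, turn].join("\n");
}

// Per-turn RAG injection: retrieved chunks differ on every turn, so the
// prompt diverges right after the system message.
function buildRagPrompt(system: string, retrieved: string[], turn: string): string {
  return [system, ...retrieved, turn].join("\n");
}

// Longest shared prefix between consecutive prompts - a proxy for how much
// a prefix cache could reuse.
function sharedPrefixLen(a: string, b: string): number {
  let i = 0;
  while (i < a.length && i < b.length && a[i] === b[i]) i++;
  return i;
}
```

Comparing two consecutive turns of each strategy shows the observational prompt sharing a much longer prefix, which is why the context window stays cacheable.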
Can't wait for you to try it out and, in the meantime, if you have any questions, drop them below and we'll answer them!
About Observational Memory by Mastra on Product Hunt
“Give your AI agents human-like memory”
Observational Memory by Mastra launched on Product Hunt on February 11th, 2026 and earned 130 upvotes and 6 comments, placing #12 on the daily leaderboard.
On the analytics side, Observational Memory by Mastra competes within Software Engineering, Developer Tools, Artificial Intelligence and GitHub — topics that collectively have 1.1M followers on Product Hunt.
Who hunted Observational Memory by Mastra?
Observational Memory by Mastra was hunted by fmerian. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Reviews
Observational Memory by Mastra has received 4 reviews on Product Hunt with an average rating of 5.00/5.