Coding agents forget. Close Cursor, reopen tomorrow, re-explain the project. agentmemory fixes that: a TypeScript memory layer climbing GitHub's trending list fast, up 518 stars yesterday to 3,196 total.
What it actually is
A persistent memory server that drops into Claude Code, Cursor, Gemini CLI, OpenCode, or any other MCP client, so all your agents share one memory. It extends Karpathy's LLM Wiki pattern with confidence scoring, lifecycle management, knowledge graphs, and hybrid BM25+vector search. The repo claims a 92% token reduction versus re-pasting full context every session.
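Dropping an MCP server into a client is typically a one-entry config change. A hypothetical example for a client's `.mcp.json` (the server name and the `npx agentmemory` launch command are assumptions for illustration, not taken from the repo's install docs):

```json
{
  "mcpServers": {
    "agentmemory": {
      "command": "npx",
      "args": ["-y", "agentmemory"]
    }
  }
}
```

Because every MCP-speaking client reads the same kind of entry, this is how one install ends up shared across Claude Code, Cursor, and the rest.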
The benchmark gap
On LongMemEval-S (500 questions, ~115K tokens each), agentmemory hits 95.2% retrieval recall. Mem0, the incumbent in this space, reports 68.5% on LoCoMo. Different benchmarks, so not an apples-to-apples comparison, but the gap is wide. In their ablation, adding vectors on top of BM25 alone was the single biggest lift: +9 points (86.2% → 95.2%).
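The repo doesn't spell out how it merges the two retrievers, but a common way to combine a BM25 ranking with a vector-similarity ranking is reciprocal rank fusion (RRF). A minimal sketch, assuming RRF and hypothetical memory IDs:

```typescript
// Reciprocal Rank Fusion: each retriever contributes 1/(k + rank)
// per document; documents ranked well by either list float to the top.
// k = 60 is the conventional damping constant from the RRF paper.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, rank) => {
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([docId]) => docId);
}

// Hypothetical memory IDs: BM25 rewards exact keyword hits, while the
// vector ranking surfaces a semantically related memory (m4) that
// keyword search missed entirely.
const bm25Ranking = ["m1", "m2", "m3"];
const vectorRanking = ["m4", "m1", "m2"];
console.log(rrfFuse([bm25Ranking, vectorRanking]));
// → ["m1", "m2", "m4", "m3"]
```

The point of the ablation number: m4-style memories, invisible to keyword search, are exactly what the vector leg adds on top of BM25.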
API surface
Ships as an MCP server with 43 tools: memory_recall, memory_save, memory_smart_search, memory_timeline, memory_profile, and the rest. Twelve auto-capture hooks mean zero manual memory.add() calls. Drop it in once and every agent in your stack inherits the memory.
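Under MCP, all 43 tools are invoked through the same JSON-RPC `tools/call` envelope; only the tool `name` and `arguments` change. A sketch of what a client sends, where the `query` and `limit` argument names are assumptions about memory_recall's schema, not taken from the repo:

```typescript
// Generic JSON-RPC 2.0 envelope for an MCP tool call.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

// Hypothetical memory_recall invocation; the argument names are
// illustrative, not agentmemory's actual tool schema.
const request: ToolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "memory_recall",
    arguments: { query: "auth flow decisions", limit: 5 },
  },
};

console.log(JSON.stringify(request));
```

This uniformity is why one server can serve every client: any MCP host already knows how to list tools and emit this envelope, so the 43 tools come for free once the server is registered.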
Cross-session memory is becoming the actual moat for coding agents in 2026, and agentmemory is the one benchmarking ahead of the field right now.