# δ-mem boosts frozen LLMs by 31% on MemoryAgentBench with an 8×8 online memory state

δ-mem is a lightweight memory mechanism from DECLARE Lab that augments a frozen full-attention LLM with a compact, online associative-memory state. The paper hit Hacker News with 216+ points this weekend. Open-source code is up at declare-lab/delta-Mem.

## The mechanism

Past information gets compressed into a fixed-size state matrix updated by delta-rule learning. During generation, the readout produces low-rank corrections to the backbone’s attention computation. The backbone weights stay frozen — δ-mem is the only learnable piece, which means you can bolt it onto an existing model without retraining.
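To make the delta-rule idea concrete, here is a minimal sketch in NumPy. Everything here is an illustrative assumption on my part, not the paper's actual API: the function names, the scalar write strength `beta`, and the plain matrix readout. In δ-mem itself the keys, values, and queries come from learned projections, and the readout feeds a low-rank correction into the frozen backbone's attention rather than being used directly.

```python
import numpy as np

def delta_write(S, k, v, beta=0.5):
    """Delta-rule write (hypothetical sketch): nudge the memory's
    recall for key k toward value v via a rank-1 error correction.

    S    : (d, d) fixed-size online state (8x8 in the paper)
    k, v : (d,) key/value vectors for the current step
    beta : write strength in (0, 1]
    """
    pred = S @ k                             # what the memory currently recalls for k
    return S + beta * np.outer(v - pred, k)  # move that recall toward v

def delta_read(S, q):
    """Readout (hypothetical sketch): the stored association for query q.
    In δ-mem this would drive a low-rank correction to attention."""
    return S @ q

d = 8
S = np.zeros((d, d))                             # the entire memory: 64 numbers
k = np.random.randn(d); k /= np.linalg.norm(k)   # unit-norm key
v = np.random.randn(d)
S = delta_write(S, k, v)
print(np.allclose(delta_read(S, k), 0.5 * v))    # True: one write recalls beta * v
```

Note the appeal of the error-correcting form: writing the same key twice refines the stored value instead of blindly accumulating it, which is what lets a tiny fixed-size state stay useful as information streams in.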

## The numbers

With only an 8×8 online memory state (64 numbers, under 1KB even at full precision), δ-mem averages a 1.10× improvement over the frozen backbone and 1.15× over the strongest non-δ-mem memory baseline. On memory-heavy benchmarks the gains are larger: 1.31× on MemoryAgentBench and 1.20× on LoCoMo.
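The "under 1KB" claim is simple arithmetic:

```python
state_elems = 8 * 8       # 64 numbers in the online memory state
print(state_elems * 4)    # 256 bytes at fp32 ("full precision")
print(state_elems * 2)    # 128 bytes at fp16
```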

## Why it matters

Long-running agents bottleneck on context window, not raw model intelligence. Most fixes throw more tokens at the problem. δ-mem instead learns to compress structured memory into 64 numbers and updates those numbers online as the conversation progresses (sketched below). For builders running agentic loops that span hours or days, a cheap external-memory primitive that works on top of frozen models is exactly the missing piece.
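For flavor, here is what "updating 64 numbers online" could look like across conversation turns, continuing the sketch above. The `encode` helper and the turn strings are invented stand-ins; in δ-mem the keys and values come from learned projections of the backbone's hidden states, and the readout corrects attention rather than being consumed directly.

```python
import zlib
import numpy as np

d = 8

def encode(text):
    """Stand-in encoder (invented for illustration): hash text to a
    deterministic unit vector. The real system learns its projections."""
    seed = zlib.crc32(text.encode())
    r = np.random.default_rng(seed).standard_normal(d)
    return r / np.linalg.norm(r)

S = np.zeros((d, d))                      # the entire memory: 64 numbers
turns = ["user: my API key lives in vault A",
         "user: deploy the staging build"]
for turn in turns:
    k = encode(turn)                      # key for this turn
    v = encode("summary: " + turn)        # value: stand-in for turn features
    S = delta_write(S, k, v)              # online write, O(d^2) per turn

# A later query against an earlier turn's key recalls its stored value
# (approximately: beta scales it, and non-orthogonal keys add crosstalk).
recall = delta_read(S, encode(turns[0]))
```

The per-turn cost is a single 8×8 rank-1 update, so the memory adds effectively nothing to inference latency while the frozen backbone does all the heavy lifting.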

