Anthropic shipped Dreaming on May 6, a research preview inside Claude Managed Agents. The pitch is blunt: stop letting your agents repeat the same mistake forever.
What Dreaming actually does
Between live sessions, the agent goes async and chews through its own past — transcripts plus the memory store. It pulls out patterns that survive across runs: errors it keeps making, workflows multiple agents converge on, the team’s quiet preferences. Then it merges duplicates, retires stale entries, and emits a new memory layer for humans to approve. Nightly memory garbage collection, except the collector also writes the new playbook.
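Anthropic hasn't published how the consolidation pass actually works. As a rough mental model only, the merge-and-retire step might look like this sketch, where every name, threshold, and data shape is a hypothetical stand-in, not Anthropic's implementation:

```python
import difflib
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch of a Dreaming-style consolidation pass.
# MemoryEntry, the staleness window, and the similarity threshold
# are all illustrative assumptions.

@dataclass
class MemoryEntry:
    text: str
    last_used: datetime
    hits: int = 1  # how many runs reinforced this lesson

def consolidate(entries, now, stale_after=timedelta(days=90), similar=0.9):
    """Merge near-duplicate entries and retire stale ones."""
    kept: list[MemoryEntry] = []
    # Walk newest-first so the freshest phrasing of a lesson survives.
    for e in sorted(entries, key=lambda e: e.last_used, reverse=True):
        if now - e.last_used > stale_after:
            continue  # retire: nothing has touched this entry in months
        for k in kept:
            if difflib.SequenceMatcher(None, e.text, k.text).ratio() >= similar:
                k.hits += e.hits  # merge the duplicate into the survivor
                break
        else:
            kept.append(e)
    return kept  # the proposed new memory layer, pending human review
```

The point of the sketch is the shape of the job, not the mechanics: dedupe, expire, emit a smaller layer that a human signs off on.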
Harvey, the legal AI shop, plugged it in and saw task completion jump roughly 6x. That’s not a vibes number. That’s the kind of delta that makes other agent companies nervous.
How developers plug in
Dreaming sits inside Claude Managed Agents. Point it at your memory store and a batch of historical transcripts; Claude returns a reorganized memory layer for the team to review and ship. Useful if you’ve got months of agent runs and nobody wants to distill the lessons learned by hand.
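That review step is where humans stay in the loop. A minimal sketch of such a gate, assuming a memory layer is just a list of text entries (the format is my assumption; the real shape inside Claude Managed Agents isn't public):

```python
# Hypothetical human-review gate for a Dreaming-style proposal.
# Treats a memory layer as a plain list of strings.

def review_diff(current: list[str], proposed: list[str]) -> tuple[list[str], list[str]]:
    """Return (added, removed) entries so a reviewer can approve each change."""
    cur, new = set(current), set(proposed)
    return sorted(new - cur), sorted(cur - new)

def apply_review(current: list[str], additions: list[str], removals: list[str]) -> list[str]:
    """Ship only the approved changes as the new memory layer."""
    return sorted((set(current) | set(additions)) - set(removals))
```

Whatever the real interface looks like, the design choice worth copying is the same: the agent proposes, a human disposes, and nothing lands in production memory without the diff being seen.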
This is Anthropic’s most concrete answer yet to the self-improving agent question. Everyone talks continual learning. Anthropic just made it a product surface developers can call.