The AI agent memory problem has become one of the most debated topics in developer tooling. Every major coding assistant — Claude Code, Cursor, Windsurf, OpenAI Codex — resets between sessions. Your agent forgets what it learned yesterday, what conventions your project follows, and what mistakes it already made. Dozens of startups and frameworks are racing to solve this with vector databases, MCP servers, and complex orchestration layers.
Agent Kernel takes the opposite approach: three Markdown files and a Git repo. That’s it.
The project hit the Hacker News front page on March 23, 2026, sparking a sharp debate about whether radical simplicity is genius or naive when it comes to AI agent state management.
How Agent Kernel Actually Works
Created by developer Oguz Bilgic, Agent Kernel is not a framework, SDK, or service. It’s a Git repository containing three core files:
- AGENTS.md — The kernel itself. Contains the rules and instructions that teach the agent how to manage its own state. Users are told not to edit this file.
- IDENTITY.md — Maintained by the agent. After the first conversation, the agent writes down who it is, what project it’s working on, and what its role should be.
- KNOWLEDGE.md — An index file pointing to entries in the knowledge directory.
Beyond these three files, two directories handle the actual memory:
- knowledge/ — Mutable state. Facts about the current state of things — architecture decisions, API conventions, deployment configs. The agent updates these whenever reality changes.
- notes/ — Append-only session logs. Each day gets a narrative entry recording what the agent did, what decisions were made, and what problems came up. Once a day ends, these entries are immutable.
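The split between mutable knowledge and append-only notes can be sketched in a few lines. This is not code from the project — the helper names, the `my-agent` path, and the one-file-per-topic and one-file-per-day layouts are assumptions based on the description above:

```python
from datetime import date
from pathlib import Path

# Hypothetical helpers illustrating Agent Kernel's two memory operations:
# knowledge entries are overwritten (facts reflect the current state),
# while daily notes are only ever appended to.
REPO = Path("my-agent")

def update_knowledge(topic: str, content: str) -> None:
    """Overwrite a knowledge entry: facts describe how things are *now*."""
    entry = REPO / "knowledge" / f"{topic}.md"
    entry.parent.mkdir(parents=True, exist_ok=True)
    entry.write_text(content)

def append_note(text: str) -> None:
    """Append to today's note: session logs are narrative and append-only."""
    note = REPO / "notes" / f"{date.today().isoformat()}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    with note.open("a") as f:
        f.write(text.rstrip() + "\n")
```

In the actual project the agent performs these operations itself by following the instructions in AGENTS.md; no wrapper code is required.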
Getting started is a single command:
```bash
git clone https://github.com/oguzbilgic/agent-kernel.git my-agent
cd my-agent
```
Then point any compatible coding agent at the directory. The agent reads AGENTS.md, realizes it has no identity yet, initiates a setup conversation, and starts building its own memory from there.
The project is MIT-licensed, has 89 GitHub stars, and consists of just 10 commits. Minimalism is clearly the point.
The Markdown-for-Agents Movement Is Bigger Than One Project
Agent Kernel didn’t emerge in a vacuum. It’s part of a broader wave that’s been building throughout early 2026: the idea that Markdown files — not databases, not MCP servers — should be the primary home for AI agent intelligence.
The New Stack published a piece titled “The case for running AI agents on Markdown files instead of MCP servers,” examining how projects like CompanyOS connect to eight MCP servers for API access but keep all decision logic in Markdown skill files. Sentry’s David Cramer was quoted saying “many MCP servers don’t need to exist” because they’re poor API wrappers that a skill file could replace.
AGENTS.md, originally popularized by OpenAI’s Codex CLI, is now stewarded by the Agentic AI Foundation under the Linux Foundation, with adoption across 60,000+ open-source projects. Microsoft shipped a .NET Skills Executor that orchestrates SKILL.md files. Supabase open-sourced an agent-skills repository separating development practices from API interactions. And GitAgent, covered by MarkTechPost on March 22, introduced a framework-agnostic standard where agents are defined by folders of Markdown files like SOUL.md and DUTIES.md.
The pattern is clear: plain text files that humans can read and edit are winning over opaque databases for agent configuration. Agent Kernel pushes this idea to its logical extreme — not just configuration, but the agent’s entire working memory lives in Markdown.
If you’re interested in how different agent memory approaches compare, our coverage of Claude-mem explored a similar problem from Anthropic’s perspective. And for context on the broader agent skills ecosystem, check out our articles on Agent Skills Framework and skills.sh.
The Hacker News Debate: Brilliant Minimalism or Dangerous Naivety?
The Show HN post pulled 31 points and 12 comments — modest numbers, but the discussion quality was high and revealed real tensions in how developers think about agent memory.
The reliability problem. User bigbezet argued that “agents will not always reliably follow instructions in the AGENTS.md file, especially as context size increases.” As conversations grow longer, the kernel’s instructions get pushed further up the context window and may be deprioritized. Their recommendation: use programmatic hooks instead of relying on the agent to self-regulate.
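A minimal sketch of that hook idea: rather than trusting the agent to keep honoring rules buried deep in its context, a thin wrapper re-reads the kernel and prepends it to every request. `call_model` here is a hypothetical stand-in for whatever API your agent exposes, not part of Agent Kernel:

```python
from pathlib import Path

def with_kernel(prompt: str, kernel_path: str = "AGENTS.md") -> str:
    """Prepend the kernel rules so they sit at the front of every turn."""
    kernel = Path(kernel_path).read_text()
    return f"<kernel rules>\n{kernel}\n</kernel rules>\n\n{prompt}"

def call_agent(prompt: str, call_model) -> str:
    # The hook enforces the rules programmatically on every call,
    # instead of hoping the agent still remembers them.
    return call_model(with_kernel(prompt))
```

The trade-off is token cost: re-injecting the full kernel every turn spends context that a self-regulating agent would not.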
Context window scaling. User aiboost raised a practical concern that “reading past daily logs will eat up the context window” as notes accumulate over weeks or months. This is a real constraint — even with 200K+ context windows, daily narrative logs from weeks of work could consume significant tokens just to load.
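One mitigation — not something Agent Kernel ships — is to load notes newest-first until a rough token budget runs out, instead of reading the whole directory. The four-characters-per-token ratio below is a crude heuristic, and the budget is an arbitrary example:

```python
from pathlib import Path

def load_recent_notes(notes_dir: str, token_budget: int = 8000) -> str:
    """Collect daily notes newest-first until the token budget is spent."""
    chunks, used = [], 0
    # ISO-dated filenames (e.g. 2026-03-23.md) sort chronologically,
    # so reverse order means newest first.
    for note in sorted(Path(notes_dir).glob("*.md"), reverse=True):
        text = note.read_text()
        cost = len(text) // 4  # rough token estimate
        if used + cost > token_budget:
            break
        chunks.append(text)
        used += cost
    return "\n\n".join(reversed(chunks))  # restore chronological order
```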
Bias amplification. Perhaps the most striking criticism came from chrisdudek, who shared real-world experience: agents that maintain journals develop one-sided perspectives from complaint-heavy session logs, becoming “yes-men that have perfect memory.” If most logged interactions involve debugging frustrations, the agent develops a skewed worldview.
Trauma replay. User gaigalas reported agents getting stuck replaying difficult debugging sessions rather than moving past them, suggesting that “it’s better to not have stateful stuff when working with agents” in some cases.
On the positive side, sathish316 saw clear value in creating specialized agent copies with persistent memory for specific tasks like email management. And avereveard offered a related but even simpler approach: maintain only project.md and TODO.md with clear task procedures.
These aren’t theoretical concerns. They represent the lived experience of developers who have tried giving agents long-term memory and hit unexpected failure modes.
Agent Kernel vs. the Competition
The agent memory space is getting crowded. Here’s how Agent Kernel stacks up:
| Approach | Complexity | Portability | Human-Readable | Persistence |
|---|---|---|---|---|
| Agent Kernel | 3 files + Git | Any agent | Fully transparent | Git commits |
| Claude Code built-in memory | Zero setup | Claude Code only | Partially (CLAUDE.md) | Local files |
| GitAgent | YAML + Markdown folders | Framework-agnostic | Yes | Git-native |
| Vector DB solutions | Database + embeddings | Varies | No | Database |
| MCP-based memory | Server infrastructure | MCP-compatible | No | Server-dependent |
Agent Kernel’s strongest advantage is portability. The same memory repository works whether you’re using OpenAI Codex, Claude Code, Cursor, or Windsurf. Switch tools, keep the memory. That’s a genuine differentiator in a market where most memory solutions lock you into one ecosystem.
Its biggest weakness is the scaling problem the HN commenters identified. Without any retrieval mechanism beyond “read the files,” Agent Kernel relies entirely on the agent’s ability to parse growing Markdown files within its context window. Projects like memsearch from Zilliz address this by adding BM25 and hybrid vector search over Markdown-based memory — but that adds exactly the kind of infrastructure Agent Kernel tries to avoid.
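To make the trade-off concrete, here is a toy Okapi BM25 scorer over a set of Markdown files — emphatically not memsearch's actual code, just an illustration of what "retrieval over Markdown memory" adds beyond reading every file into context:

```python
import math
import re

def tokenize(text: str) -> list:
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_rank(query: str, docs: dict, k1: float = 1.5, b: float = 0.75) -> list:
    """Rank doc names by BM25 relevance to the query (best first)."""
    toks = {name: tokenize(text) for name, text in docs.items()}
    n = len(docs)
    avgdl = sum(len(t) for t in toks.values()) / n  # average doc length
    scores = {}
    for term in tokenize(query):
        df = sum(term in t for t in toks.values())  # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
        for name, t in toks.items():
            tf = t.count(term)
            denom = tf + k1 * (1 - b + b * len(t) / avgdl)
            scores[name] = scores.get(name, 0.0) + idf * tf * (k1 + 1) / denom
    return sorted(scores, key=scores.get, reverse=True)
```

With something like this, an agent could load only the top few relevant entries instead of the whole knowledge directory — but that is precisely the retrieval layer Agent Kernel deliberately omits.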
For projects that need to coordinate between MCP servers and agent state, Agent Kernel may be too minimal. But for individual developers who want their coding agent to remember things between sessions without setting up any infrastructure, the “just clone and go” approach is hard to beat.
Who Should Actually Use This
Agent Kernel makes the most sense for:
- Solo developers who use one AI coding agent regularly and want continuity between sessions
- Teams experimenting with agent memory who want a zero-infrastructure starting point
- Anyone running multiple agents who wants each to have a distinct, persistent identity (Agent Kernel’s “same kernel, different identity” model supports this well)
It’s probably not the right fit for enterprise teams needing structured memory across dozens of agents, or for projects where the knowledge base will grow beyond what fits comfortably in a context window.
FAQ
Is Agent Kernel free?
Yes. It’s MIT-licensed and open-source. There’s no paid tier, no hosted service, no account required. It’s literally a Git repo you clone.
Which AI coding tools does Agent Kernel support?
It works with Claude Code, OpenAI Codex, OpenCode, Cursor, Windsurf, and any other AI coding agent that reads Markdown configuration files. The key requirement is that the agent can read and write files in its working directory.
How does Agent Kernel compare to CLAUDE.md or AGENTS.md?
CLAUDE.md and AGENTS.md are static instruction files — they tell the agent how to behave but don’t provide a mechanism for the agent to build and maintain its own memory over time. Agent Kernel uses AGENTS.md as its instruction layer but adds the knowledge/ and notes/ directories as a dynamic, agent-maintained memory system.
What happens when the memory files get too large?
This is the project’s biggest open question. Currently, there’s no built-in retrieval or summarization mechanism. As the knowledge and notes directories grow, they’ll consume more of the agent’s context window. For long-running projects, you may need to manually prune or summarize older entries.
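One hand-rolled way to do that pruning — purely an illustration, with the archive location, ISO-dated filenames, and 30-day cutoff all assumptions rather than anything the project specifies — is to move stale daily notes into an archive subdirectory:

```python
import shutil
from datetime import date, timedelta
from pathlib import Path

def archive_old_notes(notes_dir: str, keep_days: int = 30) -> int:
    """Move notes older than the cutoff into notes_dir/archive. Returns count moved."""
    cutoff = date.today() - timedelta(days=keep_days)
    archive = Path(notes_dir) / "archive"
    archive.mkdir(exist_ok=True)
    moved = 0
    for note in Path(notes_dir).glob("*.md"):
        try:
            note_date = date.fromisoformat(note.stem)  # expects e.g. 2026-03-23.md
        except ValueError:
            continue  # skip files that aren't daily notes
        if note_date < cutoff:
            shutil.move(str(note), str(archive / note.name))
            moved += 1
    return moved
```

Because the whole memory lives in Git, a pruning pass like this is itself a reviewable commit — you can always recover archived entries from history.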
Can I use Agent Kernel for non-coding tasks?
In theory, yes. The kernel is domain-agnostic — the identity and knowledge system could work for any agent task. But it’s designed with coding workflows in mind, and the supported tools are all coding agents.
You Might Also Like
- Google A2UI (Agent-to-User Interface): Finally a Standard Way for AI Agents to Show You Things
- Agent Action Protocol (AAP): The Missing Layer Above MCP That Actually Makes Agents Production-Ready
- mcp2cli: The Tool That Cuts MCP Token Costs by 99% Just Hit Hacker News
- Can 3 Files Solve AI's Agent Portability Problem? GitAgent Thinks So
- 27K GitHub Stars in Weeks: "Learn Claude Code" by ShareAI Lab Breaks Down AI Coding Agents Into 12 Lessons
