Most founders hold on to revenue like a lifeline. Ivan Burazin threw his away.
In mid-2025, Burazin and his team at Daytona were running a respectable open-source dev environment manager — standardized workspaces, one-command setup, an enterprise-grade Codespaces alternative. It was growing. It had paying customers. It had $300K in annual recurring revenue.
Then they scrapped it. Daytona 2.0 was rebuilt from the ground up — not for human developers, but for AI agents. The bet: every autonomous coding agent needs its own isolated computer, and nobody was building that infrastructure well enough.
Three months later, Daytona crossed $1 million ARR. Six weeks after that, it doubled. In February 2026, FirstMark Capital led a $24 million Series A, with strategic checks from Datadog and Figma Ventures. The GitHub repo has climbed past 68,000 stars and keeps trending.
The pivot didn’t just work. It worked absurdly fast.
Why AI Agents Need Their Own Computers
Here’s the core problem. When an AI coding agent — whether it’s LangChain’s autonomous coder, a Devin-style assistant, or a custom agent built on Claude or GPT — generates and executes code, that code needs to run somewhere. And “somewhere” can’t be your production server, your local machine, or a shared container where one agent’s `rm -rf` takes down another’s work.
AI agents need isolated, ephemeral environments that spin up instantly, run arbitrary code safely, and disappear when done. They need to install dependencies, write files, run tests, access networks (or not), and iterate — all without human babysitting.
This is what Daytona now provides: programmatic, composable sandboxes where CPU, memory, storage, GPU, and networking are configurable on demand. Each sandbox is essentially a full computer that an AI agent controls via SDK. Start it, pause it, snapshot it, kill it — all through API calls.
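To make the lifecycle concrete, here is a small, self-contained model of the create/pause/snapshot/delete flow described above. This is an illustrative sketch, not the actual Daytona SDK: the class and method names (`SandboxClient`, `create`, `pause`, `snapshot`, `delete`) are hypothetical stand-ins chosen to mirror the API-call pattern the article describes.

```python
from dataclasses import dataclass, field
import uuid

# Toy model of an agent-controlled sandbox lifecycle.
# Names are hypothetical; the real Daytona SDK's API may differ.

@dataclass
class Sandbox:
    sandbox_id: str
    cpu: int = 1
    memory_gib: int = 1
    state: str = "running"
    files: dict = field(default_factory=dict)  # stand-in for the sandbox filesystem

class SandboxClient:
    """Mirrors the start/pause/snapshot/kill flow, all driven by API calls."""
    def __init__(self):
        self._sandboxes = {}
        self._snapshots = {}

    def create(self, cpu: int = 1, memory_gib: int = 1) -> Sandbox:
        sb = Sandbox(sandbox_id=uuid.uuid4().hex[:8], cpu=cpu, memory_gib=memory_gib)
        self._sandboxes[sb.sandbox_id] = sb
        return sb

    def pause(self, sb: Sandbox) -> None:
        sb.state = "paused"

    def snapshot(self, sb: Sandbox) -> str:
        snap_id = f"snap-{len(self._snapshots)}"
        self._snapshots[snap_id] = dict(sb.files)  # checkpoint filesystem state
        return snap_id

    def delete(self, sb: Sandbox) -> None:
        del self._sandboxes[sb.sandbox_id]
        sb.state = "deleted"

client = SandboxClient()
sb = client.create(cpu=2, memory_gib=4)    # the agent gets its own "computer"
sb.files["train.py"] = "print('hello')"    # the agent writes a file
snap = client.snapshot(sb)                 # checkpoint before a risky step
client.pause(sb)                           # suspend between agent turns
client.delete(sb)                          # tear down when the task is done
```

The point of the pattern is that every step is a cheap, programmatic call: nothing here requires a human to provision, SSH into, or clean up a machine.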
The numbers back up the “instant” claim. Daytona reports sandbox creation times between 27 and 90 milliseconds. For context, that’s faster than the blink of an eye (which takes about 300ms). Cold starts under 100ms mean an agent can spin up a fresh environment mid-conversation without the user noticing any delay.
The Pivot That Shouldn’t Have Worked (But Did)
Burazin’s background makes the pivot less surprising in hindsight. He co-founded Codeanywhere back in 2009 — one of the first browser-based IDEs — and later built the Shift developer conference into a major European tech event. After a stint as Chief Developer Experience Officer at Infobip, he launched Daytona to tackle dev environment standardization.
But the original Daytona was solving a mature, somewhat commoditized problem. GitHub Codespaces, Gitpod, and Coder already occupied the space. The AI agent infrastructure space, by contrast, was wide open and growing exponentially.
The timing aligned with a broader industry shift. As coding agents graduated from demo toys to production tools in late 2025, the question moved from “can AI write code?” to “where does AI-written code actually run?” Companies building agents suddenly needed sandbox infrastructure they didn’t want to build themselves.
Daytona’s customer list reflects this pull: LangChain uses Daytona sandboxes as the backbone of its production coding agent. Turing, Writer, and SambaNova are onboard. Multiple Fortune 100 companies are in the mix. The adoption curve isn’t driven by marketing — it’s driven by a gap that needed filling.
How Daytona Stacks Up Against E2B, Modal, and the Competition
Daytona isn’t the only player in the AI sandbox space, and the differences between options matter depending on your use case.
E2B takes a microVM approach — each sandbox gets its own dedicated kernel, providing hardware-level isolation. This is more secure for running truly untrusted code but comes with slightly higher cold start times (90-150ms) and less flexibility for persistent, stateful workloads. E2B is strong for ephemeral execution: spin up, run code, tear down. Pricing starts with a free Hobby plan ($100 in one-time credits) and a $150/month Pro plan, with usage billed per second at roughly $0.05/hour for a 1 vCPU sandbox.
Modal uses gVisor-based isolation with a deny-by-default network posture. It shines for GPU workloads and Python-heavy pipelines. If your agents need to run ML inference or training as part of their workflow, Modal is worth a look. But its sandbox feature set is narrower than Daytona’s for general-purpose agent computing.
Daytona sits in a different spot. It uses Docker containers (shared kernel, faster startup) and focuses on persistent, stateful environments where agents build up context over time. An agent can install dependencies, modify files, and iterate across multiple sessions without starting from scratch. The SDKs cover Python, TypeScript, Ruby, and Go — broader language support than most competitors. And the Computer Use capability (browser and desktop automation from within a sandbox) is a feature that E2B and Modal don’t match.
On pricing, Daytona uses pure usage-based billing with no subscription tiers gating features. A 1 vCPU / 1 GiB sandbox runs about $0.067/hour. New signups get $200 in free credits, and startups can apply for up to $50,000 in credits.
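To see what per-second billing at that rate actually costs, here is a quick worked calculation using the figures quoted above ($0.067/hour for a 1 vCPU / 1 GiB sandbox, $200 in free credits); the function name is just for illustration.

```python
RATE_PER_HOUR = 0.067   # 1 vCPU / 1 GiB sandbox, per the pricing above
FREE_CREDITS = 200.00   # new-signup credit, per the pricing above

def sandbox_cost(seconds: float, rate_per_hour: float = RATE_PER_HOUR) -> float:
    """Per-second billing: you pay for exactly the seconds the sandbox runs."""
    return seconds / 3600 * rate_per_hour

# A 45-second agent task costs a fraction of a cent:
task_cost = sandbox_cost(45)                      # ~$0.0008
# Hours of 1 vCPU sandbox time covered by the free credits:
hours_on_credits = FREE_CREDITS / RATE_PER_HOUR   # ~2,985 hours
```

In other words, the free credits alone cover nearly 3,000 hours of single-vCPU sandbox time, which is why short-lived agent tasks are effectively free to experiment with.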
| Feature | Daytona | E2B | Modal |
|---|---|---|---|
| Isolation | Docker containers | MicroVMs | gVisor |
| Cold start | 27-90ms | 90-150ms | Varies |
| Persistent state | Yes | Limited | No |
| GPU support | Yes | No | Yes |
| SDKs | Python, TS, Ruby, Go | Python, TS | Python (TS beta) |
| Computer Use | Yes | No | No |
| Free credits | $200 | $100 | $30 |
NVIDIA OpenShell Enters the Ring
Daytona’s timing got even more interesting at GTC 2026 in mid-March. NVIDIA unveiled OpenShell as part of its Agent Toolkit — an open-source runtime that enforces policy-based security guardrails for autonomous agents. OpenShell includes a sandbox component, a policy engine governing filesystem/network/process layers, and a privacy router controlling where inference data flows.
The NVIDIA announcement validated the category. When Jensen Huang is talking about giving agents secure execution environments on the GTC stage, the “every agent needs a computer” thesis stops being a startup pitch and starts being industry consensus.
But OpenShell and Daytona are playing different games. OpenShell is an enterprise governance layer — think security policies, compliance guardrails, and audit trails for agent actions across large organizations. It’s backed by partners like Adobe, Salesforce, SAP, and ServiceNow. Daytona is the raw compute primitive — the actual sandbox where code runs, files get written, and processes execute.
In practice, they’re more complementary than competitive. An enterprise might use OpenShell’s policy engine to govern what agents are allowed to do, while those agents actually execute code inside Daytona sandboxes. The NVIDIA announcement is a rising tide for the entire agent infrastructure space.
What Makes Daytona Stick
Beyond raw performance metrics, a few things stand out about why Daytona is gaining traction so quickly.
The LangChain integration is first-class. Daytona ships a `DaytonaDataAnalysisTool` that plugs directly into LangChain pipelines, enabling agents to perform sandboxed Python data analysis with file upload/download support and multi-step workflows. For teams already building on LangChain, adding Daytona is a few lines of code, not an infrastructure project.
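The general shape of that integration, exposing sandboxed code execution as a callable tool, can be sketched without any framework dependency. This is not the actual `DaytonaDataAnalysisTool` API; a local subprocess stands in for the remote sandbox so the sketch stays runnable, and the tool name is made up.

```python
import subprocess
import sys

# Illustrative tool wrapper (hypothetical names, not Daytona's real API).
# In a real pipeline the code string would be sent to a remote sandbox;
# here a local subprocess stands in so the example is self-contained.

def run_python_in_sandbox(code: str, timeout: float = 10.0) -> str:
    """Execute agent-generated code out-of-process and return captured stdout."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    if result.returncode != 0:
        return f"error: {result.stderr.strip()}"
    return result.stdout.strip()

# Registered under a name, an agent framework can invoke it like any tool:
TOOLS = {"python_sandbox": run_python_in_sandbox}
output = TOOLS["python_sandbox"]("print(sum(range(10)))")  # "45"
```

The agent never touches the host process directly: it hands over a code string and gets back stdout or a structured error, which is the same contract a remote sandbox tool exposes.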
Spend controls are built in. One of the quiet nightmares of agent infrastructure is runaway costs — an agent stuck in a loop burning compute for hours. Daytona includes automatic resource monitoring and configurable spend limits that cap usage before it spirals.
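The mechanics of a spend cap are simple to sketch. The following is a hypothetical illustration of the idea, not Daytona's implementation: accrue cost per billed tick and halt the loop the moment a configured limit is reached, using the $0.067/hour rate from earlier as the example rate.

```python
class SpendLimitExceeded(RuntimeError):
    pass

class SpendGuard:
    """Hypothetical spend cap: stop an agent loop once accrued cost hits a limit."""
    def __init__(self, rate_per_hour: float, limit_usd: float):
        self.rate_per_second = rate_per_hour / 3600
        self.limit_usd = limit_usd
        self.accrued = 0.0

    def charge(self, seconds: float) -> None:
        self.accrued += seconds * self.rate_per_second
        if self.accrued >= self.limit_usd:
            raise SpendLimitExceeded(
                f"cap ${self.limit_usd} hit at ${self.accrued:.4f}"
            )

guard = SpendGuard(rate_per_hour=0.067, limit_usd=0.01)
stopped_after = None
try:
    # Simulate an agent stuck in a loop, billed in 60-second ticks.
    for _ in range(100):
        guard.charge(60)
except SpendLimitExceeded:
    stopped_after = guard.accrued  # loop halted after ~9 ticks, around a cent
```

A runaway loop that would otherwise burn compute for hours gets cut off within pennies of the configured ceiling.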
The open-source core is real. Unlike some “open-source” companies where the free version is a skeleton, Daytona’s GitHub repo (AGPL-3.0 licensed) contains the actual infrastructure. The 68,000+ stars aren’t vanity — they represent a community that’s deploying and extending the platform. The repo has been consistently trending on GitHub through early 2026.
The team is small but experienced. At roughly 20 people, Daytona is punching well above its weight. Burazin has been building developer infrastructure for 15+ years, and the team’s ability to execute a full platform pivot while maintaining growth is notable.
The Bigger Picture for Agent Infrastructure
Daytona’s rapid ascent points to a broader structural shift in how software gets built. As AI agents move from writing code snippets to managing entire development workflows, the infrastructure layer underneath them becomes critical.
Think of it this way: the cloud computing wave needed AWS to provide standardized compute. The container wave needed Docker and Kubernetes. The AI agent wave needs its own compute primitive — isolated, programmable, instant-on environments purpose-built for non-human developers.
Daytona is betting it can be that primitive. With $24 million in the bank, revenue doubling every six weeks, and a customer list that includes some of the biggest names in AI, the bet is looking solid. The question isn’t whether agent infrastructure is a real category — NVIDIA just confirmed that at GTC. The question is whether Daytona can maintain its lead as bigger players inevitably enter.
For now, 68,000 GitHub stars and a trajectory from zero to $2M+ ARR in under five months suggest the answer is yes.
FAQ
What is Daytona and who is it for?
Daytona is an open-source platform that provides secure, elastic sandbox environments for AI agents to execute code. It’s built for developers and companies building AI coding agents, autonomous workflows, or any application where AI-generated code needs to run safely in isolation. Customers range from Y Combinator startups to Fortune 100 enterprises.
How much does Daytona cost?
Daytona uses pure usage-based pricing with no subscription fees. A sandbox with 1 vCPU and 1 GiB of RAM costs approximately $0.067 per hour, billed per second. New accounts receive $200 in free compute credits with no credit card required. Startups can apply for up to $50,000 in credits.
How does Daytona compare to E2B?
The main architectural difference is isolation: Daytona uses Docker containers (faster startup, shared kernel) while E2B uses microVMs (dedicated kernel, stronger isolation). Daytona excels at persistent, stateful agent workflows and offers broader SDK support (Python, TypeScript, Ruby, Go). E2B is better suited for ephemeral, one-shot code execution where hardware-level isolation is a priority.
Can I self-host Daytona?
Yes. Daytona’s core is open-source under the AGPL-3.0 license and available on GitHub. You can self-host the infrastructure on your own servers. The managed cloud service is also available for teams that prefer not to manage infrastructure themselves.
What programming languages does the Daytona SDK support?
Daytona provides official SDKs for Python, TypeScript, Ruby, and Go. The SDK offers programmatic control over sandbox creation, file operations, process execution, and resource management — everything an AI agent needs to operate autonomously within a sandboxed environment.