-
OpenAI Desktop Superapp: ChatGPT, Codex, and Atlas Are Becoming One App
OpenAI has too many apps — and it knows it. On March 19, Fidji Simo, OpenAI’s CEO of Applications, told employees in an internal note that the company plans to merge ChatGPT, Codex, and the Atlas browser into a single desktop application. “We realized we were spreading our efforts across too many apps and stacks,…
-
OpenDataLoader PDF Scores 0.90 Accuracy Across 200 Real-World PDFs — Highest Among Open-Source Parsers
Every team building a RAG pipeline hits the same wall: PDFs. The format was designed for consistent visual rendering, not for machines to extract structured data. Tables break. Reading order scrambles. Headers vanish. And suddenly your AI knowledge base is hallucinating because the ingestion layer fed it garbage. OpenDataLoader PDF, an open-source project by South…
-
Stripe, Coinbase, and Ramp Built Internal Coding Agents — LangChain Open SWE Gives You the Same Architecture for Free
Stripe has Minions. Coinbase built Cloudbot. Ramp developed Inspect. Three of the most engineering-driven companies on the planet independently arrived at strikingly similar architectures for their internal AI coding agents — isolated cloud sandboxes, curated toolsets, subagent orchestration, and deep integration into existing developer workflows. The problem: all three are proprietary and locked behind company…
-
Get Shit Done (GSD) Hits 35K GitHub Stars — a Music Producer’s Fix for AI’s Context Rot Problem
AI coding agents are powerful — until they aren’t. Anyone who has used Claude Code, Codex, or Gemini CLI on a project longer than a few files knows the pattern: the agent starts sharp, produces clean code, follows instructions precisely. Then, about 60% into the context window, quality starts to slip. Variable names get reused.…
-
Cursor Composer 2 takes on Anthropic and OpenAI with a $0.50/M token coding model — and the benchmarks back it up
For the past two years, AI coding tools have lived and died by the models underneath them. Cursor rode Claude. GitHub Copilot ran on OpenAI. Windsurf mixed and matched. Everyone was a reseller with a nice UI on top. That dynamic shifted on March 19, 2026, when Cursor unveiled Composer 2 — a proprietary, code-only…
-
OpenAI Acquires Astral — and Now Controls the Tools 81K Python Developers Depend On
OpenAI just made its boldest move in the developer tools war. On March 19, the company announced it will acquire Astral, the startup behind uv (81,000+ GitHub stars), Ruff (46,000+ stars), and ty (17,000+ stars) — three Rust-powered tools that have become essential plumbing for modern Python development. The Astral team will fold into OpenAI’s…
-
Lightfield: The $300M Startup That Killed Its Own 25M-User Product to Build a CRM
Most founders would kill for 25 million users. Keith Peiris and Henri Liriani already had them — and walked away. Their company Tome, the AI-powered presentation tool backed by $81 million from Coatue, Greylock, Lightspeed, and GV at a $300 million valuation, was one of the fastest productivity tools to reach a million users. It…
-
Fal AI’s Revenue Doubled to $400M in 6 Months — and an $8B Fundraise Is on the Table
Three years ago, Fal was a small startup building machine learning pipelines for fraud detection. Today, it’s one of the fastest-growing AI infrastructure companies in the world, reportedly pulling in $400 million in annualized revenue and in talks to raise up to $350 million at a valuation that would nearly double what it commanded just…
-
MiniMax M2.7 Scores 56% on SWE-Pro — and It Helped Build Itself
An AI model that writes its own training code, debugs its own failures, and decides whether to keep or revert its own changes. That’s what MiniMax claims M2.7 actually does. Released on March 18, 2026, this is the Shanghai-based company’s follow-up to M2.5, and it introduces something the industry hasn’t seen before at this scale:…
-
Two Consumer GPUs, One Evening, and a 245% Reasoning Boost: How LLM Circuit Finder Works
Most approaches to improving LLM reasoning involve expensive fine-tuning, synthetic data pipelines, or reinforcement learning loops that eat GPU-weeks. LLM Circuit Finder throws all of that out. Instead, it copies three specific transformer layers, pastes them back into the forward pass, and watches logical deduction scores jump from 0.22 to 0.76 on Big-Bench Hard. No…
