The “AI will write malware someday” debate is over. On May 11, Google’s Threat Intelligence Group confirmed the first documented case of criminals using an LLM to find AND weaponize a working zero-day: a 2FA bypass in a widely used open-source web admin platform. Not a research demo. A live attack, aimed at mass exploitation.
How Google Knew It Was AI
GTIG attributed the exploit to AI with “high confidence” by reading the Python code itself. The giveaways: a hallucinated CVSS score, suspiciously polished educational docstrings, and the textbook Pythonic style that saturates LLM training data. No human pentester writes like that; humans write ugly, fast, and inconsistent code. The exploit was clearly built to scale, and Google quietly tipped off the platform’s maintainer before the wave landed.
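To make those attribution signals concrete, here is a minimal, hypothetical sketch of the kind of stylometric check the report describes: flagging Python source that cites its own CVSS score and docstrings every function tutorial-style. The function name, regex, and thresholds are my illustration, not GTIG’s actual methodology.

```python
import ast
import re

# Illustrative pattern: exploit code citing its own CVSS score
# (e.g. "CVSS v3.1 score: 9.8") is an odd tell for human authors.
CVSS_RE = re.compile(r"CVSS[:\s]*v?\d(\.\d)?\D*(\d{1,2}\.\d)")

def llm_style_signals(source: str) -> dict:
    """Return simple stylometric signals for a Python source string."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    docstrings = [ast.get_docstring(f) for f in funcs if ast.get_docstring(f)]
    return {
        # Does the code hallucinate a severity score for itself?
        "mentions_cvss": bool(CVSS_RE.search(source)),
        # LLM output tends to docstring *every* function, textbook-style.
        "docstring_coverage": len(docstrings) / len(funcs) if funcs else 0.0,
    }

sample = '''
def bypass_check(token):
    """Bypass the 2FA check (CVSS v3.1 score: 9.8)."""
    return token is not None
'''
print(llm_style_signals(sample))
# → {'mentions_cvss': True, 'docstring_coverage': 1.0}
```

Real attribution obviously rests on far more than two heuristics, but the point stands: AI-generated exploits leave stylistic fingerprints that are cheap to scan for.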
Why It Matters Now
This is criminals shipping AI-discovered vulns into production attacks, not a paper at Black Hat. John Hultquist, GTIG’s chief analyst, said it cleanly: “Anyone still treating AI-assisted vulnerability discovery as a future problem is already behind.” Pair this with Frame Security’s $50M Series A the same week and the shape of 2026 cybersec is obvious — AI on offense, AI on defense, human defenders stuck in the middle hoping their stack is faster than the attacker’s.