OpenAI shipped GPT-5.5-Cyber on May 7, 2026 — a fork of GPT-5.5 with the cybersecurity guardrails dialed back. Vetted defenders can have it write proof-of-concept exploits, run attack simulations, and validate vulnerabilities — work that gets a polite refusal in standard ChatGPT.
How the split works
Two tracks, one model family. Standard GPT-5.5 stays as it was — log triage, detection rules, defensive code review for the broad cybersecurity audience. Cyber goes only to a small set of identity-verified customers: enterprise red teams, pentest firms, and vendor security groups. Access is granted manually through the Trusted Access for Cyber program, is revocable, and from June 1 requires phishing-resistant authentication. There is no paid upgrade path.
Why it matters
This is OpenAI’s direct response to Anthropic’s Claude Mythos, the model rumored to be hunting zero-days. Anthropic went on offense; OpenAI is arming the defense at the same capability tier. Approved teams hit the model through the standard OpenAI API — same SDK, fewer refusals. Typical workloads: fuzzing harness generation, exploit-chain reasoning, payload drafting, binary reverse engineering, detection rules from raw exploit code. First time a frontier lab has formalized a permissive-cyber tier, and almost certainly not the last.
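Since approved teams use the same SDK as everyone else, the call shape is ordinary. A minimal sketch of one of the listed workloads — generating detection rules from raw exploit code — assuming a hypothetical model identifier `gpt-5.5-cyber` (OpenAI has not published the actual string) and an API key already enrolled in Trusted Access for Cyber:

```python
# Sketch only: the model name "gpt-5.5-cyber" is an assumption, and the
# request will be refused unless the key is enrolled in Trusted Access
# for Cyber. The SDK usage itself is the standard OpenAI Python client.
import os


def build_request(exploit_source: str) -> dict:
    """Build a chat request asking for detection rules from exploit code."""
    return {
        "model": "gpt-5.5-cyber",  # assumed identifier, not confirmed
        "messages": [
            {
                "role": "system",
                "content": "You assist a vetted defensive security team.",
            },
            {
                "role": "user",
                "content": "Write Suricata detection rules for this "
                           "exploit:\n" + exploit_source,
            },
        ],
    }


if __name__ == "__main__":
    req = build_request("/* PoC source goes here */")
    # Only hit the API when a key is configured; otherwise just show
    # the payload that would be sent.
    if os.environ.get("OPENAI_API_KEY"):
        from openai import OpenAI  # same SDK as standard GPT-5.5

        client = OpenAI()
        resp = client.chat.completions.create(**req)
        print(resp.choices[0].message.content)
    else:
        print(req["model"])
```

The point of the sketch is the lack of any special endpoint: the permissive tier is a server-side entitlement on the key, not a different API surface.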