Top AI Product

We track trending AI tools across Product Hunt, Hacker News, GitHub, and more  — then write honest, opinionated takes on the ones that actually matter. No press releases, no sponsored content. Just real picks, published daily.  Subscribe to stay ahead without drowning in hype.


Clinejection: How a Single GitHub Issue Title Pwned 4,000 Developer Machines

So here’s a wild one. A security researcher named Adnan Khan found that you could compromise thousands of developer machines just by writing a cleverly worded GitHub issue title. No zero-day exploit, no sophisticated malware dropper — just a sentence that tricked an AI bot into doing your bidding. The attack, now known as [Clinejection](https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another), is one of the most creative supply chain attacks I’ve seen in a while.

Here’s how it worked. Cline, the popular open-source AI coding assistant with over 5 million users, had set up an AI-powered issue triage bot using Anthropic’s Claude. The problem? The bot pulled the issue title directly into its prompt without any sanitization. So on January 28, an attacker opened an issue with a title that looked like a performance report but actually contained hidden instructions. The AI bot dutifully followed them, poisoning the GitHub Actions cache, which then contaminated the nightly release pipeline. The end result: an unauthorized `cline@2.3.0` got published to npm with a sneaky postinstall script that globally installed [OpenClaw](https://adnanthekhan.com/posts/clinejection/), a separate AI agent capable of executing shell commands and reading credentials.

About 4,000 developers got hit during the roughly eight-hour window before the Cline team caught it and pulled the package. Not catastrophic in the grand scheme, but the implications are terrifying. One AI tool silently bootstrapping another AI agent on your machine, one that survives reboots and has file system access? That’s a new kind of nightmare.

The story [blew up on Hacker News](https://news.ycombinator.com/item?id=47263595) with over 348 points and sparked a heated discussion about whether we’re moving too fast with AI-powered dev tooling. [Snyk wrote a solid breakdown](https://snyk.io/blog/cline-supply-chain-attack-prompt-injection-github-actions/) of the technical chain too, if you want the gory details. The Grith.ai team, who [documented the full attack](https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another), also followed up with a broader [security audit of seven open-source AI agents](https://grith.ai/blog/security-audit-seven-ai-agents) — spoiler: the results aren’t reassuring.

What really gets me about Clinejection is how it chains together boring, well-understood vulnerabilities — prompt injection, cache poisoning, overly permissive CI tokens — into something genuinely dangerous. None of these issues are new individually. But when you hand an AI bot unsanitized input and give it access to your build pipeline, you’re basically rolling out the red carpet. If you’re running AI bots in your CI/CD, take a hard look at what permissions they actually have. This won’t be the last time someone tries this.


Discover more from Top AI Product

Subscribe to get the latest posts sent to your email.



Leave a comment