Top AI Product

We track trending AI tools across Product Hunt, Hacker News, GitHub, and more — then write honest, opinionated takes on the ones that actually matter. No press releases, no sponsored content. Just real picks, published daily. Subscribe to stay ahead without drowning in hype.


Your Anonymous Posts Aren’t Anonymous Anymore — Inside the Large-Scale LLM Deanonymization Research

So here’s something that should make you uncomfortable: a group of researchers just proved that LLMs can figure out who you are from your “anonymous” online posts, and they can do it at scale for about four bucks per person.

The paper, [“Large-scale online deanonymization with LLMs”](https://arxiv.org/abs/2602.16800), comes from [MATS Research](https://www.matsprogram.org/research/large-scale-online-deanonymization-with-llms) — authored by Simon Lermen, Daniel Paleka, Joshua Swanson, Michael Aerni, Nicholas Carlini, and Florian Tramèr. The core idea is straightforward but terrifying: they built an agent pipeline that reads your posts, extracts identity clues (location hints, job references, niche interests), searches for matching profiles using embeddings, and then has the LLM reason through the top candidates to confirm a match. No hand-crafted features, no carefully aligned datasets. Just raw text in, real identity out.
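To make the pipeline concrete, here is a minimal sketch of the three stages — extract clues, rank candidate profiles by embedding similarity, then verify the top hits. Everything here is a toy stand-in: the bag-of-words "embedding" and the function names are illustrative assumptions, not the paper's actual method (which uses real embedding models and an LLM verifier).

```python
# Hypothetical sketch of the agent pipeline described above.
# Stage 1: extract identity clues from a user's posts.
# Stage 2: embed clues and rank candidate profiles by similarity.
# Stage 3 (not shown): an LLM reasons through the top candidates.
# The bag-of-words "embedding" below is a toy stand-in, NOT the paper's method.
from collections import Counter
import math

def extract_clues(posts):
    """Toy clue extractor: collect lowercase tokens from the posts."""
    tokens = Counter()
    for post in posts:
        tokens.update(w.strip(".,!?").lower() for w in post.split())
    return tokens

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(posts, profiles, top_k=3):
    """Stage 2: rank candidate profiles by clue similarity."""
    clues = extract_clues(posts)
    scored = [(name, cosine(clues, extract_clues([bio])))
              for name, bio in profiles.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

# Example: posts leaking a job niche and a city hint.
posts = ["Debugging Rust embedded firmware again", "Love hiking near Zurich"]
profiles = {
    "alice": "Embedded Rust engineer in Zurich who hikes on weekends",
    "bob": "Pastry chef in Lyon",
}
print(rank_candidates(posts, profiles, top_k=1))
```

The real system replaces the toy similarity with learned embeddings and hands the shortlist to an LLM for final confirmation — but the shape of the attack is this simple.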

The numbers speak for themselves. Matching Hacker News users to their LinkedIn profiles? 67% success rate at 90% precision — meaning when the system says it found you, it’s almost certainly right. Cross-community Reddit matching hit 99% precision. And the best non-LLM baseline? Basically 0%. The entire set of experiments cost under $2,000 to run.
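Two different metrics are at play there: precision is the fraction of *claimed* matches that are correct, while the success rate is the fraction of *all* users matched. A quick sketch with illustrative counts (these are made-up numbers chosen to reproduce the reported percentages, not the paper's raw data):

```python
# Precision vs. success (match) rate, with illustrative counts only.
def precision(true_matches, claimed_matches):
    # Of the matches the system claims, how many are right?
    return true_matches / claimed_matches

def match_rate(true_matches, total_users):
    # Of all users, how many did the system correctly link?
    return true_matches / total_users

# Hypothetical: out of 1000 users, the system claims 744 matches,
# of which 670 are correct.
claimed, correct, total = 744, 670, 1000
print(round(precision(correct, claimed), 2))   # 0.9  -> 90% precision
print(round(match_rate(correct, total), 2))    # 0.67 -> 67% success rate
```

The precision figure is what makes the result dangerous: a system that is right nine times out of ten when it names you is actionable, not noisy.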

This blew up in late February and early March. It sat at the top of [Hacker News](https://news.ycombinator.com/) on March 8th and sparked a massive thread. Bruce Schneier [wrote about it](https://www.schneier.com/blog/archives/2026/03/llm-assisted-deanonymization.html) on March 3rd, pointing out that while targeted deanonymization was always theoretically possible, the automation and cost reduction are what change the equation. [The Register](https://www.theregister.com/2026/02/26/llms_killed_privacy_star/) and [PCWorld](https://www.pcworld.com/article/3076653/ai-could-dox-your-anonymous-posts.html) both ran pieces, and it got heavy discussion on LessWrong too.

What makes this genuinely worrying is the economics. Previous deanonymization required a skilled analyst spending hours per target. Now any organization with modest resources can run this against thousands of users. The “practical obscurity” that most pseudonymous users rely on — the assumption that nobody will bother connecting the dots — simply doesn’t hold anymore when an LLM agent can do it automatically.

The researchers are upfront about the implications: we need to rethink online privacy threat models entirely. If you’ve ever shared specific details about your job, your city, or your hobbies across different accounts, an LLM can probably connect those dots. That’s not a hypothetical anymore — it’s a demonstrated capability with published code and reproducible results.

