So here’s a wild one. Grammarly, which now operates under the Superhuman brand after a late-2025 rebrand, rolled out a feature called “Expert Review” that lets you pick a real-world scholar or writer to “review” your manuscript. Sounds cool in theory, right? Except they forgot one tiny detail: actually asking those experts for permission.
The whole thing blew up when [Verena Krebs](https://cybernews.com/ai-news/grammarly-expert-review-dead-scholars/), a medieval historian at Ruhr University Bochum, posted a screenshot showing that one of the available “experts” was David Abulafia — a prominent historian who passed away in January 2026. Let that sink in for a second. A dead scholar, listed as an available reviewer, with zero ability to consent or object to his name being used this way.
The feature itself launched back in August 2025. It uses what’s known as persona prompting — basically feeding an AI a scholar’s published works and telling it to respond as if it were that person. Grammarly markets it as a way to “meet the expectations of your discipline” by drawing on insights from subject matter experts. The problem is, [those experts had no idea](https://techcrunch.com/2026/03/07/grammarlys-expert-review-is-just-missing-the-actual-experts/). No opt-in, no partnership, nothing. Just their names and reputations slapped onto an AI product.
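Grammarly hasn’t published how Expert Review is built, but a minimal sketch of persona prompting in general might look like the following. Everything here is an illustrative assumption — the function name, the wording of the prompt, and the hypothetical scholar are mine, not Grammarly’s:

```python
def build_persona_prompt(scholar_name: str, excerpts: list[str]) -> str:
    """Assemble a system prompt asking an LLM to role-play a named scholar.

    `scholar_name` and `excerpts` are illustrative inputs; a real product
    would presumably retrieve excerpts from the scholar's published works.
    """
    sources = "\n".join(f"- {e}" for e in excerpts)
    return (
        f"You are {scholar_name}. Review the user's manuscript in the voice, "
        f"style, and scholarly standards reflected in these excerpts of "
        f"{scholar_name}'s published writing:\n{sources}\n"
        "Give feedback as if you personally were reviewing the work."
    )

# The resulting string would be sent as the system message to a chat model.
prompt = build_persona_prompt(
    "Jane Doe",  # hypothetical scholar, not one of Grammarly's listed experts
    ["Excerpt from a monograph on medieval trade routes."],
)
```

Note what the sketch makes obvious: the scholar’s name is load-bearing in the prompt, yet nothing in the pipeline requires that scholar’s consent or involvement.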
It gets worse. Grammarly also introduced an AI grading agent that tries to predict how a specific teacher would score student work. As historian C.E. Aubin [told Wired](https://www.wired.com), “These are not expert reviews, because there are no ‘experts’ involved in producing them.” The [Chronicle of Higher Education](https://www.chronicle.com/article/when-youre-an-expert-reviewing-students-work-on-grammarly-but-you-didnt-know-it) ran a piece with a headline that pretty much says it all: “When You’re an ‘Expert’ Reviewing Students’ Work on Grammarly But Didn’t Know It.”
The backlash has been massive. Between March 7 and 9, at least two related posts hit the [Hacker News](https://news.ycombinator.com) front page, pulling in dozens of points each. [TechCrunch](https://techcrunch.com/2026/03/07/grammarlys-expert-review-is-just-missing-the-actual-experts/), [Futurism](https://futurism.com/artificial-intelligence/grammarly-ai-reviews), [Decrypt](https://decrypt.co/360277/obscene-grammarly-ai-tool-writing-feedback-dead-scholars), and plenty of others piled on. Grammarly’s VP of marketing defended the feature by saying these experts’ works are “publicly available and widely cited.” Which, sure. But publicly available doesn’t mean free to commercially exploit without so much as a heads-up.
This whole episode really captures something uncomfortable about where AI tools are heading. It’s one thing to build on public knowledge. It’s another to put a specific person’s name on AI-generated output they never wrote and never endorsed. That’s not innovation; that’s borrowing someone’s reputation without asking.