Grammarly Expert Review Controversy: AI Identity Theft & Ethics

Team Gimmie
Published on March 7, 2026

THE IDENTITY IRONY: WHY GRAMMARLY’S LATEST MOVE IS A GHOST STORY

There is a profound, almost dark irony in a communication tool designed to help you find your voice doing so by stealing someone else’s. For years, we have looked to Grammarly as the reliable digital safety net for our prose—the silent partner that catches our typos and fixes our dangling modifiers. But with the launch of its new Expert Review feature, the company has transitioned from a helpful assistant to something far more unsettling: a digital mimic.

This feature promises to provide users with writing advice “inspired by” subject matter experts. On the surface, it sounds like a sophisticated upgrade. In practice, it has become a cautionary tale about the ethics of artificial intelligence. It turns out that when Grammarly says its advice is “inspired by” experts, it often means the advice draws on the identities, professional reputations, and specific voices of journalists and academics without their knowledge or consent. When a tool meant to foster better communication begins to treat human identity as a mere data point to be scraped and replicated, we have officially moved past innovation and into the territory of digital identity theft.

THE VOICES IN THE MACHINE: FROM THE VERGE TO THE GRAVE

The scale of this misstep became clear through investigative reporting from The Verge and Wired. Nilay Patel, the editor-in-chief of The Verge, discovered that Grammarly’s AI was essentially impersonating him. Users seeking feedback might find their work critiqued by an AI persona that not only claimed to be inspired by Patel but mimicked his specific editorial style and professional authority. He was not alone. David Pierce, Sean Hollister, and Tom Warren—all prominent names in tech journalism—found their professional identities being offered up as selling points in a subscription package they never signed up for.

If mimicking the living is an ethical red flag, the Wired investigation revealed something even more macabre. Grammarly’s AI was also drawing on the personas of deceased professors. Imagine a student or a grieving colleague receiving writing advice curated in the style of a mentor who has recently passed away. It is a ghoulish application of technology that prioritizes the illusion of expertise over the basic human dignity of the person being replicated.

This is not just a technical glitch; it is a fundamental betrayal of the trust that users place in productivity software. When we use these tools, we assume the expertise is built on general linguistic principles and broad datasets, not on the unauthorized harvesting of specific individuals’ careers. For the journalists and academics whose names were used, this is a professional violation. For the consumer, it is a deceptive practice that sells the reputation of experts without ensuring those experts actually stand behind the advice being given.

THE PROFESSIONAL RISK: WHY THIS MATTERS TO YOU

You might think that if you aren’t a high-profile editor or a tenure-track professor, this doesn’t affect you. But the precedent Grammarly is setting creates a massive professional risk for every user. If a company can justify using a famous editor’s voice today, what stops it from using your voice tomorrow?

Consider the implications for your own career. If you are a manager or a specialist, your specific way of communicating is part of your professional value. If an AI can perfectly mimic your tone and offer it as a service to your subordinates or competitors, your unique professional identity is effectively commodified.

Furthermore, using these expert reviews creates a significant liability for the writer. If you follow the advice of an AI claiming to be Nilay Patel, but that advice leads to a factual error or a tone-deaf editorial choice, who is responsible? The AI isn’t the editor-in-chief; it’s a simulation. By relying on these unauthorized personas, you are building your work on a foundation of intellectual property theft and hallucinated expertise. In a professional setting, that is a risk that simply isn’t worth the convenience of a polished paragraph.

THE GIFTER’S DILEMMA: THE SOCIAL COST OF UNETHICAL AI

As we navigate a world where AI subscriptions are becoming popular gifts, we have to look beyond the features list. Gifting a tool like Grammarly used to be a safe, helpful gesture for a student or a professional writer. However, in light of the Expert Review scandal, it has become a social minefield.

Giving someone a tool that uses unauthorized personas is more than just a bad purchase—it’s an ethical liability. Imagine gifting this to a student who unknowingly uses an expert persona modeled after a professor at their own university. Or giving it to a junior employee who finds themselves receiving feedback from a digital ghost of their own boss. It’s awkward, it’s ghoulish, and it reflects poorly on the giver.

When you choose a gift, you are endorsing the company’s values. By supporting tools that disregard consent and identity, you are contributing to a culture where human creative work is treated as raw material rather than a contribution to be respected. In the professional world, reputation is everything. Gifting a tool that undermines the concept of professional identity is a shortcut to social and professional embarrassment.

THE AI ETHICS CHECKLIST: HOW TO EVALUATE YOUR NEXT TOOL

The Grammarly situation is a wake-up call. We can no longer assume that because a company is large and established, its AI practices are ethical. Before you buy a subscription for yourself or as a gift, you need to vet the product. Use this checklist to ensure you aren’t paying for identity theft:

  1. Data Sourcing and Consent: Does the company explicitly state where its training data comes from? More importantly, does it have a clear policy on obtaining consent from the individuals whose work or voices are being used to train the AI?

  2. Persona Transparency: If a tool offers an “expert” or “inspired by” feature, ask how those experts are compensated and credited. If the experts aren’t mentioned as partners or collaborators, they are likely being exploited.

  3. The Hallucination Factor: Does the tool claim to offer advice from specific people, or is it drawing from broad stylistic categories? Beware of any tool that names names without showing a partnership agreement.

  4. Opt-Out and Privacy Rights: Does the tool allow you to opt out of having your own writing used to train its models? If the company is taking from others, it is likely taking from you, too.

  5. Reputation and History: Does the company have a history of transparency, or does it launch features first and apologize only after being caught by journalists?

MOVING FORWARD: BETTER WAYS TO POLISH YOUR PROSE

The good news is that you don’t need to wade into an ethical grey area to get good writing help. There are plenty of tools that focus on the mechanics of language without trying to wear the skin of a famous editor.

If you are looking for robust writing assistance, tools like ProWritingAid or the standard Microsoft Editor offer excellent grammar and style checking without the creepy persona-mimicking features. Even the standard version of Grammarly—provided you steer clear of the Expert Review prompts—remains a powerful tool for catching basic errors.

The bottom line is simple: we shouldn’t have to sacrifice the concept of human identity for the sake of a better-written email. Grammarly’s Expert Review feature isn’t just a step too far; it’s a step in the wrong direction. Until the company can prove it has secured the permission of every expert it seeks to mimic, and until it shows a genuine respect for the living and the dead whose work it uses, we should look elsewhere. Your voice is yours alone—don’t let an AI try to sell it back to you.