Grammarly, the popular writing-assistance platform, is temporarily disabling its “Expert Review” tool following backlash from professionals who objected to the AI using their published work without explicit consent. The feature, launched in August, used large language models (LLMs) and publicly available data to generate suggestions inspired by the writing styles of influential figures.
The Controversy Explained
Grammarly CEO Shishir Mehrotra acknowledged the criticism received over the past week as valid. Experts expressed concern that the AI misrepresented their voices and intellectual property. While intended to connect users with authoritative perspectives, the feature instead raised questions about AI-driven imitation and the rights of creators in the age of generative AI.
“We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we’ll rethink our approach going forward.”
— Shishir Mehrotra, Grammarly CEO
The core issue isn’t just accuracy; it’s control. Experts want to decide whether their work is used to train AI, how it’s represented, and whether they benefit financially from its use. Grammarly’s initial approach bypassed these considerations, triggering the backlash.
Future Plans: Empowering Experts
Mehrotra announced that Grammarly will reimagine the feature to grant experts greater agency. The goal is to create a system where creators can actively participate, shaping how their knowledge is integrated into AI tools and potentially monetizing their contributions.
Grammarly envisions a broader platform where anyone can build AI agents that function like its existing assistant – effectively opening the writing ecosystem to third-party extensions. The company pitches this as a way for experts to reach Grammarly’s users at a similar scale, but only under explicitly defined terms and with clear control over their intellectual property.
Why This Matters
This incident highlights a growing tension between AI development and creator rights. As generative AI becomes more sophisticated, the ability to replicate voices and styles raises critical questions about ownership, attribution, and fair compensation. Grammarly’s response – pausing the feature rather than doubling down – suggests a willingness to address these concerns, setting a potential precedent for how other AI companies navigate similar challenges.
The long-term goal is to shift from AI mimicking experts to AI collaborating with them, with clear consent and benefit-sharing structures in place.
