Pangram introduces ‘mixed’ detection label to target AI-edited student submissions
New model aims to support educators in identifying blended human-AI submissions with segment-level precision
Pangram, a New York-based company that develops AI detection systems, has released an upgraded version of its AI writing detection model. The update is designed to enhance its ability to identify content generated by advanced language models, including those from OpenAI, Anthropic, and Google’s Gemini.
Used across business and education sectors, Pangram’s tool analyzes text and categorizes it as human, AI-generated, or a combination of both. A new feature now highlights “mixed” texts, offering segment-level breakdowns and percentage estimates of human versus AI content.
The company says the upgrade builds on detection accuracy it claims already exceeded that of competing tools, particularly in catching attempts to mask AI usage through paraphrasing or “humanizer” tools.
Focus on student assessment and educator insights
Pangram’s tool is increasingly used in education, where identifying the use of generative AI in student work remains a challenge. The update introduces more granular insight into blended submissions, which are often partially edited by students after generating base content with AI.
Max Spero, CEO of Pangram, said:
“The days of being able to ask ChatGPT to do your assignments, then wash it with Grammarly or QuillBot, and expecting to get away with it – those days are coming to an end. If they’re not already over. With Pangram, teachers are going to spot the use of humanizers with precision and regularity.”
The company claims its detection method avoids reliance on typical metrics like perplexity or burstiness. Instead, it uses a training approach based on “synthetic mirrors” of difficult-to-classify texts, retrained repeatedly to adapt to new model behaviors.
“Our detection technology was already the best because we built it differently,” said Spero. “Pangram does not rely on perplexity and burstiness like other AI classifiers. Pangram is trained using ‘synthetic mirrors’ of the hardest documents to classify and then it is retrained over and over again. That makes it adaptable to new models and doesn’t require significant re-engineering or time-intensive retraining to remain relevant.”
Segment-level visibility for ‘mixed’ submissions
The new “mixed” classification allows educators to see which portions of text are flagged as human-written versus machine-generated, and the share of each in percentage terms. This update aims to help teachers make more informed judgments when reviewing assignments.
Spero said: “Being able to see a breakdown, seventy-thirty, or ninety-ten, will help teachers make better decisions about what they expected from their students and then determine the actions, if any, that might be necessary. More information for teachers is better, more insight is better. We’re able to do that with Pangram.”
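The "seventy-thirty" style breakdown Spero describes can be illustrated with a short sketch. The segment labels, the `Segment` type, and the length-weighted roll-up below are illustrative assumptions, not Pangram's actual API or methodology:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    label: str  # hypothetical per-segment label: "human" or "ai"

def mixed_breakdown(segments):
    """Return (human_pct, ai_pct), weighting each segment by word count.

    Purely illustrative: assumes a classifier has already labeled each
    segment; real detectors may weight by characters, tokens, or
    model confidence instead.
    """
    human = sum(len(s.text.split()) for s in segments if s.label == "human")
    ai = sum(len(s.text.split()) for s in segments if s.label == "ai")
    total = human + ai
    if total == 0:
        return 0.0, 0.0
    return round(100 * human / total, 1), round(100 * ai / total, 1)

# Hypothetical flagged essay: 18 human-written words, 8 AI-generated words.
segments = [
    Segment("My childhood summers were spent on my grandmother's farm.", "human"),
    Segment("Agricultural labor instills discipline, resilience, and time management.", "ai"),
    Segment("I still remember the smell of hay after rain.", "human"),
]
print(mixed_breakdown(segments))  # → (69.2, 30.8)
```

A report like `(69.2, 30.8)` corresponds to the roughly seventy-thirty split mentioned in the quote, giving a teacher a quantitative starting point rather than a binary verdict.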