My Professor Says It is AI

by Sam Silva ’28 on February 27, 2026


Opinion - Campus


Many professors would call artificial intelligence a crisis in the classroom, and I do not disagree that AI use is becoming an issue. Nonetheless, this raises the question: who is going to protect students from being wrongfully accused of using AI? In the ongoing conversation about where AI belongs in schools, is the use of AI detection programs adequate? Too often, students who are strong writers are accused of using AI simply for employing proper grammar or professional punctuation, such as the Oxford comma.

While I have never been accused of using AI, many people I know have, including my sister, who is the best writer I know. For context, when I need someone to peer edit my essay, help me word a sentence, or organize my thoughts, I ask my sister. Last semester, after submitting a writing assignment, she received a very poor grade, with one comment from her professor alleging that she had used AI. She immediately emailed him to insist that she had not and offered to share the transcripts of her writing to prove it. He then claimed her writing was too “pedantic,” meaning too wordy or detailed, but said he would raise the grade. He raised it only to an 80 percent, which was still not an accurate reflection of her work: an unjustified poor grade on a well-written paper because a professor had a hunch. My sister is not the only person to have been wrongfully accused of using AI. A friend of mine was accused of using AI on a final paper by a professor who was certain of it. When my friend met with the professor and professed their innocence, the professor wouldn’t hear it and kept the grade at 0 percent.

While I acknowledge the difficulties that come with AI use, how can a professor simply be allowed to accuse you of using AI when so many studies show the inaccuracy of AI detection tools? A study published in Advances in Simulation investigated the ability of humans and AI detection tools to accurately detect AI. The researchers used three detection tools (GPTZero, Grammarly, and Phrasly AI), along with human evaluators, to differentiate between human-written work and several levels of AI usage. They found that human detection of AI-generated text was indistinguishable from guessing. The AI detectors were more reliable, but their accuracy still ranged widely, from 57 percent to 95 percent. The study concluded that while these detectors can be helpful in flagging potential AI use, they should not be relied upon.

Here is the problem with AI detection: it is heavily biased against great writers and against people whose first language is not English. Detectors look for small grammar mistakes, the predictability and structure of the writing, and a natural human tone and style. Yet in academic writing, a great writer does not make mistakes, and the tone of their writing is less human and “conversational.” Instead, it is formal and effectively polished, and it most likely sounds impersonal. Likewise, someone whose first language is not English is targeted by AI detection. Detectors look for predictability, and non-native writers often use shorter sentences or simpler grammar, patterns that get identified with AI-generated writing. AI models are trained largely on the writing of native English speakers; they learn how those writers write and imitate it. So when a non-native English speaker writes, the work is flagged as AI-generated because the detector has not been trained to understand the grammar and structure of those writers.

Students should be protected from the accusations that too often get thrown around. The damage from being wrongfully accused of using AI is detrimental to a student’s academic career and, more importantly, to their mental well-being. Until AI detection makes leaps and bounds of progress in accuracy, it should not be used to pass judgment.