The University of Notre Dame has sparked an ongoing debate among its faculty over using artificial intelligence (AI) tools in academics. This comes after the institution updated its policy, classifying tools like Grammarly as generative AI. This allows professors to ban them from their courses to maintain academic integrity.
Grammarly’s Evolution Sparks Policy Change
During last spring’s finals, a series of honor code violations involved Grammarly, ten of which ended in “educational outcomes” rather than disciplinary actions. Professors began noticing inconsistencies in students’ writing as it became more “bland” and “formulaic.” Students accused of using generative AI would insist they wrote their pieces themselves but admit to using Grammarly for revisions.
Grammarly started in 2009 as a basic tool for improving grammar and spelling. But as with much software, AI transformed what was once a simple checker into a generative tool capable of producing full sentences and phrases. These features prompted the August policy update by the Office of Academic Standards, which prohibits the use of editing tools unless a professor explicitly states otherwise.
Faculty members are divided over the policy’s implications, especially as AI becomes more integrated not only into academia but also into students’ future careers. The ensuing debate over how to uphold academic standards has revealed differing perspectives on the best path forward.
Divided Opinions on AI Tools in the Classroom
Gerard Powers of the Kroc Institute for International Peace Studies believes that relying on AI tools undermines the learning process. “If Grammarly is editing it for you, then that’s a misuse of AI. It’s plagiarism, period,” he stated firmly. He argues that students need to learn grammar independently to become strong writers rather than depending on AI to do the work for them.
Others do not see a complete ban as the answer. Damian Zurro, a professor with the University Writing Program, believes that Grammarly’s traditional features, such as advanced grammar checks, can still be valuable.
“I’m worried that this policy is starting to paint a bright line that puts Grammarly on the wrong side of that line,” he stated. Zurro hopes for more nuanced guidelines that help students navigate AI ethically without hindering access to tools that improve their writing.
Nathaniel Myers, another writing professor, doesn’t think a blanket prohibition should be the policy’s foundation. “I want them to undergo the writing process for themselves in ways that aren’t immediately turning to a tool that’s helping them write, because there’s value in the friction that’s a part of that work and learning that happens in writing without those tools,” Myers said. “On the flip side, I want them to have the rhetorical knowledge and the skills to navigate these tools in ways they may need for their professional lives and other parts of their lives as well.”
So far, no mass communication has been sent to students about the new policy, leaving many unsure of where they stand on AI use in their classes. “It’s hard because faculty are all over the place. Some want students to use it all the time for everything and others don’t want students to touch it at all,” said Ardea Russo, director of the Office of Academic Standards. “We’re trying to thread the needle and create something that works for everyone.”
The ongoing debate at Notre Dame reflects a widespread challenge across higher education. As AI continues to evolve, institutions must find ways to adapt policies that both protect academic integrity and embrace new technologies.