When students turn to AI, they often do so because they believe it will save them time. Credit: AP / Michael Dwyer

This guest essay reflects the views of Amanda M. Capelli, of Hicksville, a clinical associate professor in New York University's Expository Writing Program.

Before fall classes began, I received a message from OpenAI, sent to my New York University email, letting me know that GPT-5 was here — its "smartest, fastest, and most useful model yet." "Try it for free!" it prompted. This past semester, NYU also piloted a partnership with Google's AI "assistant," Gemini. Generative AI use pervades the college experience and continues to grow at every level, from students to administrators. Some universities, including on Long Island, have capitalized on that growth with new courses and degrees or by opening AI departments.

Still, many educators across grade levels worry about what embracing generative AI truly means for their students, and cheating and plagiarism remain their most widely cited concerns. But if almost two decades of teaching have shown me anything, it's that building pedagogy around fears of student cheating isn't a viable answer. The recent lawsuit by an Adelphi University student is just one of many cases in which AI-detection software has created more problems than it has solved.

In the rising age of generative AI, we must focus our energy not on catching cheaters but on teaching students the value of vulnerability, which is intertwined with learning. This will require a bigger academic reorientation than it might seem at first.

The same Tyton Partners study that shows increasing use of generative AI also reveals that students still prefer in-person interaction when they are struggling with a concept: last spring, students reported preferring to turn to their instructor or a peer 84% of the time. Another study suggests that cheating rates among high school students have remained essentially unchanged since ChatGPT's release.

When students turn to AI, they often do so because they believe it will save them time. Part of "saving time" means skipping the messy, try-out-a-lot-of-bad-ideas-first part of learning. AI skips all of that and simply offers the "good" idea, or at least an idea a student accepts as one. But it also prevents students from experiencing the vulnerability that comes with putting something personal out into the world.

Learning requires vulnerability and openness to critique; we have to stay curious despite potential failure. But valuing the vulnerability that comes with letting ourselves fail is at odds with a system that grades the very attempt.

If we want to value the human mind, we need to reward all the messy vulnerability that comes with it. Students should feel free to take risks and fail, but for that to happen, we need to completely reimagine what grading looks like at the university level. The rise of generative AI in classrooms has revealed a problem deeper than student cheating: it has intensified the debate over grade inflation and what earning an "A" in a college course really means.

Combating this problem will take more than asking students to use a pen instead of a keyboard. It will require a complete revision of grading practices and standards. Until university administrations tackle grade inflation, it will be up to individual teachers to show students that their minds are valued above all else. This carries certain risks for faculty: lower grades may lead to negative student evaluations, and valuing process over product may worsen grade inflation. But risk is part of the endgame. We must value vulnerability by being vulnerable ourselves.
