
Friday, December 12, 2025

The AI Gray Area Higher Education Doesn’t Want to Talk About

[Illustration: the gray area of AI use and student privacy in higher education.]

Over the past year, many colleges (including the ones I teach at) have moved away from using AI detection tools. The shift is often framed as a privacy issue: student writing cannot legally be uploaded to third-party systems without consent, because doing so would violate FERPA. That argument is valid. Student work is a protected educational record, and institutions are right to be cautious about where it goes.

But here’s the strange twist: while institutions are restricting faculty use of AI detectors, more and more students are quietly uploading their classmates’ essays into AI tools to complete peer reviews. In other words, the very privacy concerns that are shutting down AI detection are being ignored at the student level—where the violation is actually far more direct.

If an instructor submits a suspicious paragraph to an AI detector, it is usually to verify authorship, and the excerpt is often not stored. When a student uploads a classmate’s entire draft into ChatGPT or another tool to “analyze the strengths and weaknesses,” they are feeding someone else’s intellectual property into a system with no protection, no institutional oversight, and no guarantee that the work won’t be used to train models. And unlike instructors, who are bound by FERPA training and institutional policy, students typically have no understanding of what they’re exposing.

The irony is hard to ignore. Institutions are protecting student privacy by removing tools from instructors, while at the same time students are unintentionally violating that same privacy during routine coursework. It is a gray area that nobody seems eager to acknowledge.

And here’s the reality: students will not stop using AI for peer review just because we tell them not to. They didn’t stop using it to write essays when we told them not to. Pretending that a warning in a syllabus will fix the issue is wishful thinking.

If colleges are going to disable AI detectors across campuses and forbid instructors from using them—even when writing shows unmistakable patterns that warrant further review—then institutions must also provide a workable alternative. That might involve institution-approved AI environments that keep all student writing within protected systems, or new workflows that allow instructors to document concerns without violating FERPA restrictions. It may mean clearer policy language, LMS-embedded tools that maintain compliance, or consistent procedures that support faculty rather than leaving them on their own.

What cannot continue is the contradictory expectation that instructors identify AI misuse while simultaneously being denied the tools required to verify or investigate it. Students now have unrestricted access to AI for drafting, revising, and peer reviewing, while faculty are expected to “just figure it out” without support or infrastructure. That imbalance not only fosters inconsistency but also undermines the integrity of peer review and the learning process itself.

If higher education wants academic integrity to remain meaningful in an AI-driven landscape, institutions must give educators the compliant tools, clear policies, and practical systems needed to uphold it. Otherwise, the gray area will keep expanding, and instructors will be left enforcing expectations that students themselves have no intention of honoring while AI sits one tab away. At some point, we have to stop pretending that students—who won’t even write their own assignments half the time—are going to safeguard each other’s privacy out of sheer goodwill.