The website Transparency Support makes it easy for teachers to make their expectations for AI use clear and for students to disclose their usage. By checking a few boxes, teachers and students generate custom statements that define expectations or disclose usage, ready to copy directly into any assignment. Teachers might even open it in front of students to co-create parameters for an assignment. We have linked it in the P-CCS AI Guidance. While transparency is powerful, it is only one piece of this complex topic.
Academic dishonesty is often like speeding: a situational decision based on risk and reward rather than character alone. Research suggests it is driven by opportunity and pressure. To address this, we must “redesign the road” by pairing psychological insights with practical tools like Transparency Support, the AI Usage Scale, and Proof of Positive Authorship to eliminate ambiguity.
The Psychology of Misconduct
Understanding the underlying drivers of cheating allows us to move from reactive policing to proactive prevention. Research highlights two critical factors:
The “Cheater’s Triangle”
According to Routine Activity Theory, misconduct isn’t random. It requires three elements: a motivated offender (a stressed student), a suitable target (an outsourceable assignment), and a missing guardian (a lack of barriers). Effective guardianship involves “designing out” opportunities through personalized assessments rather than relying solely on proctoring.

The Calculation of Risk vs. Reward
Research on risk versus reward identifies a tipping point: cheating is rare on assignments worth 10% of a grade but spikes when an assignment reaches 30%. To mitigate this, educators might consider prioritizing frequent, lower-stakes assessments to keep temptation low.
Practical Strategies
While psychology explains the why, specific tools provide the how for prevention. Cheating is deterred by Moral Alignment (internal values) and the teacher acting as a capable guardian.
Moral Alignment
To cultivate Moral Alignment, the P-CCS AI Guidance recommends integrating AI literacy using resources like Michigan Virtual’s Student Guide to AI and Common Sense Media lessons (P-CCS AI Guidance). These tools help students build an internal ethical compass, which must be balanced with necessary external checks.
Capable Guardianship
Uncertainty often fuels misconduct. When rules are unclear, the line between resourcefulness and dishonesty blurs.
- The AI Usage Scale: P-CCS AI Guidance recommends a standardized scale (e.g., “No AI” to “Full AI Collaboration”) to create a shared language. Students can simply check the “Level” to understand boundaries without deciphering complex policies.
- Transparency Support: As a newly introduced resource, this website serves as the practical bridge between high-level policy and daily classroom instruction. While the Usage Scale sets the general “level,” the Transparency Support site provides the specific “rules of the road.” It functions as a centralized hub where educators can access and generate standardized language for their assignments, explicitly listing which uses are permitted (e.g., “Fixing grammar and spelling”) versus which are prohibited (e.g., “Adjusting tone”). By providing this level of granularity, it effectively removes the “gray area” where ambiguity often leads to accidental misconduct.
- Proof of Positive Authorship (PPA): Rather than relying on unreliable AI detectors, PPA secures the process itself, empowering students to prove they did the work by emphasizing creation over the final product:
  - Version History: Use Google Docs version history to make the writing process visible; the SchoolAI extension can assist.
  - Scaffolded Drafts: Grade outlines and drafts, not just the final essay.

Focusing on the process eliminates “opportunity.” However, educators must remain nuanced: version history isn’t foolproof (paid extensions can mimic typing), and legitimate accommodations (voice-to-text) can look suspicious. PPA should be a holistic conversation, not just a technical check.
Conclusion
Cheating is often a situational response, not just a character flaw. By combining lower-stakes assessments with the clarity of Transparency Support and the process-focus of Proof of Positive Authorship, we create environments where integrity is the most logical and rewarding path.
AI Help Statement (generated from Transparency Support)
I used Gemini and NotebookLM to help me with organizing ideas, summarizing text, starting a rough draft, rewording, and creating visuals. I contributed by researching, curating resources, suggesting edits, rewording for clarity and voice, reorganizing the structure, rewriting to align with my goals, and collaborating with a colleague.