Tag: Cheat

  • Understanding why students cheat and use AI: Insights for meaningful assessments

    Key points:

    • Educators should build a classroom culture that values learning over compliance

    In recent years, the rise of AI technologies and the increasing pressures placed on students have made academic dishonesty a growing concern. Students, especially in the middle and high school years, have more opportunities than ever to cheat using AI tools, such as writing assistants or even text generators. While AI itself isn’t inherently problematic, its use in cheating can hinder students’ learning and development.


  • Games and their cheat codes can show universities how to unlock new purpose

    I was recently browsing Board Game Geek, an online forum for nerds who like tabletop games, and came across a thread entitled “anyone have a use for the University?”

This contained a complaint about the board game Puerto Rico. In Puerto Rico, although the University is potentially a very powerful card, it’s considered too expensive and therefore not worth players’ investment – and I couldn’t help being struck by a resonance with real-life higher education in the UK.

    Following the recent increase in tuition fees, reports of students perceiving university education as a poor investment of time and money have proliferated. As such, understanding and communicating the value of higher education has become an increasingly pressing concern.

    Value and metaphor

    In 2024, over 1,000 papers were published which mention the value of higher education, going over themes like economic gain, professional and academic experience, networking, “cultural capital”, and a sense of the value that higher education institutions offer to society in general. Authors explore how value is perceived differently by applicants, students, graduates, staff and the public, and by different demographic communities within these groups. Undoubtedly, the value of higher education is multifaceted and complex.

A powerful way of understanding value is through metaphor. When we use a metaphor, we ascribe the value of one thing to another. For instance, “universities are beacons of knowledge” positions universities as guiding lights, illuminating the path to progress (or something).

    Some common metaphors ascribed to universities include: universities are innovators that drive progress and create new ideas; universities are catalysts for personal and societal transformation; and universities are providers which supply a skilled workforce to deliver economic growth.

    When metaphors are layered together, they become a narrative – a way of conveying greater meaning through interconnected symbols. Games, as a form of interactive storytelling, take this concept even further. They combine metaphors with player agency, allowing players to actively engage with and shape the narrative. In games, players don’t just passively observe metaphors at work; they inhabit and interact with them.

    The player of games

Because games are dynamic, universities appear in them only when they are actively doing something: acting on the simulation and changing the outcome for the player. Analysing these dynamics leads to some thought-provoking insights into how universities are perceived as acting on the real world, and therefore what value higher education holds in society.

    Our most familiar metaphors for universities are easily recognisable in games. For example, in strategy games such as Age of Empires, universities are innovators which generate “research points” which can be spent to unlock new things. In city-building games like Megapolis, universities are providers that give the player more resources in the form of workers. In Cities: Skylines, universities are catalysts for growth: once a citizen has attended university their home will be upgraded to higher building levels, and they can get better jobs, which in turn levels up their place of employment.

    To return to Puerto Rico: in the normal rules of the board game, players can “construct” a building (such as a factory or warehouse) but cannot use it until the next “mayor phase” is triggered, at which point they can be “staffed”, and its benefits can be used by the player thereafter. The university card grants the player the ability to both “construct” and “staff” new buildings instantly, without waiting. This significantly speeds up the gameplay for the owner of the card.

    When used in this way, the university card changes the mechanics of the game for the player who can use it.

    Puerto Rico is not alone in this. For example, in Struggle for Catan, the university card allows the possessor to buy future cards more easily by swapping one required resource for any other kind. This has such an unbalancing effect that it changes the game from that point onwards. As one Board Game Geek user puts it:

    When I play with my wife we ban the University to keep it a friendly game […] In a four player game everyone just gangs up on whoever gets the University.

    In both of these games, universities are cheat codes: “a secret password […] that makes something unusual happen, for example giving a player unusual abilities or allowing them to advance in the game.”

    Cheat codes are used by players to create exceptions to the standard game rules everyone else must abide by. Universities change the mechanics of the game and enable players to act in a way that would be otherwise impossible.

    Real-life cheat codes

The idea of students using universities to gain an advantage is not new. When university strategies talk about “transforming students’ lives”, this is generally what they’re referring to. “Educational gain”, “cultural capital”, “graduate attributes” and “personal development” are all facets of the same sort of idea.

    However, I’d argue that using the metaphor of a “cheat code” forces us to see students as active players who are using their experiences agentically and strategically, rather than just passively receiving something. When a player uses a cheat code, they generally have an intention in mind. Using the game metaphor reminds us to see students as individual players, who are interested in developing their own palette of cheat codes for their own personal goals.

If the value of a university experience for students is in developing and testing cheat codes, then we should be intentionally structuring higher education to teach the most effective “hacks”. As Mark Peace has argued on this site in the past, we mustn’t be complacent about the process by which students “catch” transferable skills. We need to be much more intentional about how we scaffold the development of these cheat codes, and how we work collaboratively with students to identify the skills they want to build and create meaningful ways to help them develop their own toolbox of cheat codes.

    Without this, there is a real danger that we will return to the original scenario of this article, the forum post bemoaning the high-cost, low-return of the university card in Puerto Rico. We must guard against the “university card” being almost unplayable, because it is too expensive, not flexible enough, or too dated. The challenge to institutions is to ensure our provision is more like the university card in Struggle for Catan: truly game-breaking.

Thinking about universities in terms of game design invites us to rethink the rules we’re playing by and imagine a world where some rules don’t apply. It’s a reminder that the narratives that shape higher education aren’t set in stone. Players have autonomy and can change the direction of the game. This might mean building a toolbox for life with students – and for us, it means taking a wider look at the system we’re part of. What would it look like to recover our agency and, as Edward Venning recently put it on HEPI, “recover an assertive self-confidence”? For too long, universities have been stuck playing the game instead of changing the rules.


  • Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)

    This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.

    The plot contours are easy. A PhD student at the University of Minnesota was accused of using AI on a required pre-dissertation exam and removed from the program. He denies that allegation and has sued the school — and one of his professors — for due process violations and defamation respectively.

    Starting the case.

    The coverage reports that:

    all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software. 

Personally, when I see that four members of the faculty unanimously questioned the authenticity of his work, I am out. I trust teachers.

I know what a serious thing it is to accuse someone of cheating; I know teachers do not take such things lightly. When four go on the record to say so, I’m convinced. Barring some personal grievance or prejudice, which could happen, it’s hard for me to believe that all four subject-matter experts were just wrong here. Also, if there was bias or petty politics at play, it probably would have shown up before the student’s third year, not just before starting his dissertation.

    Moreover, at least as far as the coverage is concerned, the student does not allege bias or program politics. His complaint is based on due process and inaccuracy of the underlying accusation.

    Let me also say quickly that asking ChatGPT for answers you plan to compare to suspicious work may be interesting, but it’s far from convincing — in my opinion. ChatGPT makes stuff up. I’m not saying that answer comparison is a waste, I just would not build a case on it. Here, the university didn’t. It may have added to the case, but it was not the case. Adding also that the similarities between the faculty-created answers and the student’s — both are included in the article — are more compelling than I expected.

    Then you add detection software, which the article later shares showed high likelihood of AI text, and the case is pretty tight. Four professors, similar answers, AI detection flags — feels like a heavy case.

    Denied it.

    The article continues that Yang, the student:

    denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect. 

    Although it’s not specified, it is likely that Yang is referring to the research from Stanford that has been — or at least ought to be — entirely discredited (see Issue 216 and Issue 251). For the love of research integrity, the paper has invented citations — sources that go to papers or news coverage that are not at all related to what the paper says they are.

    Does anyone actually read those things?

    Back to Minnesota, Yang says that as a result of the findings against him and being removed from the program, he lost his American study visa. Yang called it “a death penalty.”

    With friends like these.

    Also interesting is that, according to the coverage:

    His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.” 

    That would be a fair point except that the next paragraph is:

    Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable. 

That’s ridiculous. I’m sorry, it is. The dude who lets students use AI because he thinks AI is “impossible to prevent or detect,” the guy who has never used ChatGPT himself, and thinks that Google Scholar and auto-correction are “comparable” to AI — that’s the person speaking up for the guy who says he did not use AI. Wow.

    That guy says:

    “I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”

    Time out. Is it not at least possible that professor Dowd thinks student Yang is an excellent student because Yang was using AI all along, and our professor doesn’t care to ascertain the difference? Also, mind you, as far as we can learn from this news story, Dowd does not even say Yang is innocent. He says the punishment is “odd,” that the case is hard to establish, and that Yang was a good student who did not need to use AI. Although, again, I’m not sure how good professor Dowd would know.

    As further evidence of Yang’s scholastic ability, Dowd also points out that Yang has a paper under consideration at a top academic journal.

    You know what I am going to say.

    To me, that entire Dowd diversion is mostly funny.

    More evidence.

    Back on track, we get even more detail, such as that the exam in question was:

    an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. 

    Exam graders argued the AI use was obvious enough. Yang disagrees. 

    Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples. 

    So, the exam expressly banned AI. And we learn that, as part of the determination of the professors, they compared his exam answers with past writing.

    I say all the time, there is no substitute for knowing your students. If the initial four faculty who flagged Yang’s work had him in classes and compared suspicious work to past work, what more can we want? It does not get much better than that.

    Then there’s even more evidence:

    Yang also objects to professors using AI detection software to make their case at the November hearing.  

    He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. 

    “Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.  

    First, GPTZero is junk. It’s reliably among the worst available detection systems. Even so, 89% is a high number. And most importantly, the case against Yang is not built on AI detection software alone, as no case should ever be. It’s confirmation, not conviction. Also, Yang, who the paper says already has one PhD, knows exactly what an accuracy rate of 99% means. Be serious.
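Yang’s rhetorical question about a 99% accuracy rate does have a concrete answer: it’s base-rate arithmetic. A minimal sketch, with every number assumed purely for illustration (none of them come from the case or from GPTZero):

```python
# Hypothetical illustration of what a "99% accurate" AI detector means in practice.
# All numbers are assumptions chosen for the arithmetic, not data from this case.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a flagged submission actually used AI (Bayes' rule)."""
    true_pos = sensitivity * base_rate          # AI text correctly flagged
    false_pos = (1 - specificity) * (1 - base_rate)  # human text wrongly flagged
    return true_pos / (true_pos + false_pos)

# Suppose the detector catches 99% of AI text (sensitivity), wrongly flags 1%
# of human text (99% specificity), and 10% of submissions actually use AI.
ppv = positive_predictive_value(0.99, 0.99, 0.10)
print(f"{ppv:.1%}")  # ~91.7% — roughly 1 in 12 flags would be a false positive
```

Even a detector that is right 99% of the time produces a meaningful share of false accusations when most students don’t cheat — which is exactly why no case should rest on a detector alone, and why this one didn’t.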

    A pattern.

    Then we get this, buried in the news coverage:

    Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored in his expulsion.  

    He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. 

    In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.”  She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.

    She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote. 

    I’m sorry — what now?

    Yang says he was accused of using AI in academic work in “at least three other instances.” For which he was, of course, not disciplined. In one of those cases, Yang literally turned in a paper with this:

    “re write it, make it more casual, like a foreign student write but no ai.” 

    He said he used ChatGPT to check his English and asked ChatGPT to shorten his writing. But he did not use AI. How does that work?

    For that one where he left in the prompts to ChatGPT:

    the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations. 

    Yang was warned, in writing.

If you’re still here, we have four professors who agree that Yang’s exam likely used AI, in violation of exam rules. All four had Yang in classes previously and compared his exam work to his past written work. His exam answers had similarities with ChatGPT output. An AI detector said, in at least one place, his exam was 89% likely to be generated with AI. Yang was accused of using AI in academic work at least three other times, by a fifth professor, including one case in which it appears he may have left in his instructions to the AI bot.

    On the other hand, he did say he did not do it.

    Findings, review.

    Further:

    But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.

    Several professors and students agreed with the accusations. Yang appealed and the school upheld the decision. Yang was gone. The appeal officer wrote:

    “PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.” 

    Slow clap.

    And slow clap for the University of Minnesota. The process is hard. Doing the review, examining the evidence, making an accusation — they are all hard. Sticking by it is hard too.

    Seriously, integrity is not a statement. It is action. Integrity is making the hard choice.

    MPR, spare me.

    Minnesota Public Radio is a credible news organization. Which makes it difficult to understand why they chose — as so many news outlets do — to not interview one single expert on academic integrity for a story about academic integrity. It’s downright baffling.

    Worse, MPR, for no specific reason whatsoever, decides to take prolonged shots at AI detection systems such as:

    Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. 

    “As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades. 

    We covered the OpenAI deception — and it was deception — in Issue 241, and in other issues. We covered the non-native English thing. And the neurodivergent thing. And the Grammarly thing. All of which MPR wraps up in the passive and deflecting “reports suggest.” No analysis. No skepticism.

    That’s just bad journalism.

    And, of course — anxiety. Rege, who please note has studied machine learning and not academic integrity, is predictable, but not credible here. He says, for example:

    it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions. 

    Absolute joke.

    I am not sorry — if you use the word “balance” in conjunction with the word “integrity,” you should not be teaching. Especially if what you’re weighing against lying and fraud is the value of embracing innovation. And if you needed further evidence for his absurdity, we get the “personal reflections and project-based learnings” buffoonery (see Issue 323). But, again, the error here is MPR quoting a professor of machine learning about course design and integrity.

    MPR also quotes a student who says:

    she and many other students live in fear of AI detection software.  

    “AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said. 

Nope. Please, please tell me I don’t need to go through all the reasons that’s absurd. Find me one single case in which an AI detector alone sent a student home. One.

    Two final bits.

    The MPR story shares:

    In the 2023-24 school year, the University of Minnesota found 188 students responsible of scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus. 

    Just noteworthy. Also, it is interesting that 188 were “responsible.” Considering how rare it is to be caught, and for formal processes to be initiated and upheld, 188 feels like a real number. Again, good for U of M.

    The MPR article wraps up that Yang:

found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career. 

    Stating the obvious, like the University of Minnesota, I could not bring myself to trust Yang’s data. And I do actually hope that being kicked out of a university for cheating would impact his academic career.

    And finally:

    “Probably I should think to do something, selling potatoes on the streets or something else,” he said. 

    Dude has a PhD in economics from Utah State University. Selling potatoes on the streets. Come on.
