Students don’t have the same incentives to talk to their professors — or even their classmates — anymore. Chatbots like ChatGPT, Gemini and Claude have given them a new path to self-sufficiency. Instead of asking a professor for help on a paper topic, students can go to a chatbot. Instead of forming a study group, students can ask AI for help. These chatbots give them quick responses, on their own timeline.
For students juggling school, work and family responsibilities, that ease can seem like a lifesaver. And maybe turning to a chatbot for homework help here and there isn’t such a big deal in isolation. But every time a student decides to ask a question of a chatbot instead of a professor or peer or tutor, that’s one fewer opportunity to build or strengthen a relationship, and the human connections students make on campus are among the most important benefits of college.
Julia Freeland-Fisher studies how technology can help or hinder student success at the Clayton Christensen Institute. She said the consequences of turning to chatbots for help can compound.
“Over time, that means students have fewer and fewer people in their corner who can help them in other moments of struggle, who can help them in ways a bot might not be capable of,” she said.
As colleges further embed ChatGPT and other chatbots into campus life, Freeland-Fisher warns that lost relationships may become a devastating unintended consequence.
Asking for help
Christian Alba said he has never turned in an AI-written assignment. Alba, 20, attends College of the Canyons, a large community college north of Los Angeles, where he is studying business and history. And while he hasn’t asked ChatGPT to write any papers for him, he has turned to the technology when a blank page and a blinking cursor seemed overwhelming. He has asked for an outline. He has asked for ideas to get him started on an introduction. He has asked for advice about what to prioritize first.
“It’s kind of hard to just start something fresh off your mind,” Alba said. “I won’t lie. It’s a helpful tool.” Alba has wondered, though, whether turning to ChatGPT with these sorts of questions represents an overreliance on AI. But Alba, like many others in higher education, worries primarily about AI use as it relates to academic integrity, not social capital. And that’s a problem.
Jean Rhodes, a psychology professor at the University of Massachusetts Boston, has spent decades studying the way college students seek help on campus and how the relationships formed during those interactions end up benefitting the students long-term. Rhodes doesn’t begrudge students integrating chatbots into their workflows, as many of their professors have, but she worries that students will get inferior answers to even simple-sounding questions, like, “how do I change my major?”
A chatbot might point a student to the registrar’s office, Rhodes said, but had a student asked the question of an advisor, that person may have asked important follow-up questions — why the student wants the change, for example, which could lead to a deeper conversation about a student’s goals and roadblocks.
“We understand the broader context of students’ lives,” Rhodes said. “They’re smart but they’re not wise, these tools.”
Rhodes and one of her former doctoral students, Sarah Schwartz, created a program called Connected Scholars to help students understand why it’s valuable to talk to professors and have mentors. The program helped them hone their networking skills and understand what people get out of their networks over the course of their lives — namely, social capital.
Connected Scholars is offered as a semester-long course at UMass Boston, and a forthcoming paper examining outcomes over the last decade finds that students who take the course are three times more likely to graduate. Over time, Rhodes and her colleagues discovered that the key to the program’s success is getting students past an aversion to asking others for help.
Students will make a plethora of excuses to avoid asking for help, Rhodes said, ticking off a list of them: “‘I don’t want to stand out,’ ‘I don’t want people to realize I don’t fit in here,’ ‘My culture values independence,’ ‘I shouldn’t reach out,’ ‘I’ll get anxious,’ ‘This person won’t respond.’ If you can get past that and get them to recognize the value of reaching out, it’s pretty amazing what happens.”
Connections are key
Seeking human help doesn’t just leave students with the resolution to a single problem; it gives them a connection to another person. That person could, down the line, become a friend, a mentor or a business partner — a “strong tie,” as social scientists call the contacts central to a person’s network. Or they could become a “weak tie,” someone a student may not see often but who could, importantly, still offer a job lead or crucial social support one day.
Daniel Chambliss, a retired sociologist from Hamilton College, emphasized the value of relationships in his 2014 book, “How College Works,” co-authored with Christopher Takacs. Over the course of their research, the pair found that the key to a successful college experience boiled down to relationships, specifically two or three close friends and one or two trusted adults. Hamilton College goes out of its way to make sure students can form those relationships, structuring work-study to get students into campus offices and around faculty and staff, making room for students of varying athletic abilities on sports teams, and more.
Chambliss worries that AI-driven chatbots make it too easy to avoid interactions that can lead to important relationships. “We’re suffering epidemic levels of loneliness in America,” he said. “It’s a really major problem, historically speaking. It’s very unusual, and it’s profoundly bad for people.”
As students increasingly turn to artificial intelligence for help and even casual conversation, Chambliss predicted it will make people even more isolated: “It’s one more place where they won’t have a personal relationship.”
In fact, a recent study by researchers at the MIT Media Lab and OpenAI found that the most frequent users of ChatGPT — power users — were more likely to be lonely and isolated from human interaction.
“What scares me about that is that Big Tech would like all of us to be power users,” said Freeland-Fisher. “That’s in the fabric of the business model of a technology company.”
Yesenia Pacheco is preparing to re-enroll in Long Beach City College for her final semester after more than a year off. Last time she was on campus, ChatGPT existed, but it wasn’t widely used. She knows she’s returning to a college where ChatGPT is deeply embedded in the lives of students, faculty and staff, but Pacheco expects she’ll go back to her old habits — going to her professors’ office hours and sticking around after class to ask them questions. She sees the value.
She understands why others might not. Today’s high schoolers, she has noticed, are not used to talking to adults or building mentor-style relationships. At 24, she knows why they matter.
“A chatbot,” she said, “isn’t going to give you a letter of recommendation.”
In telling a story across different platforms, the important thing is to think about whom you’re telling the story to. Imagine talking to them in person. You wouldn’t drone on with facts and data; you would get to what your story is really about.
The great thing is that in publishing across platforms through different types of media, you don’t need fancy equipment or fancy sound or video editing techniques.
Instead, the people who know how to do all that often go out of their way to make things look more raw, because raw looks more authentic and authentic is what many media consumers value.
You can even use an AI program to help you create images, but make sure you tell your audience that you did that. In telling true stories you don’t want to mislead or misinform.
Agencies in at least 28 states and the District of Columbia have issued guidance on the use of artificial intelligence in K-12 schools.
More than half of the states have created school policies to define artificial intelligence, develop best practices for using AI systems and more, according to a report from AI for Education, an advocacy group that provides AI literacy training for educators.
Despite efforts by the Trump administration to loosen federal and state AI rules in hopes of boosting innovation, teachers and students need a lot of state-level guidance for navigating the fast-moving technology, said Amanda Bickerstaff, the CEO and co-founder of AI for Education.
“What most people think about when it comes to AI adoption in the schools is academic integrity,” she said. “One of the biggest concerns that we’ve seen — and one of the reasons why there’s been a push towards AI guidance, both at the district and state level — is to provide some safety guidelines around responsible use and to create opportunities for people to know what is appropriate.”
North Carolina, which last year became one of the first states to issue AI guidance for schools, set out to study and define generative artificial intelligence for potential uses in the classroom. The policy also includes resources for students and teachers interested in learning how to interact with AI models successfully.
In addition to classroom guidance, some states emphasize ethical considerations for certain AI models. Following Georgia’s initial framework in January, the state shared additional guidance in June outlining ethical principles educators should consider before adopting the technology.
In the absence of regulations at the federal level, states are filling a critical gap, said Maddy Dwyer, a policy analyst for the Equity in Civic Technology team at the Center for Democracy & Technology, a nonprofit working to advance civil rights in the digital age.
While most state AI guidance for schools focuses on the potential benefits, risks and need for human oversight, Dwyer wrote in a recent blog post that many of the frameworks omit critical AI topics, such as community engagement and deepfakes, or manipulated photos and videos.
“I think that states being able to fill the gap that is currently there is a critical piece to making sure that the use of AI is serving kids and their needs, and enhancing their educational experiences rather than detracting from them,” she said.
I see many students viewing artificial intelligence as humanlike simply because it can write essays, do complex math or answer questions. AI can mimic human behavior but lacks meaningful engagement with the world.
This disconnect inspired my course “Art and Generative AI,” which was shaped by the ideas of 20th-century German philosopher Martin Heidegger. His work highlights how we are deeply connected and present in the world. We find meaning through action, care and relationships. Human creativity and mastery come from this intuitive connection with the world. Modern AI, by contrast, simulates intelligence by processing symbols and patterns without understanding or care.
In this course, we reject the illusion that machines fully master everything, and we put student expression first. In doing so, we value uncertainty, mistakes and imperfection as essential to the creative process.
This vision expands beyond the classroom. In the 2025-26 academic year, the course will include a new community-based learning collaboration with Atlanta’s art communities. Local artists will co-teach with me to integrate artistic practice and AI.
The course builds on my 2018 class, Art and Geometry, which I co-taught with local artists. That class explored Picasso’s cubism, which depicted reality as fractured and seen from multiple perspectives at once; it also looked at Einstein’s relativity, the idea that time and space are not absolute and distinct but part of the same fabric.
What does the course explore?
We begin with exploring the first mathematical model of a neuron, the perceptron. Then, we study the Hopfield network, which mimics how our brain can remember a song from just listening to a few notes by filling in the rest. Next, we look at Hinton’s Boltzmann Machine, a generative model that can also imagine and create new, similar songs. Finally, we study today’s deep neural networks and transformers, AI models that mimic how the brain learns to recognize images, speech or text. Transformers are especially well suited for understanding sentences and conversations, and they power technologies such as ChatGPT.
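To make that starting point concrete, here is a minimal perceptron sketch in Python. The AND-gate data, learning rate and epoch count are illustrative choices for this example, not material from the course itself.

```python
# A minimal perceptron: a single neuron with a step activation, trained with
# the classic perceptron update rule on the logical AND function.
# The dataset, learning rate and epoch count are illustrative choices.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND targets

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else 0   # step activation
        error = target - pred
        w += lr * error * xi                       # perceptron update rule
        b += lr * error

print("learned weights:", w, "bias:", b)
print("predictions:", [1 if np.dot(w, xi) + b > 0 else 0 for xi in X])
```

Because AND is linearly separable, the update rule settles on a small set of working weights; that simplicity is what makes the perceptron a natural first stop before the Hopfield network and the later models.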
In addition to AI, we integrate artistic practice into the coursework. This approach broadens students’ perspectives on science and engineering through the lens of an artist. The first offering of the course in spring 2025 was co-taught with Mark Leibert, an artist and professor of the practice at Georgia Tech. His expertise is in art, AI and digital technologies. He taught students fundamentals of various artistic media, including charcoal drawing and oil painting. Students used these principles to create art using AI ethically and creatively. They critically examined the source of training data and ensured that their work respects authorship and originality.
Students also learn to record brain activity using electroencephalography – EEG – headsets. Through AI models, they then learn to transform neural signals into music, images and storytelling. This work inspired performances where dancers improvised in response to AI-generated music.
The Improv AI performance at Georgia Institute of Technology on April 15, 2025. Dancers improvised to music generated by AI from brain waves and sonified black hole data.
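The course’s actual EEG-to-music pipeline isn’t spelled out here, but the general idea of extracting a feature from a brain signal and mapping it to sound can be sketched at a toy level. Everything in the example below, from the synthetic signal to the alpha-band feature and the pitch mapping, is an invented illustration rather than the course’s method.

```python
# Hypothetical sketch: map the alpha-band power of a synthetic "EEG" signal
# to a MIDI-style pitch. The signal, band and mapping are invented for
# illustration only.
import numpy as np

fs = 256                       # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)  # two seconds of signal

# Synthetic "EEG": a 10 Hz alpha oscillation plus noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Estimate band power with a simple FFT.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
alpha_power = spectrum[(freqs >= 8) & (freqs <= 12)].mean()
broadband_power = spectrum[(freqs >= 1) & (freqs <= 40)].mean()

# Map relative alpha power onto one octave, C4 (MIDI 60) to C5 (MIDI 72).
ratio = min(alpha_power / (broadband_power + 1e-9), 10.0) / 10.0
pitch = int(60 + round(12 * ratio))
print(f"relative alpha power maps to MIDI pitch {pitch}")
```

In a live setting, a stream of such features could drive notes, tempo or timbre in real time, which is the spirit of the improvised performances described above.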
Why is this course relevant now?
AI entered our lives so rapidly that many people don’t fully grasp how it works, why it works, when it fails or what its mission is.
In creating this course, I aimed to empower students by filling that gap. Whether they are new to AI or not, the goal is to make its inner algorithms clear, approachable and honest. We focus on what these tools actually do and how they can go wrong.
We place students and their creativity first. We reject the illusion of a perfect machine; instead, we deliberately provoke the AI algorithm to get confused and hallucinate, that is, to generate inaccurate or nonsensical responses. To do so, we use a small dataset, reduce the model size or limit training. It is in these flawed states of AI that students step in as conscious co-creators. The students are the missing algorithm that takes back control of the creative process. Their creations do not obey AI but reimagine it by the human hand. The artwork is rescued from automation.
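As a rough stand-in for what starving a model looks like in practice, the sketch below trains a tiny character-level Markov generator (not the course’s neural networks) on a deliberately tiny corpus. With so little data, its output quickly degrades into near-nonsense, which is exactly the kind of flawed state a human co-creator can step into. The corpus and parameters are invented for illustration.

```python
# Illustrative stand-in for "provoking hallucination": a character-level Markov
# generator trained on a deliberately tiny corpus. Starved of data, it soon
# produces garbled output; this is a toy analogue, not the course's models.
import random
from collections import defaultdict

corpus = "art and ai meet where humans still choose"   # deliberately tiny dataset
order = 2                                               # very small "model"

# "Train": record which character follows each two-character context.
transitions = defaultdict(list)
for i in range(len(corpus) - order):
    transitions[corpus[i:i + order]].append(corpus[i + order])

# "Generate": sample characters; once the model hits a context it never saw,
# it falls back to random letters and the text dissolves into nonsense.
random.seed(1)
out = corpus[:order]
for _ in range(60):
    nxt = transitions.get(out[-order:])
    out += random.choice(nxt) if nxt else random.choice("abcdefghijklmnopqrstuvwxyz ")
print(out)
```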
What’s a critical lesson from the course?
Students learn to recognize AI’s limitations and harness its failures to reclaim creative authorship. The artwork isn’t generated by AI, but it’s reimagined by students.
Students learn that chatbot queries have an environmental cost because large AI models use a lot of power. They avoid unnecessary iterations when designing prompts or using AI, which helps reduce carbon emissions.
The Improv AI performance on April 15, 2025, featured dancer Bekah Crosby responding to AI-generated music from brain waves.
The course prepares students to think like artists. Through abstraction and imagination they gain the confidence to tackle the engineering challenges of the 21st century. These include protecting the environment, building resilient cities and improving health.
Students also realize that while AI has vast engineering and scientific applications, ethical implementation is crucial. Understanding the type and quality of training data that AI uses is essential. Without it, AI systems risk producing biased or flawed predictions.
Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.
We all began learning about artificial intelligence in different eras, formally or informally, so we each bring different but valuable definitions of and perspectives on AI as it pertains to teaching and learning.
Teaching and learning with AI
When I was a student in elementary school during the 1980s, AI was using a calculator rather than longhand or a traditional adding machine for arithmetic. Additionally, the entire school only had six computers for student use, which were housed in the library. Only students acting responsibly earned access to time on these devices to play The Oregon Trail, “an educational game that simulates the hardships of [1848] …”
With all of this in mind, AI has been teaching us, and we’ve been learning from it, for quite some time, in and out of school.
However, with the advancement of generative AI, the implications for teaching and learning now have more to do with academic integrity, and academic dishonesty policies distinguishing original work from AI-generated work have entered the conversation. This is where MindTap features like Turnitin can be applied to help monitor students’ acceptable and ethical use of AI in English composition courses.
My conversation with students
My students may engage in conversations about acceptable, ethical uses of GenAI and academic integrity before they even enroll in my courses, because I post the policies in my syllabus. Students learn that there is a monitoring system in place in MindTap for English by Turnitin. Once students are enrolled in MindTap, we discuss these policies at length, in both online and face-to-face modalities. The policies are also copied into each of the writing assignments in MindTap. Our focus is on ethics, or academic integrity, to ensure students’ coursework is original. The goal is to provide valuable feedback, information and resources so that students learn and progress rather than simply chase a grade.
Since students cannot demonstrate learning and mastery of the learning outcomes unless their work is original, I discuss this with them, and I note in each assignment that they should not submit any words that are not their own. MindTap provides me with access to Turnitin to monitor academic integrity.
Suggestions for monitoring
To help monitor students’ use of AI, parameters should be set in MindTap for English with Turnitin. For example, students need to submit more than 300 words for the detector to run. Once students submit work, the detector generates an originality report, which can be downloaded to give the instructor and learner feedback on the percentage of the submission flagged as AI-generated or plagiarized.
Screenshot: the inbox where the similarity percentage can be viewed and clicked for expanded, detailed information.
The report highlights, directly on the student’s document, the passages where originality is in question. Some instructors set percentage parameters as well, instructing students that no more than 15% of a submission can be flagged by the detector in MindTap. Clicking on what the detector has highlighted shows the possible source the information may have been taken from, or simply indicates that AI may have been used. Note: this is just a monitoring system, so please be mindful that the report is a tool instructors can use to have conversations with their students. We cannot accuse students of academic dishonesty based on a report alone.
Screenshot: the match overview with all flagged text, which may indicate plagiarism or AI use; each match can be clicked and expanded to show the possible original source.
MindTap’s monitoring system has always been correct for me, but conversations are still beneficial for assurance. I use this monitoring document for every submission in MindTap.
The big picture to consider
AI can be used ethically as a tool for teaching and learning, bridging student learning gaps and strengthening their mastery of skills. However, when it comes to academic integrity, the concern is that GenAI is being used not as an aid, but as a tool devoid of the values of teaching and learning. According to Cengage’s recent research, 82% of instructors expressed concern specifically about AI and academic integrity. Setting policies and parameters with clear definitions and having conversations with students is essential to my ability to monitor my students’ acceptable use of AI.
Do you use AI in your English composition classroom? Reach out to discuss the ways you’re utilizing AI as an ethical tool to advance teaching and learning.
Written by Faye Pelosi, Professor in the Communications Department at Palm Beach State College and Cengage Faculty Partner.
Stay tuned for Professor Pelosi’s upcoming video demo of how she uses the MindTap Turnitin feature in her English course.
I recently attended a student panel on use of AI in college classes. The three panelists shared their perspectives, borne out in an April 2025 Gallup survey: they want to learn to use AI tools in appropriate ways, they’re anxious about the potential negative impact on their critical thinking skills, and they want explicit guidance from their faculty members—which they’re not consistently getting.
Even, or especially, in an AI age, we know our students need to think critically and learn authentically to be productive and engaged citizens and future employees. We recognize we need to update assignments to promote academic integrity in an AI age, yet many of us feel unprepared to do so. The crux of the problem is this: how do we foster critical thinking and authentic learning when it’s so easy for students to outsource their cognitive work?
One promising solution to the triple challenge of fostering critical thinking, meaningful learning, and academic integrity is to double down on transparency. We can provide the guidance students want, embed analysis and evaluation into our assignments to get at that all-important critical thinking, and nudge students toward integrity. How? By embracing transparency.
Let’s detail our expectations for responsible AI use. Let’s emphasize the why: why we want students to use their own cognitive abilities for some tasks, why using AI could be helpful at times, and why we’ve crafted our AI-integrated assignments in the ways we have. Finally, let’s ask students to disclose their use in full transparency (and reflect on their learning, as Marc Watkins argues here).
In short, we can promote integrity when assessments are meaningful, purposeful, and include transparent instruction on appropriate AI use. We can teach students to critically analyze AI-assisted work and promote authentic learning as we do. To help us achieve these aims, here are five steps to update assignments in ways that help students learn the skills they need while fostering a culture of academic honesty.
Step 1: Take a critical look at your current syllabus.
If you’re like me, you have tried-and-true activities and assignments that you’ve relied on for many years. Some of these may be updateable, but others may just need to go. If AI can easily complete a task (try running your instructions through a tool like ChatGPT to find out), maybe it’s no longer a relevant measure of authentic learning. Also, following the remaining steps below will likely mean that you add new instructional practices (like modelling AI use) and new components of the assignments you update (like a reflection on how AI helped or hindered learning). Make time for these important new course elements by getting rid of outdated tasks.
Step 2: Consider whether and how students should use AI on the assignments you keep.
Remember, students want to know exactly what AI use is appropriate in your class. A helpful tool for this process is the five-level AI Assessment Scale (AIAS). The levels are No AI, AI Planning, AI Collaboration, Full AI and AI Exploration. Each one identifies and sanctions different ways students can use AI appropriately and meaningfully to support their learning.
Leon Furze, one of the creators of the AIAS, presents it as a tool that helps us update our assignments and start conversations with our students. He recommends using the framework to guide your thinking: break down each assignment, project or discussion board activity into discrete steps, then decide for which parts of the process AI use is appropriate. Providing transparent guidance, guidance that may vary for different parts of the project, is what students need right now.
Step 3: Carve out time to discuss and model your expectations.
Many students are not sure what is acceptable in this current moment. What better way to help them feel confident while developing the AI skills they need than modelling what you’re looking for? Take class time or record a video for your online class to teach your students what you expect them to do with AI for each assignment, what not to do, and what you’ll be looking for in their finished product.
As you approach this aspect of instruction, take heart. I don’t think every instructor needs to be an AI expert, but I do think we’re doing our students a disservice if we’re not teaching them how to be AI Whisperers, a concept I gleaned from Ethan Mollick’s 2024 book, Co-Intelligence. A new kind of expertise is emerging, he argues: the skilled and effective use of AI in every field. As disciplinary experts, you know what kind of AI use will help students learn your material and what will disempower them as learners. Let’s allocate time to help students develop AI literacy while maintaining their agency.
Step 4: Ask students to disclose their AI use, as noted above.
The student panelists showed palpable relief when they described instructors who simply had them cite their use of AI. “Encourage your students to be transparent about their AI use,” one student, a finance major, said. As she talked about how simple it was to add a line at the end of the assignment such as, “ChatGPT generated the outline for this essay,” the anxiety that had previously clouded her expression disappeared.
One approach is to use the AI Disclosure (AID) framework to document how students used AI; students can also add an appendix to each assignment, or add comments or footnotes, to make transparent what they wrote and what AI wrote. Get students to declare their AI use and, even better, ask them to reflect on whether that use helped or hindered their learning.
Step 5: If you suspect inappropriate use of AI, don’t accuse students of cheating.
Instead, have a conversation with them. Recall that a primary goal of the AIAS is to facilitate discussions about AI use. Imagine that as a class, you agreed that Level 2 (AI Planning) was appropriate for a particular project. If a student submits work that seems to have been completed at Level 4 (Full AI), refer back to the framework to facilitate a conversation about your perception of their work and their process. As AIAS co-creator Leon Furze points out, now you can get away from wondering whether a student cheated with AI and instead ask, “can I make a valid judgement of this student’s capabilities [whether they have used AI or not]?”
And as argued by the authors of a new book titled The Opposite of Cheating: Teaching for Integrity in the Age of AI, this takes moral accusation out of the conversation, reduces the adversarial tone, and enables students to reflect on and hopefully learn from the experience. You can also decide together what should happen next: whether the student resubmits the assignment in accordance with your AI-usage guidance, whether they lose the opportunity to earn a grade for that assignment, or whatever else you and your student decide is appropriate in that situation.
Students want to know how to use AI well. “If we don’t teach students how to use AI responsibly, they’re going to be poorly equipped for the workforce,” said the third student panelist, a computer science major. Let’s update our assignments to facilitate meaningful learning of our content, the skills students need in an AI age, and ethical, integrous use of AI to equip them for the workplace they’ll enter. In doing so, we’re helping to develop engaged citizens who think critically about information they encounter. I can’t think of more important work to be engaged in at this moment in time.
Flower Darby is an associate director of the Teaching for Learning Center at the University of Missouri, a co-author of The Norton Guide to Equity-Minded Teaching (2023), and the lead author, with James M. Lang, of Small Teaching Online: Applying Learning Science in Online Classes (2019).
References
Bertram Gallant, Tricia, and David A. Rettinger. 2025. The Opposite of Cheating: Teaching for Integrity in the Age of AI. University of Oklahoma Press.
Nearly two-thirds of teachers utilized artificial intelligence this past school year, and weekly users saved almost six hours of work per week, according to a recently released Gallup survey. But 28% of teachers still oppose AI tools in the classroom.
The poll, published by the research firm and the Walton Family Foundation, includes perspectives from 2,232 U.S. public school teachers.
“[The results] reflect a keen understanding on the part of teachers that this is a technology that is here, and it’s here to stay,” said Zach Hrynowski, a Gallup research director. “It’s never going to mean that students are always going to be taught by artificial intelligence and teachers are going to take a backseat. But I do like that they’re testing the waters and seeing how they can start integrating it and augmenting their teaching activities rather than replacing them.”
At least once a month, 37% of educators use AI tools to prepare to teach, including creating worksheets, modifying materials to meet student needs, doing administrative work and making assessments, the survey found. Less common uses include grading, providing one-on-one instruction and analyzing student data.
A 2023 study from the RAND Corp. found the most common AI tools used by teachers include virtual learning platforms, like Google Classroom, and adaptive learning systems, like i-Ready or the Khan Academy. Educators also used chatbots, automated grading tools and lesson plan generators.
Most teachers who use AI tools say they help improve the quality of their work, according to the Gallup survey. About 61% said they receive better insights about student learning or achievement data, while 57% said the tools help improve their grading and student feedback.
Nearly 60% of teachers agreed that AI improves the accessibility of learning materials for students with disabilities. For example, some kids use text-to-speech devices or translators.
Teachers in the Gallup survey were more likely to agree on AI’s risks for students than on its opportunities. Roughly a third said students using AI tools weekly would see increases in their grades, motivation, preparation for future jobs and engagement in class. But 57% said it would decrease students’ independent thinking, and 52% said it would decrease critical thinking. Nearly half said it would decrease student persistence in solving problems, ability to build meaningful relationships and resilience for overcoming challenges.
In 2023, the U.S. Department of Education published a report recommending the creation of standards to govern the use of AI.
“Educators recognize that AI can automatically produce output that is inappropriate or wrong. They are well-aware of ‘teachable moments’ that a human teacher can address but are undetected or misunderstood by AI models,” the report said. “Everyone in education has a responsibility to harness the good to serve educational priorities while also protecting against the dangers that may arise as a result of AI being integrated in ed tech.”
Hrynowski said teachers are seeking guidance from their schools about how they can use AI. While many are getting used to setting boundaries for their students, they don’t know in what capacity they can use AI tools to improve their jobs.
The survey found that 19% of teachers are employed at schools with an AI policy. During the 2024-25 school year, 68% of those surveyed said they didn’t receive training on how to use AI tools. Roughly half of them taught themselves how to use it.
“There aren’t very many buildings or districts that are giving really clear instructions, and we kind of see that hindering the adoption and use among both students and teachers,” Hrynowski said. “We probably need to start looking at having a more systematic approach to laying down the ground rules and establishing where you can, can’t, should or should not use AI in the classroom.”
Disclosure: Walton Family Foundation provides financial support to The 74.
As the world of higher ed continues to evolve at lightning speed, many students are understandably feeling some pressure to keep up. And that’s having a significant impact on the way they’re operating day-to-day in and out of the classroom. Over the past few years, faculty have reported noticeable changes in student expectations, with 49% recently telling us that the need to adapt to those expectations is a top challenge.
So, what’s shifting, and how can faculty better adapt to meet their students where they are without going overboard? Let’s examine two examples of how needs and expectations are changing: AI use and deadline extension requests.
The line between responsible AI use and cheating is fuzzy for some students
Last year, 46% of those we surveyed in our annual Faces of Faculty report named combating cheating and plagiarism as a top challenge, down only slightly from 49% in 2023. And as AI becomes a bigger, more integral part of the higher ed experience, it’s growing increasingly difficult for many students to distinguish between responsible AI use and academic dishonesty.
Forty-two percent of faculty we surveyed say they see significant or severe ethical and legal risks associated with generative AI in education, with 82% of instructors expressing concern specifically about AI and academic integrity. While today’s students expect AI to play some kind of role in the learning process, many stand on shaky ground when it comes to applying it ethically in their coursework.
Many faculty have told us that they like to set clear expectations around AI use at the beginning of the semester or term, either verbally or within their syllabus. By doing so, they can give students clear-cut guidance on how to approach their coursework.
Faculty are finding that the more they know about AI, the better they can safeguard assignments from potential overreliance. One educator from Missouri told us, “I am learning more about AI and AI detection this year, and am making quite a few adjustments to my assignments so they are more personal and reflective, rather than AI-tempting assignments.”
Using anti-cheating software has become a popular method for instructors, with many turning to online plagiarism detection tools like Turnitin or “locked down” browsers.
“Spending a lot more time and effort identifying and using reliable plagiarism detection software, especially AI detectors.” – Faculty member
Extension requests: pushing deadlines and boundaries
Another example of changing student needs is the growing expectation from students that their extension requests will be granted. But this has left many instructors feeling overwhelmed, not only by the number of requests to keep track of, but by a rising uncertainty over which requests are based on legitimate reasons. This may very well be a contributing factor for 35% of faculty who cited perceived dishonesty and lack of accountability from students as a top driver of dissatisfaction in 2024.
An adjunct professor from Virginia told us, “I leaned into adapting to students’ expectations, but this became somewhat unmanageable when teaching multiple courses. I am also concerned with setting a precedent for future students in my courses if current students share that accommodations are easily given.”
How instructors are responding
Despite the challenges that this shift presents, many instructors are jumping in to accommodate extension requests from students, offering both patience and a generally high level of understanding. Faculty acknowledge that today’s students have a lot to contend with — from financial stressors to academic and social pressures — and they’re prepared to flex to meet those challenges.
“I became more flexible. I get annoyed by professors my age who ignore the fact that today’s students are under ten times the pressure we were when we were undergraduates. Some of these students are carrying a full course load while working two jobs.” – Other professor role/lecturer/course instructor, Ontario
While they’re empathic to students’ evolving needs, instructors are ready to set their own boundaries when necessary. One faculty member told us, “For the most part, I held firm in my deadlines. I did however increase the number of reminders I sent.”
Regardless of the approach, clear communication with students remains at the heart of how faculty are dealing with this particular shift. Another instructor said, “I look at the individual situation and adapt…I remind students to complete items early to avoid unexpected delays. If there is a technology issue, then I will extend if it is communicated timely.”
We’re happy to see our faculty skillfully weaving through these obstacles while remaining committed to adapting to new student expectations.
To get a full picture about what 1,355 surveyed U.S. and Canadian faculty had to say about changes in student expectations, read our 2024 Faces of Faculty report.