  • 65 Percent of Students Use Gen AI Chat Bot Weekly

    A recent study from Tyton Partners finds that while large numbers of higher education stakeholders are engaging with generative AI tools, they still show a strong preference for in-person instruction, human-led support and skills-based learning over other trends.

    “It’s re-norming,” said Catherine Shaw, managing director of Tyton Partners. “People are figuring out how to adjust to this innovation that supports all the stakeholders in the ecosystem. [Generative AI] can be beneficial to learners, it can be beneficial to faculty and it can be beneficial to solution providers.”

    “Time for Class,” Tyton’s annual report on digital tools and student success, evaluated survey responses from students, administrators and faculty members over the past three years regarding generative AI and other innovations in higher education.

    This year’s report highlighted the value of in-person learning and face-to-face engagement for student success, as well as the ways faculty and staff can leverage tech tools to enhance the student experience.

    Methodology

    “Time for Class” is a longitudinal study of digital learning in U.S. higher education. This year’s survey was conducted in spring 2025 and includes responses from 1,500 students, more than 1,500 instructors and over 300 administrators. The students surveyed attend two- and four-year colleges and include working students, parenting students and dually enrolled high school students.

    In addition to asking about generative AI use, the survey collected data about digital courseware, ebooks and inclusive access, as well as changes to digital accessibility compliance requirements.

    Getting a grip on AI: The rise of generative artificial intelligence tools has soured students’ and faculty members’ perspectives on education, with each group accusing the other of using AI to cheat. In spite of a growing marketplace for digital tools and AI-assisted alternatives, the study found that both students and instructors prefer to engage in person and with other humans.

    Just under two-thirds of faculty and one-third of students surveyed indicated that face-to-face courses were their preferred method of teaching and learning, respectively. Compared to 2023 data, 16 percent more instructors indicated they prefer face-to-face teaching, and 32 percent more students said they wanted to learn in person.

    At the same time, preference for fully online courses fell among faculty from 16 percent in 2023 to 14 percent in 2025; for students it dropped from 30 percent in 2023 to 12 percent in 2025.

    Students were also less likely than a year ago to say they primarily turn to generative AI tools for help when they’re struggling in a course. A majority (84 percent) said they turn to people when they need help, while 17 percent said they use AI tools—a 13-percentage-point decrease from spring 2024 respondents.

    Researchers theorize this may be due to the difficulty students experience in prompting AI tools to help explain classroom concepts.

    “Understanding concepts, AI might not be the best for,” Shaw said. “Getting answers? AI might be able to help you with that. There’s a pretty striking difference there, and I think our learners are showing us they’re starting to understand that.”

    About one in three faculty members assume students turn first to AI tools for support: 29 percent of instructors think students prioritize help from generative AI, while 86 percent believe students turn to other people for help. Roughly two-thirds of students say they use a stand-alone generative AI tool like ChatGPT, and 30 percent say they use embedded courseware tools that incorporate generative AI.

    Instructors still lag in regular use of AI, with 30 percent of professors saying they use generative AI tools at least weekly, compared to 42 percent of students and 40 percent of administrators.

    The increased access to generative AI tools has not alleviated faculty workloads; half of faculty respondents said their workload has seen no change and 38 percent indicated AI is actually creating more work for them. The additional work includes monitoring cheating (71 percent) and creating assessments to counter student AI usage (61 percent). The only exception was among faculty who said they use generative AI tools very frequently or daily: One-third of those respondents said their workload has decreased.

    Immediately after the launch of ChatGPT, faculty and administrators at many institutions hurried to create policies about student use of generative AI and academic dishonesty. A May 2024 survey by Inside Higher Ed found that 31 percent of students said they weren’t clear on when they’re permitted to use generative AI in the classroom. As of spring 2025, only 28 percent of institutions had a formal policy on AI, while 32 percent said they’re still developing a policy, according to Tyton’s report.

    “Institutions are perhaps hesitant to set a central policy, because there’s so many ways this could be used to a student’s advantage and disadvantage, dependent on the field of study and the specific class, even,” Shaw said. “You want your guidance to be strong enough to be understood by everyone, but also with enough leeway that folks can feel free and have agency to modify as it makes sense for them.”

    While only 4 percent of administrators agreed that students’ generative AI literacy is currently measured as a learning outcome at their institution, 39 percent indicated it will be in the next three years.

    The human element: Despite students’ reported interest in working with others, the faculty surveyed indicated that student engagement is low and academic dishonesty is on the rise.

    Among instructors who teach introductory or developmental courses, 45 percent said their primary classroom challenge is preventing students from cheating. An additional 44 percent said student attendance was their greatest concern.

    When asked what hinders students’ success in the classroom, 70 percent of instructors said they have ineffective study skills and 47 percent said they lack prerequisites for their course. Faculty also saw students’ personal challenges, such as feeling anxious or overwhelmed (48 percent) or lacking motivation (38 percent), as barriers to their success. Many students agreed with their professors’ assessment; 32 percent of first-year students and 28 percent of continuing students said they lack motivation in the classroom.

    The lack of motivation could be tied to a lack of career connections in their academics, particularly for students in introductory or general education courses, Shaw said. But this challenge could also motivate students to come to class and engage with others so they don’t have to struggle alone, she added.

    “Perhaps the reason some students want more face-to-face interaction with their peers or with their instructor, it’s that feeling … of frustration or a lack of confidence … It’s easier when you are in person and you can see someone struggling,” Shaw said.

    Tyton’s survey asked faculty to rank different types of data they wish they had in their classroom to improve student outcomes, and the top response was “sentiment data” on students’ level of frustration or confidence (35 percent), followed by visibility into students’ grades in other courses during the term (23 percent). To Shaw, these responses suggest faculty are interested in seeing their students as whole people so they can better support them.


  • Chat Bot Passes College Engineering Class With Minimal Effort

    Since the release of ChatGPT in 2022, instructors have worried that students might circumvent learning by using the chat bot to complete homework and other assignments. In the years since, the large language model behind it has grown more capable, able to answer increasingly complex questions. But can it replace a student’s efforts entirely?

    Graduate students at the University of Illinois at Urbana-Champaign’s college of engineering put a large language model through an undergraduate aerospace engineering course to compare its performance with the average student’s work.

    The researchers, Gokul Puthumanaillam and Melkior Ornik, found that ChatGPT earned a passing grade in the course without much prompt engineering, but the chat bot didn’t demonstrate understanding or comprehension of high-level concepts. Their work illustrating its capabilities and limitations was published on the open-access platform arXiv, operated by Cornell Tech.

    The background: LLMs can tackle a variety of tasks, including creative writing and technical analysis, prompting concerns over students’ academic integrity in higher education.

    A significant number of students admit to using generative artificial intelligence to complete their course assignments (and professors admit to using generative AI to give feedback, create course materials and grade academic work). According to a 2024 survey from Wiley, most students say it’s become easier to cheat, thanks to AI.

    The researchers sought to understand how a student who invests minimal effort and offloads work to ChatGPT would perform in a course.

    The evaluated class, Aerospace Control Systems, which was offered in fall 2024, is a required junior-level course for aerospace engineering students. During the term, students submit approximately 115 deliverables, including homework problems, two midterm exams and three programming projects.

    “The course structure emphasizes progressive complexity in both theoretical understanding and practical application,” the research authors wrote in their paper.

    They copied and pasted questions or uploaded screenshots of questions into a free version of the chat bot without additional guidance, mimicking a student who is investing minimal time in their coursework.

    The results: At the end of the term, ChatGPT achieved a B grade (82.2 percent), slightly below the class average of 85 percent. But it didn’t excel at all assignment types.

    On practice problems, the LLM earned a 90.4 percent average (compared to the class average of 91.4 percent), performing the best on multiple-choice questions. ChatGPT received a higher exam average (89.7 percent) compared to the class (84.8 percent), but it faltered much more on the written sections than on the autograded components.

    ChatGPT demonstrated its worst performance on the programming projects. While it showed sound mathematical reasoning on theoretical questions, the model’s explanations were rigid and template-like, failing to adapt to the specific nuances of each problem, the researchers wrote. It also produced inefficient or overly complex programming solutions, lacking “the optimization and robustness considerations that characterize high-quality student submissions,” according to the article.

    The findings demonstrate that AI is capable of passing a rigorous undergraduate course, but that the LLM relied on pattern recognition rather than deep understanding. The results also suggested to the researchers that well-designed coursework can still evaluate students’ genuine capabilities in engineering.

    So what? Based on their findings, the researchers recommend that faculty members integrate project work and open-ended design challenges to evaluate students’ understanding and technical capabilities, particularly in synthesizing information and making practical judgments.

    In the same vein, they suggested that faculty design questions that evaluate human expertise by requiring students to explain their rationale or justify their responses, rather than just arrive at the correct answer.

    ChatGPT also struggled with system integration, robustness and optimization beyond basic implementation, so assessments that focus on these requirements would provide better evaluation metrics.

    Researchers also noted that because ChatGPT is capable of answering practice problems, instruction should focus less on routine technical work and more on higher-level engineering concepts and problem-solving skills. “The challenge ahead lies not in preventing AI use, but in developing educational approaches that leverage these tools while continuing to cultivate genuine engineering expertise,” researchers wrote.

  • Education Department mulls using AI chat bot for FAFSA help

    The Education Department is considering terminating its contracts for thousands of call center employees hired to answer families’ questions about federal student aid, and may replace them with an artificial intelligence–powered chat bot, The New York Times reported Thursday.

    Elon Musk’s Department of Government Efficiency apparently suggested the move, the Times reported, as part of a broader effort to reduce federal spending—which has already led to dozens if not hundreds of layoffs at the Education Department and the cancellation of hundreds of millions of dollars in contracts at the Institute of Education Sciences.

    The call centers employ 1,625 people who answer more than 15,000 calls per day, according to an Education Department report. The department greatly increased staffing at its call centers after last year’s bungled launch of the new FAFSA led to an overwhelming influx of calls.

    Last September, a Government Accountability Office investigation found that in the first five months of the rollout, three-quarters of calls went unanswered. Last summer, the department hired 700 new agents to staff the lines and had planned to add another 225 after the launch of the 2025–26 FAFSA in November.

    One of the helplines DOGE is closely scrutinizing, according to the Times, is operated by the consulting firm Accenture. Accenture also operates the studentaid.gov website, which houses the online platform for the Free Application for Federal Student Aid. The department’s contract with the firm expires Feb. 19. According to sources in the Education Department who spoke with Inside Higher Ed, the department is considering significant reductions to its Accenture contract ahead of its renewal.
