Category: AI

  • How is artificial intelligence actually being used in higher education?

    How is artificial intelligence actually being used in higher education?

    With a wide range of applications, including streamlining administrative tasks and tailoring learning experiences, AI is being used in innovative ways to enhance higher education.

    Course design and content preparation

    AI tools are changing the way academic staff approach course design and content preparation. By leveraging AI, lecturers can quickly generate comprehensive plans, create engaging sessions, and develop quizzes and assignments.

    For instance, tools like Blackboard Ultra can create detailed course plans and provide suggestions for content organisation and course layout. They can produce course materials in a fraction of the time it would traditionally take and suggest interactive elements that could increase student engagement.

    AI tools excel at aligning resources with learning outcomes and institutional policies. This not only saves time but also allows lecturers to focus more on delivering high-quality instruction and engaging with students.

    Enhancing learning experience

AI and virtual reality (VR) scenarios and gamified environments are offering students unique, engaging learning experiences that go beyond traditional lectures. Tools like Bodyswaps use VR to simulate realistic scenarios for practising soft and technical skills safely. These immersive, gamified environments enhance learning by engaging students in risk-free, real-world challenges and providing instant feedback, helping them learn and adjust more effectively.

    Self-tailored learning

    AI also plays a role in supporting students to tailor learning materials to meet their individual and diverse needs. Tools like Jamworks can enhance student interaction with lecture content by converting recordings into organised notes and interactive study materials, such as flashcards.

Similarly, NotebookLM offers flexibility in how students engage with their courses by enabling them to generate content in their preferred form, such as briefing documents or podcasts, or to take a more conversational approach. These tools empower students to take control of their learning processes, making education more aligned with their individual learning habits and preferences.

    Feedback and assessment

Feedback and assessment is the area most frequently referenced when discussing how AI could reduce workload. Marking tools like Graide, Keath.ai, and LearnWise are changing this process by accelerating the marking phase. These tools leverage AI to deliver consistent and tailored feedback, providing students with clear, constructive insights to enhance their academic work. However, the adoption of AI in marking raises valid ethical concerns about its acceptability, such as the lack of human judgement and whether AI can mark consistently and fairly.

    Supporting accessibility

AI can play a crucial role in enhancing accessibility within educational environments, ensuring that learning materials are inclusive and accessible to all students. By integrating AI-driven tools such as automated captioning and text-to-speech applications, universities can significantly improve the accessibility of digital resources.

    AI’s capability to tailor learning materials is particularly beneficial for students with diverse educational needs. It can reformat text, translate languages, and simplify complex information to make it more digestible. This ensures that all students, regardless of their learning abilities or language proficiency, have equal opportunities to access and understand educational content.

    Despite the benefits, the use of AI tools like Grammarly raises concerns about academic integrity. These tools have the potential to enhance or even alter students’ original work, which may lead to questions about the authenticity of their submissions. This issue highlights the need for clear guidelines and ethical considerations in the use of AI to support academic work without compromising integrity.

    Another significant issue is equity of access to these tools. Many of the most effective AI-driven accessibility tools are premium services, which may not be affordable for all students, potentially widening the digital divide.

    Student support – chatbots

    AI chatbots are increasingly recognised as valuable tools in the tertiary education sector, streamlining student support and significantly reducing staff workload. These increasingly sophisticated systems are adept at managing a wide array of student queries, from routine administrative questions to more detailed academic support, thereby allowing human resources to focus on tasks requiring more nuanced and personal interactions. They can be customised to meet the specific needs of a university, ensuring that they provide accurate and relevant information to students.

Chatbots such as LearnWise are designed to enhance student interactions by providing more tailored and contextually aware responses. For instance, on a university’s website, if a student expresses interest in gaming, the chatbot can suggest relevant courses, highlight the available facilities and point to relevant extracurricular activities, integrating seamlessly with the student’s interests and academic goals. This level of tailoring enhances the interaction quality and improves the student experience.

    Administrative efficiency

    AI is positively impacting the way administrative tasks are handled within educational institutions, changing the way everyday processes are managed. By automating routine and time-consuming tasks, AI technologies can alleviate the administrative load on staff, allowing them to dedicate more time to strategic and student-focused activities.

AI tools such as Copilot and Gemini can help staff draft, organise, and prioritise emails. These tools can suggest responses based on the content received, check the tone of emails, manage scheduling by integrating with calendar apps, and remind lecturers of pending tasks or follow-ups, enhancing efficiency within the institution.

    Staff frequently deal with extensive documentation, from student reports to research papers and institutional policies. AI tools can assist in checking, proofreading and summarising papers and reports, and can help with data analysis, generating insights, graphs and graphics to help make data more easily digestible.

    How is AI being used in your institution?

At Jisc we are collating practical case studies to create a comprehensive overview of how AI is being used across tertiary education. This includes a wide range of examples supporting the effective integration of AI into teaching and administration, which will be used to highlight best practice, support those just getting started with AI, help overcome challenges being faced across the sector, and showcase the opportunities available to all.

    We want to hear how AI is being used at your organisation, from enhancing everyday tasks to complex and creative use cases. You can explore these resources and find out how to contribute by visiting the Jisc AI Resource Hub.

For more information about the use of digital technology and AI in tertiary education, sign up to receive on-demand access to key sessions from Jisc’s flagship teaching and learning event, Digifest, running 11–12 March.

    Source link

  • AI Support for Teachers

    AI Support for Teachers

    Collaborative Classroom, a leading nonprofit publisher of K–12 instructional materials, announces the publication of SIPPS, a systematic decoding program. Now in a new fifth edition, this research-based program accelerates mastery of vital foundational reading skills for both new and striving readers.

    Twenty-Five Years of Transforming Literacy Outcomes

    “As educators, we know the ability to read proficiently is one of the strongest predictors of academic and life success,” said Kelly Stuart, President and CEO of Collaborative Classroom. “Third-party studies have proven the power of SIPPS. This program has a 25-year track record of transforming literacy outcomes for students of all ages, whether they are kindergarteners learning to read or high schoolers struggling with persistent gaps in their foundational skills.

    “By accelerating students’ mastery of foundational skills and empowering teachers with the tools and learning to deliver effective, evidence-aligned instruction, SIPPS makes a lasting impact.”

    What Makes SIPPS Effective?

    Aligned with the science of reading, SIPPS provides explicit, systematic instruction in phonological awareness, spelling-sound correspondences, and high-frequency words. 

    Through differentiated small-group instruction tailored to students’ specific needs, SIPPS ensures every student receives the necessary targeted support—making the most of every instructional minute—to achieve grade-level reading success.

“SIPPS is uniquely effective because it accelerates foundational skills through its mastery-based and small-group targeted instructional design,” said Linda Diamond, author of the Teaching Reading Sourcebook. “Grounded in the research on explicit instruction, SIPPS provides ample practice, active engagement, and frequent response opportunities, all validated as essential for initial learning and retention of learning.”

    Personalized, AI-Powered Teacher Support

    Educators using SIPPS Fifth Edition have access to a brand-new feature: immediate, personalized responses to their implementation questions with CC AI Assistant, a generative AI-powered chatbot.

    Exclusively trained on Collaborative Classroom’s intellectual content and proprietary program data, CC AI Assistant provides accurate, reliable information for educators.

    Other Key Features of SIPPS, Fifth Edition

    • Tailored Placement and Progress Assessments: A quick, 3–8 minute placement assessment ensures each student starts exactly at their point of instructional need. Ongoing assessments help monitor progress, adjust pacing, and support grouping decisions.
    • Differentiated Small-Group Instruction: SIPPS maximizes instructional time by focusing on small groups of students with similar needs, ensuring targeted, effective teaching.
    • Supportive of Multilingual Learners: Best practices in multilingual learner (ML) instruction and English language development strategies are integrated into the design of SIPPS.
    • Engaging and Effective for Older Readers: SIPPS Plus and SIPPS Challenge Level are specifically designed for students in grades 4–12, offering age-appropriate texts and instruction to close lingering foundational skill gaps.
    • Multimodal Supports: Integrated visual, auditory, and kinesthetic-tactile strategies help all learners, including multilingual students.
    • Flexible, Adaptable, and Easy to Teach: Highly supportive for teachers, tutors, and other adults working in classrooms and expanded learning settings, SIPPS is easy to implement well. A wraparound system of professional learning support ensures success for every implementer.

    Accelerating Reading Success for Students of All Ages

    In small-group settings, students actively engage in routines that reinforce phonics and decoding strategies, practice with aligned texts, and receive immediate feedback—all of which contribute to measurable gains.

    “With SIPPS, students get the tools needed to read, write, and understand text that’s tailored to their specific abilities,” said Desiree Torres, ENL teacher and 6th Grade Team Lead at Dr. Richard Izquierdo Health and Science Charter School in New York. “The boost to their self-esteem when we conference about their exam results is priceless. Each and every student improves with the SIPPS program.” 

    Kevin Hogan

    Source link

  • Use of AI in Industries and Organizations: 2025 – Sovorel

    Use of AI in Industries and Organizations: 2025 – Sovorel

    AI skills are now needed by all students in every field and organization. This document focuses on the top ten industries/organizations, explains how AI is already being used in those fields, and breaks down the AI skills and subskills needed by all students.

AI Literacy is an imperative that all students need to develop in order to be more competitive and effective in the workforce, to enhance their own learning, gain greater access to information, improve their research capabilities, and be better citizens with resistance to deepfakes and digital propaganda. This isn’t hyperbole or a future concern; this is the reality of today. “Use of AI in Industries and Organizations: 2025” is an original document created by the Sovorel Center for Teaching & Learning and written by Director Brent A. Anders, PhD. Its purpose is to help all of academia see and understand the real need for AI Literacy and specific AI skills for all fields.

    PDF download: Use of AI in Industries and Organizations: 2025 (3.1MB)

    In an effort to help all of academia and the rest of the world, this document is licensed under a Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0) so that it can be fully used by others.

    Suggested Citation:

    Anders, B. (2025, March). Use of AI in industries and organizations: 2025. Sovorel Center for Teaching & Learning. https://sovorelpublishing.com/index.php/2025/03/02/use-of-ai-in-industries-and-organizations-2025

    Source link

  • 25 (Mostly AI) Sessions to Enjoy in 2025 – The 74

    25 (Mostly AI) Sessions to Enjoy in 2025 – The 74



    South by Southwest Edu returns to Austin, Texas, running March 3-6. As always, it’ll offer a huge number of panels, discussions, film screenings, musical performances and workshops exploring education, innovation and the future of schooling.

Keynote speakers this year include neuroscientist Anne-Laure Le Cunff, founder of Ness Labs, an online educational platform for knowledge workers; astronaut, author and TV host Emily Calandrelli; and Shamil Idriss, CEO of Search for Common Ground, an international non-profit. Idriss will speak about what it means to be strong in the face of opposition — and how to turn conflict into cooperation. Also featured: indie musical artist Jill Sobule, singing selections from her musical F*ck 7th Grade.

    As in 2024, artificial intelligence remains a major focus, with dozens of sessions exploring AI’s potential and pitfalls. But other topics are on tap as well, including sessions on playful learning, book bans and the benefits of prison journalism. 

    To help guide the way, we’ve scoured the schedule to highlight 25 of the most significant presenters, topics and panels: 

    Monday, March 3:

    11 a.m. — Ultimate Citizens Film Screening: A new independent film features a Seattle school counselor who builds a world-class Ultimate Frisbee team with a group of immigrant children at Hazel Wolf K-8 School. 

    11:30 a.m. — AI & the Skills-First Economy: Navigating Hype & Reality: Generative AI is accelerating the adoption of a skills-based economy, but many are skeptical about its value, impact and the pace of growth. Will AI spark meaningful change and a new economic order, or is it just another overhyped trend? Meena Naik of Jobs for the Future leads a discussion with Colorado Community College System Associate Vice Chancellor Michael Macklin, Nick Moore, an education advisor to Alabama Gov. Kay Ivey, and Best Buy’s Ryan Hanson.

    11:30 a.m. — Navigation & Guidance in the Age of AI: The Clayton Christensen Institute’s Julia Freeland Fisher headlines a panel that looks at how generative AI can help students access 24/7 help in navigating pathways to college. As new models take root, the panel will explore what entrepreneurs are learning about what students want from these systems. Will AI level the playing field or perpetuate inequality? 

    12:30 p.m. — Boosting Student Engagement Means Getting Serious About Play: New research shows students who are engaged in schoolwork not only do better in school but are happier and more confident in life. And educators say they’d be happier at work and less likely to leave the profession if students engaged more deeply. In this session, LEGO Education’s Bo Stjerne Thomsen will explore the science behind playful learning and how it can get students and teachers excited again.

    1:30 p.m. — The AI Sandbox: Building Your Own Future of Learning: Mike Yates of The Reinvention Lab at Teach for America leads an interactive session offering participants the chance to build their own AI tools to solve real problems they face at work, school or home. The session is for AI novices as well as those simply curious about how the technology works. Participants will get free access to Playlab.AI.

    2:30 p.m. — Journalism Training in Prison Teaches More Than Headlines: Join Charlotte West of Open Campus, Lawrence Bartley of The Marshall Project and Yukari Kane of the Prison Journalism Project to explore real-life stories from behind bars. Journalism training is transforming the lives of a few of the more than 1.9 million people incarcerated in the U.S., teaching skills from time management to communication and allowing inmates to feel connected to society while building job skills. 

    Tuesday, March 4:

    11:30 a.m. — Enough Talk! Let’s Play with AI: Amid the hand-wringing about what AI means for the future of education, there’s been little conversation about how a few smart educators are already employing it to shift possibilities for student engagement and classroom instruction. In this workshop, attendees will learn how to leverage promising practices emerging from research with real educators using AI in writing, creating their own chatbots and differentiating support plans. 

    12:30 p.m. — How Much is Too Much? Navigating AI Usage in the Classroom: AI-enabled tools can be helpful for students conducting research, outlining written work, or proofing and editing submissions. But there’s a fine line between using AI appropriately and taking advantage of it, leaving many students wondering, “How much AI is too much?” This session, led by Turnitin’s Annie Chechitelli, will discuss the rise of GenAI, its intersection with academia and academic integrity, and how to determine appropriate usage.  

    1 p.m. — AI & Edu: Sharing Real Classroom Successes & Challenges: Explore the real-world impact of AI in education during this interactive session hosted by Zhuo Chen, a text analysis instructor at the nonprofit education startup Constellate, and Dylan Ruediger of the research and consulting group Ithaka S+R. Chen and Ruediger will share successes and challenges in using AI to advance student learning, engagement and skills. 

    1 p.m. — Defending the Right to Read: Working Together: In 2025, authors face unprecedented challenges. This session, which features Scholastic editor and young adult novelist David Levithan, as well as Emily Kirkpatrick, executive director of the National Council of Teachers of English, will explore the battle for freedom of expression and the importance of defending reading in the face of censorship attempts and book bans.

    1 p.m. — Million Dollar Advice: Navigating the Workplace with Amy Poehler’s Top Execs: Kate Arend and Kim Lessing, the co-presidents of Amy Poehler’s production company Paper Kite Productions, will be live to record their workplace and career advice podcast “Million Dollar Advice.” The pair will tackle topics such as setting and maintaining boundaries, learning from Gen Z, dealing with complicated work dynamics, and more. They will also take live audience questions.

    4 p.m. — Community-Driven Approaches to Inclusive AI Education: With rising recognition of neurodivergent students, advocates say AI can revolutionize how schools support them by streamlining tasks, optimizing resources and enhancing personalized learning. In the process, schools can overcome challenges in mainstreaming students with learning differences. This panel features educators and advocates as well as Alex Kotran, co-founder and CEO of The AI Education Project.

    4 p.m. — How AI Makes Assessment More Actionable in Instruction: Assessments are often disruptive, cumbersome or disconnected from classroom learning. But a few advocates and developers say AI-powered assessment tools offer an easier, more streamlined way for students to demonstrate learning — and for educators to adapt instruction to meet their needs. This session, moderated by The 74’s Greg Toppo, features Khan Academy’s Kristen DiCerbo, Curriculum Associates’ Kristen Huff and Akisha Osei Sarfo, director of research at the Council of the Great City Schools.

    Wednesday, March 5:

    11 a.m. — Run, Hide, Fight: Growing Up Under the Gun Screening & Q&A: Gun violence is now the leading cause of death for American children and teens, according to the federal Centers for Disease Control and Prevention, yet coverage of gun violence’s impact on youth is usually reported by adults. Run, Hide, Fight: Growing Up Under the Gun is a 30-minute documentary by student journalists about how gun violence affects young Americans. Produced by PBS News Student Reporting Labs in collaboration with 14 student journalists in five cities, it centers the perspectives of young people who live their lives in the shadow of this threat. 

    11:30 a.m. — AI, Education & Real Classrooms: Educators are at the forefront of testing, using artificial intelligence and teaching their communities about it. In this interactive session, participants will hear from educators and ed tech specialists on the ground working to support the use of AI to improve learning. The session includes Stacie Johnson, director of professional learning at Khan Academy, and Dina Neyman, Khan Academy’s director of district success. 

11:30 a.m. — The Future of Teaching in an Age of AI: As AI becomes increasingly present in the classroom, educators are understandably concerned about how it might disrupt their teaching. An expert panel featuring Jake Baskin, executive director of the Computer Science Teachers Association, and Karim Meghji of Code.org will look at how teaching will change in an age of AI, exploring frameworks for teaching AI skills and sharing best practices for integrating AI literacy across disciplines.

2:30 p.m. — AI in Education: Preparing Gen A as the Creators of Tomorrow: Generation Alpha is the first to experience generative artificial intelligence from the start of their educational journeys. Thriving in a world shaped by AI requires educators to help them tap into their natural creativity and navigate unique opportunities and challenges. In this session, a cross-industry panel of experts discusses strategies to integrate AI into learning, allowing critical thinking and curiosity to flourish while enabling early learners to become architects of AI, not just users.

    2:30 p.m. — The Ethical Use of AI in the Education of Black Children: Join a panel of educators, tech leaders and nonprofit officials as they discuss AI’s ethical complexities and its impact on the education of Black children. This panel will address historical disparities, biases in technology, and the critical need for ethical AI in education. It will also offer unique perspectives into the benefits and challenges of AI in Black children’s education, sharing best practices to promote the safe, ethical and legal use of AI in classrooms.

    2:30 p.m. — Exploring Teacher Morale State by State: Is teacher morale shaped by where teachers work? Find out as Education Week releases its annual State of Teaching survey. States and school districts drive how teachers are prepared, paid and promoted, and the findings will raise new questions about what leaders and policymakers should consider as they work to support an essential profession. The session features Holly Kurtz, director of EdWeek Research Center, Stephen Sawchuk, EdWeek assistant managing editor, and assistant editor Sarah D. Sparks.

    2:30 p.m. — From White Folks Who Teach in the Hood: Is This Conversation Against the Law Now? While most students in U.S. public schools are now young people of color, more than 80% of their teachers are white. How do white educators understand and address these dynamics? Join a live recording of a podcast that brings together white educators with Christopher Emdin and sam seidel, co-editors of From White Folks Who Teach in the Hood: Reflections on Race, Culture, and Identity (Beacon, 2024).

    3:30 p.m. — How Youth Use GenAI: Time to Rethink Plagiarism: Schools are locked in a battle with students over fears they’re using generative artificial intelligence to plagiarize existing work. In this session, join Elliott Hedman, a “customer obsession engineer” with mPath, who with colleagues and students co-designed a GenAI writing tool to reframe AI use. Hedman will share three strategies that not only prevent plagiarism but also teach students how to use GenAI more productively.  

    Thursday, March 6:

    10 a.m. — AI & the Future of Education: Join futurists Sinead Bovell and Natalie Monbiot for a fireside discussion about how we prepare kids for a future we cannot yet see but know will be radically transformed by technology. Bovell and Monbiot will discuss the impact of artificial intelligence on our world and the workforce, as well as its implications for education. 

    10 a.m. — Reimagining Everyday Places as Early Learning Hubs: Young children spend 80% of their time outside of school, but too many lack access to experiences that encourage learning through hands-on activities and play. While these opportunities exist in middle-class and upper-income neighborhoods, they’re often inaccessible to families in low-income communities. In this session, a panel of designers and educators featuring Sarah Lytle, who leads the Playful Learning Landscapes Action Network, will look at how communities are transforming overlooked spaces such as sidewalks, shelters and even jails into nurturing learning environments accessible to all kids.

    11 a.m. — Build-a-Bot Workshop: Make Your Own AI to Make Sense of AI: In this session, participants will build an AI chatbot alongside designers and engineers from Stanford University and Stanford’s d.school, getting to the core of how AI works. Participants will conceptualize, outline and create conversation flows for their own AI assistant and explore methods that technical teams use to infuse warmth and adaptability into interactions and develop reliable chatbots.  

    11:30 a.m. — Responsible AI: Balancing Innovation, Impact, & Ethics: In this session, participants will learn how educators, technologists and policymakers work to develop AI responsibly. Panelists include Isabelle Hau of the Stanford Accelerator for Learning, Amelia Kelly, chief technology officer of the Irish AI startup SoapBox Labs, and Latha Ramanan of the AI developer Merlyn Mind. They’ll talk about how policymakers and educators can work with developers to ensure transparency and accuracy of AI tools. 



    Source link

  • Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)

    Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)


    This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.

    The plot contours are easy. A PhD student at the University of Minnesota was accused of using AI on a required pre-dissertation exam and removed from the program. He denies that allegation and has sued the school — and one of his professors — for due process violations and defamation respectively.

    Starting the case.

    The coverage reports that:

    all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software. 

Personally, when I see that four members of the faculty unanimously agreed that his work was not authentic, I am out. I trust teachers.

I know what a serious thing it is to accuse someone of cheating; I know teachers do not take such things lightly. When four go on the record to say so, I’m convinced. Barring some personal grievance or prejudice, which could happen, it’s hard for me to believe that all four subject-matter experts were just wrong here. Also, if there was bias or petty politics at play, it probably would have shown up before the student’s third year, not just before starting his dissertation.

    Moreover, at least as far as the coverage is concerned, the student does not allege bias or program politics. His complaint is based on due process and inaccuracy of the underlying accusation.

    Let me also say quickly that asking ChatGPT for answers you plan to compare to suspicious work may be interesting, but it’s far from convincing — in my opinion. ChatGPT makes stuff up. I’m not saying that answer comparison is a waste, I just would not build a case on it. Here, the university didn’t. It may have added to the case, but it was not the case. Adding also that the similarities between the faculty-created answers and the student’s — both are included in the article — are more compelling than I expected.

    Then you add detection software, which the article later shares showed high likelihood of AI text, and the case is pretty tight. Four professors, similar answers, AI detection flags — feels like a heavy case.

    Denied it.

    The article continues that Yang, the student:

    denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect. 

    Although it’s not specified, it is likely that Yang is referring to the research from Stanford that has been — or at least ought to be — entirely discredited (see Issue 216 and Issue 251). For the love of research integrity, the paper has invented citations — sources that go to papers or news coverage that are not at all related to what the paper says they are.

    Does anyone actually read those things?

    Back to Minnesota, Yang says that as a result of the findings against him and being removed from the program, he lost his American study visa. Yang called it “a death penalty.”

    With friends like these.

    Also interesting is that, according to the coverage:

    His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.” 

    That would be a fair point except that the next paragraph is:

    Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable. 

    That’s ridiculous. I’m sorry, it is. The dude who lets students use AI because he thinks AI is “impossible to prevent or detect,” the guy who has never used ChatGPT himself, and thinks that Google Scholar and auto-complete are “comparable” to AI — that’s the person speaking up for the guy who says he did not use AI. Wow.

    That guy says:

    “I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”

Time out. Is it not at least possible that professor Dowd thinks student Yang is an excellent student because Yang was using AI all along, and our professor doesn’t care to ascertain the difference? Also, mind you, as far as we can learn from this news story, Dowd does not even say Yang is innocent. He says the punishment is “odd,” that the case is hard to establish, and that Yang was a good student who did not need to use AI. Although, again, I’m not sure how well professor Dowd would know.

    As further evidence of Yang’s scholastic ability, Dowd also points out that Yang has a paper under consideration at a top academic journal.

    You know what I am going to say.

    To me, that entire Dowd diversion is mostly funny.

    More evidence.

    Back on track, we get even more detail, such as that the exam in question was:

    an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. 

    Exam graders argued the AI use was obvious enough. Yang disagrees. 

    Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples. 

    So, the exam expressly banned AI. And we learn that, as part of the determination of the professors, they compared his exam answers with past writing.

    I say all the time, there is no substitute for knowing your students. If the initial four faculty who flagged Yang’s work had him in classes and compared suspicious work to past work, what more can we want? It does not get much better than that.

    Then there’s even more evidence:

    Yang also objects to professors using AI detection software to make their case at the November hearing.  

    He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. 

    “Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.  

    First, GPTZero is junk. It’s reliably among the worst available detection systems. Even so, 89% is a high number. And most importantly, the case against Yang is not built on AI detection software alone, as no case should ever be. It’s confirmation, not conviction. Also, Yang, who the paper says already has one PhD, knows exactly what an accuracy rate of 99% means. Be serious.
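For anyone who wants the arithmetic behind that point, here is a rough back-of-envelope sketch in Python. Every number in it is an assumption made up for illustration (nothing below comes from the Minnesota case), but it shows why even a detector claiming 99% accuracy still flags innocent students at scale, and why detector output should only ever be confirmation:

# Hypothetical back-of-envelope: what a "99% accurate" AI detector implies at scale.
# All numbers are illustrative assumptions, not data from the Minnesota case.
submissions = 10_000       # assumed number of exams screened in a year
cheating_rate = 0.05       # assumed share of submissions that actually used AI
sensitivity = 0.99         # assumed: flags 99% of genuine AI use
specificity = 0.99         # assumed: clears 99% of honest work

ai_written = submissions * cheating_rate
human_written = submissions - ai_written

true_flags = ai_written * sensitivity            # cheaters correctly flagged
false_flags = human_written * (1 - specificity)  # honest students wrongly flagged

print(f"True flags:  {true_flags:.0f}")   # 495
print(f"False flags: {false_flags:.0f}")  # 95
print(f"Chance a flagged student actually used AI: "
      f"{true_flags / (true_flags + false_flags):.0%}")  # 84%

Under those made-up assumptions, roughly one flagged student in six would be innocent, which is exactly why a detector score can support a case built on other evidence but should never be the case on its own.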

    A pattern.

    Then we get this, buried in the news coverage:

    Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored in his expulsion.  

    He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. 

    In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.”  She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.

    She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote. 

    I’m sorry — what now?

    Yang says he was accused of using AI in academic work in “at least three other instances.” For which he was, of course, not disciplined. In one of those cases, Yang literally turned in a paper with this:

    “re write it, make it more casual, like a foreign student write but no ai.” 

    He said he used ChatGPT to check his English and asked ChatGPT to shorten his writing. But he did not use AI. How does that work?

    For that one where he left in the prompts to ChatGPT:

    the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations. 

    Yang was warned, in writing.

    If you’re still here, we have four professors who agree that Yang’s exam likely used AI, in violation of exam rules. All four had Yang in classes previously and compared his exam work to past hand-written work. His exam answers had similarities with ChatGPT output. An AI detector said, in at least one place, his exam was 89% likely to be generated with AI. Yang was accused of using AI in academic work at least three other times, by a fifth professor, including one case in which it appears he may have left in his instructions to the AI bot.

    On the other hand, he did say he did not do it.

    Findings, review.

    Further:

    But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.

    Several professors and students agreed with the accusations. Yang appealed and the school upheld the decision. Yang was gone. The appeal officer wrote:

    “PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.” 

    Slow clap.

    And slow clap for the University of Minnesota. The process is hard. Doing the review, examining the evidence, making an accusation — they are all hard. Sticking by it is hard too.

    Seriously, integrity is not a statement. It is action. Integrity is making the hard choice.

    MPR, spare me.

    Minnesota Public Radio is a credible news organization. Which makes it difficult to understand why they chose — as so many news outlets do — to not interview one single expert on academic integrity for a story about academic integrity. It’s downright baffling.

    Worse, MPR, for no specific reason whatsoever, decides to take prolonged shots at AI detection systems such as:

    Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. 

    “As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades. 

    We covered the OpenAI deception — and it was deception — in Issue 241, and in other issues. We covered the non-native English thing. And the neurodivergent thing. And the Grammarly thing. All of which MPR wraps up in the passive and deflecting “reports suggest.” No analysis. No skepticism.

    That’s just bad journalism.

    And, of course — anxiety. Rege, who please note has studied machine learning and not academic integrity, is predictable, but not credible here. He says, for example:

    it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions. 

    Absolute joke.

    I am not sorry — if you use the word “balance” in conjunction with the word “integrity,” you should not be teaching. Especially if what you’re weighing against lying and fraud is the value of embracing innovation. And if you needed further evidence for his absurdity, we get the “personal reflections and project-based learnings” buffoonery (see Issue 323). But, again, the error here is MPR quoting a professor of machine learning about course design and integrity.

    MPR also quotes a student who says:

    she and many other students live in fear of AI detection software.  

    “AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said. 

Nope. Please, please tell me I don’t need to go through all the reasons that’s absurd. Find me one single case in which an AI detector alone sent a student home. One.

    Two final bits.

    The MPR story shares:

In the 2023-24 school year, the University of Minnesota found 188 students responsible for scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus.

    Just noteworthy. Also, it is interesting that 188 were “responsible.” Considering how rare it is to be caught, and for formal processes to be initiated and upheld, 188 feels like a real number. Again, good for U of M.

    The MPR article wraps up that Yang:

found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career.

    Stating the obvious, like the University of Minnesota, I could not bring myself to trust Yang’s data. And I do actually hope that being kicked out of a university for cheating would impact his academic career.

    And finally:

    “Probably I should think to do something, selling potatoes on the streets or something else,” he said. 

    Dude has a PhD in economics from Utah State University. Selling potatoes on the streets. Come on.

    Source link

  • Building and Sustaining an AI-informed Institution

    Building and Sustaining an AI-informed Institution

    Title: Navigating Artificial Intelligence in Postsecondary Education: Building Capacity for the Road Ahead

    Source: Office of Educational Technology, U.S. Department of Education

In response to the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the Department of Education’s new brief, Navigating Artificial Intelligence in Postsecondary Education, provides recommendations for leaders at higher education institutions. The brief is divided into two main parts: one with policy recommendations and one reviewing literature and research.

    The report outlines five recommendations:

    Develop clear policies for the use of AI in postsecondary settings. The use of AI can be vast, from admissions to enrollment to other decision-making processes. It is important, though, to ensure that AI is not reifying bias. Stakeholders should consider the potential utility of an AI Bill of Rights or the National Institute of Standards and Technology’s AI Risk Management Framework in shaping policies for their campuses. They should also consider affirmative consent and disclosure policies as they relate to AI, as well as inscribing characteristics that make AI trustworthy.

    Generate infrastructure that supports the use of AI in pedagogy, student support, and data tracking. Incentivizing cross-department collaboration and faculty involvement in the development of AI tools is key. It is also important to integrate social and behavioral science research into evaluation of AI.

Continually assess AI tools. This includes testing for equity and accounting for any bias. AI tools should go through a continuous feedback loop, and institutions need a strategy for ensuring an appropriate level of human supervision. Additionally, evaluations should be comprehensive and draw on diverse stakeholders.

    Collaborate with partners for the development and testing of AI across different educational uses. Leaders are tasked with finding and building relationships with partners. These partnerships should aim to ensure best practices and promote equitable AI.

    Programs should grow and develop alongside the job market’s increased demand for AI. Leaders must consider how to keep up with the evolving demand for AI, as well as how to integrate across all disciplines.

    Click here for the full report.

    —Kara Seidel



    Source link

  • A higher education institution’s relationship with technology crosses all its missions

    A higher education institution’s relationship with technology crosses all its missions

    Universities have a critical role to play at the intersection of academic thought, organisational practice, and social benefits of technology.

    It’s easy when thinking about universities’ digital strategies to see that as a technical question of organisational capability and solutions rather than one part of the wider public role universities have in leading thinking and shaping practice for the benefit of society.

    But for universities the relationship with technology is multifaceted: some parts of the institution are engaged in driving forward technological developments; others may be critically assessing how those developments reshape the human experience and throw up ethical challenges that must be addressed; while others may be seeking to deploy technologies in the service of improving teaching and research. The question, then, for universities, must be how to bring these relationships together in a critical but productive way.

    Thinking into practice

The University of Edinburgh hosts one of the country’s foremost informatics and computer science departments, one of the largest centres of AI research in Europe. Edinburgh’s computing infrastructure lately hit the headlines when the Westminster government decided to cancel planned investment in a new supercomputing facility at the university, only to announce new plans for supercomputing investment in last week’s AI opportunities action plan, location as yet undetermined.

    But while the university’s technological research prowess is evident, there’s also a strong academic tradition of critical thought around technology – such as in the work of philosopher Shannon Vallor, director of the Centre for Technomoral Futures at the Edinburgh Futures Institute and author of The AI Mirror. In the HE-specific research field, Janja Komljenovic has explored the phenomenon of the “datafication” of higher education, raising questions of a mismatch and incoherence between how data is valued and used in different parts of an institution.

When I speak to Edinburgh’s principal Peter Mathieson ahead of his keynote at the upcoming Kortext Live leaders event in Edinburgh on 4 February, he’s reflecting on a key challenge: how to continue a legacy of thought leadership on digital technology and data science into the future, especially when the pace of technological change is so rapid?

    “It’s imperative for universities to be places that shape the debate, but also that study the advantages and disadvantages of different technologies and how they are adopted. We need to help the public make the best use of technology,” says Peter.

    There’s work going on to mobilise knowledge across disciplines, for example, data scientists interrogating Scotland’s unique identifier data to gain insights on public health – which was particularly important during Covid. The university is a lead partner in the delivery of the Edinburgh and south east Scotland city region deal, a key strand of which is focused on data-driven innovation. “The city region deal builds on our heritage of excellence in AI and computer science and brings that to addressing the exam question of how to create growth in our region, attract inward investment, and create jobs,” explains Peter.

    Peter is also of the opinion that more could be done to bring university expertise to bear across the education system. Currently the university is working with a secondary school to develop a data science programme that will see secondary pupils graduate with a data science qualification. Another initiative sees primary school classrooms equipped with sensors that detect earth movements in different parts of the world – Peter recounts having been proudly shown a squiggle on a piece of paper by two primary school pupils, which turned out to denote an earthquake in Tonga.

“Data education in schools is a really important function for universities,” he says. “It’s not a recruiting exercise – I see it as a way of the region and community benefiting from having a research intensive university in their midst.”

    Connecting the bits

    The elephant in the room is, of course, the link between academic knowledge and organisational practice, and where and how those come together in a university as large and decentralised as Edinburgh.

“There is a distinction between the academic mission and the day to day nuts and bolts,” Peter admits. “There is some irony that we are one of the finest computer science institutions but we had trouble installing our new finance system. But the capability we have in a place like this should allow us to feel positive about the opportunities to do interesting things with technology.”

Peter points to the university-wide enablement of the Internet of Things, which allows the university to monitor building usage and helps to identify where buildings may be under-utilised. As principal, Peter has also brought together estates and digital infrastructure business planning so that the physical and digital estate can be developed in tandem and with reference to each other rather than remaining in silos.

“Being able to make decisions based on data is very empowering,” he says. “But it’s important that we think very carefully about what data is anonymised and reassure people we are not trying to operate a surveillance system.” Peter is also interested in how AI could help to streamline large administrative tasks, and in the experimental deployment of generative AI across university activity. The university has developed its own AI innovation platform, ELM, the Edinburgh (access to) Language Models, which is free to use for all staff and students, and which gives the user access to large language models including the latest version of ChatGPT but, importantly, without sharing user data with OpenAI.
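The article doesn’t describe how ELM works under the hood, but the general pattern Peter describes, giving users access to commercial models without handing their identities to the provider, is usually implemented as an institutional gateway: users authenticate to the university, and the university forwards prompts under its own credentials and an opaque session ID. A minimal, hypothetical sketch of that pattern follows; the endpoint, model name and credential handling are placeholders, not ELM’s actual design.

# Hypothetical sketch of an institutional LLM gateway: prompts reach the model
# provider, but the user's identity never leaves the university. The endpoint,
# model name and credential handling are illustrative, not ELM's actual design.
import uuid
import requests

PROVIDER_URL = "https://api.example-provider.com/v1/chat/completions"  # placeholder
INSTITUTIONAL_KEY = "held-by-the-university"  # never issued to individual users

def ask_model(prompt: str, user_id: str) -> str:
    """Forward a prompt under an opaque session ID instead of the real user."""
    opaque_session = str(uuid.uuid4())  # replaces the user's identity downstream
    payload = {
        "model": "provider-model-name",  # placeholder model identifier
        "messages": [{"role": "user", "content": prompt}],
        "user": opaque_session,          # no personal data is sent to the provider
    }
    response = requests.post(
        PROVIDER_URL,
        json=payload,
        headers={"Authorization": f"Bearer {INSTITUTIONAL_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    # The mapping from user_id to opaque_session stays in the university's own
    # logs, which is where any governance or usage monitoring would happen.
    return response.json()["choices"][0]["message"]["content"]

The design choice that matters is the one Peter highlights: the provider sees the prompt, but only the institution knows who asked.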

    At the leadership level, Peter has endeavoured to put professional service leaders on the same footing as academic leaders rather than, as he says, “defining professional services by what they are not, ie non-academic.” It’s one example of the ways that roles and structures in universities are evolving, not necessarily as a direct response to technological change, but with technology being one of the aspects of social change that create a need inside universities for the ability to look at challenges from a range of professional perspectives.

    It’s rarely as straightforward as “automation leading to staffing reductions” though Peter is alive to the perceived risks and their implications. “People worry about automation leading to loss of jobs, but I think jobs will evolve in universities as they will elsewhere in society,” he says. “Much of the value of the university experience is defined by the human interactions that take place, especially in an international university, and we can’t replace physical presence on campus. I’m optimistic that humans can get more good than harm out of AI – we just need to be mindful that we will need to adapt more quickly to this innovation than to earlier technological advances like the printing press, or the Internet.”

This article is published in association with Kortext. Peter Mathieson will be giving a keynote address at the upcoming Kortext LIVE leaders’ event in Edinburgh on 4 February – join us there or at the London or Manchester events on 29 January and 6 February to find out more about Wonkhe and Kortext’s work on leading digital capability for learning, teaching and student success, and be part of the conversation.

    Source link

  • How AI Has Gone To The Dogs

    How AI Has Gone To The Dogs

One highlight from FETC’s Startup Pavilion is Florida-based Scholar Education, which uses AI chatbot dogs to help tutor students and give feedback to teachers. How it works: a friendly AI-powered classroom assistant provides academic guidance and encourages engagement, while the AI dogs deliver daily reports to parents so they can see feedback on their kids’ learning, creating a direct line of communication between home and school.

    Kevin Hogan

    Want to share a great resource? Let us know at [email protected].

    Source link

  • AI Literacy Resource for All – Sovorel

    AI Literacy Resource for All – Sovorel

There is no longer any way to deny that AI Literacy is a must for all people. Regardless of whether you are a student or a faculty member, young or old, all of us must continually develop our AI Literacy to effectively function and excel in our AI-infused world. The importance of everyone developing their AI Literacy has been expressed by virtually all nations and international organizations (UN, 2024; UN, 2024b). Additionally, many business organizations have expressed that in order to be competitive in the workforce, AI Literacy is now an imperative employment skill (Marr, 2024).

    The following Sovorel video and infographic (in addition to the above infographic) provide key components of AI Literacy and specifics regarding prompt engineering and using an advanced prompt formula:

    AI Literacy: Prompt Engineering, Advanced Prompt Formula Infographic (this infographic, the main AI Literacy infographic, and many more are also available within the infographics section: https://sovorelpublishing.com/index.php/infographics)

     

    References

    Cisco. (2024, July 31). AI and the workforce: Industry report calls for reskilling and upskilling as 92 percent of technology roles evolve. Cisco. https://investor.cisco.com/news/news-details/2024/AI-and-the-Workforce-Industry-Report-Calls-for-Reskilling-and-Upskilling-as-92-Percent-of-Technology-Roles-Evolve/default.aspx

    Marr, B. (2024, October 24). The 5 most in-demand skills in 2025. Forbes. https://www.forbes.com/sites/bernardmarr/2024/10/14/the-5-most-in-demand-skills-in-2025/

    UN. (2024). Addendum on AI and Digital Government. United Nations. https://desapublications.un.org/sites/default/files/publications/2024-10/Addendum%20on%20AI%20and%20Digital%20Government%20%20E-Government%20Survey%202024.pdf

    UN. (2024b). Governing AI for humanity. United Nations. https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf

    Source link