Category: AI

  • What should the higher education sector do about AI fatigue?

    What should the higher education sector do about AI fatigue?

    Raise the topic of AI in education for discussion these days and you can feel the collective groan in the room.

    Sometimes I even hear it. We’re tired, I get it. Many students are too. But if we don’t keep working creatively to address the disruption to education posed by AI – if we just wait and see how it plays out – it will be too late.

    AI fatigue is many things

    There are a few factors at play, from an AI literacy divide, to simply talking past each other.

    AI literacy is nearly unmanageable. The complexity of AI in education, exacerbated by the pace of technological change, makes AI “literacy” very difficult to define, let alone attain. Educators represent a wide range of experience levels and conceptual frames, as well as differing opinions on the power, quality, opportunity, and risk of generative AI.

One person will see AI as a radical first step in an intelligence revolution; the next will dismiss it as “mostly rubbish” and minimise the value of discussing it at all. And, as far as I have found, there is no leading definition of AI literacy to date. Some people don’t even like the term literacy.

    Our different conceptual frames compete with each other. Many disciplines and conceptual orientations are trying to talk together, each with their own assumptions and incentives. In any given space, we have the collision of expert with novice, entrepreneur with critic, sceptic with optimist, reductionist with holist… and the list goes on.

    We tend to silo and specialise. Because it is difficult to become comprehensively literate in generative AI (and its related issues), many adopt a narrow focus and stick with that: assessment design, academic integrity, authorship, cognitive offloading, energy consumption, bias, labour ethics, and others. Meetings take on the character of debates. At the very least, discussions of AI are time-consuming, as each focus seems to need airing every day.

    We feel grief for what we may be losing: human authorship, agency, status, and a whole range of normative relational behaviours. A colleague recently told me how sad she feels marking student work. Authorship, for example, is losing coherence as a category or shared value, which can be surreal and dispiriting for both writers and readers. AI’s disruption brings a deeply challenging emotional experience that’s rarely discussed.

    We are under-resourced. Institutions have been slow to roll out policy, form working groups, provide training, or fund staff time to research, prepare, plan, and design responses. It’s a daunting task to just keep up with, let alone get ahead of, Silicon Valley. Unfortunately, the burden is largely borne by individuals.

    The AI elephant in the room

    Much of the sector suffers from the wishful thinking that AI is “mostly rubbish”, not likely to change things much, or simply an annoyance. Many educators haven’t thought through how AI technologies may lead our societies and our education systems to change radically and quickly, and that these changes may impact the psychology of learning and teaching, not to mention the entire infrastructure of education. We talk past each other.

    Silicon Valley is openly pursuing artificial general intelligence (AGI), or something like that. Imagine a ChatGPT that can do your job, my job, and a big piece of the knowledge-work jobs recent graduates may hope to enter. Some insiders think this could arrive by 2027.

A few weeks ago, Dario Amodei, CEO of AI company Anthropic, predicted that 50 per cent of entry-level office jobs could vanish within the next couple of years, and that unemployment overall could hit 20 per cent. This could be mostly hype or confirmation bias among the tech elite. But IBM, Klarna, and Duolingo have already cited AI-linked efficiencies in recent layoffs.

Whether these changes take two years, or five, or even ten, it’s on the radar. So, let’s pause and imagine it. What happens to a generation of young people who perceive shrinking job prospects, options and social purpose?

    Set aside, for now, what this means for cities, mental health, or the social fabric. What does it mean for higher education – especially if a university degree no longer holds the value it once promised? How should HE respond?

    Responding humanely

    I propose we respond with compassion, humanity… and something like a plan. What does this look like? Let me suggest a few possibilities.

The sector works together. Imagine this: a consortium of institutions gathers a resource base and discussion space (not social media) for AI in education. It respects diversity of positions and conceptual frames but also aims for a coherent and pragmatic working ethos that helps institutions and individuals make decisions. It drafts a change management plan for the sector, embracing adaptive management to create frameworks that support institutions to respond quickly, intelligently, flexibly, and humanely to the instability. It won’t resolve all the mess into a coherent solution, but it could provide a more stable framework for change – and lift the burden from thousands of us who feel we are reinventing the wheel every day.

    Institutions take action. Leading institutions embrace big discussions around the future of society, work, and education. They show a staunch willingness to face the risks and opportunities ahead, they devote resources to the project, and they take actions that support both staff and students to navigate change thoughtfully.

Individuals and small groups are empowered to respond creatively. Supported by the sector and their HEIs, they collaborate to keep each other motivated, check each other on the hype, and find creative new avenues for teaching and learning. We solve problems for today while holding space for the messy discussions, speculating on future developments, and experimenting with education in a changing world.

    So sector leaders, please help us find some degree of convergence or coherence; institutions, please take action to resource and support your staff and students; and individuals, let’s work together to do something good.

    With leadership, action, and creative collaboration, we may just find the time and energy to build new language and vision for the strange landscape we have entered, to experiment safely with new models of knowledge creation and authorship, and to discover new capacities for self-knowledge and human value.

    So groan, yes – I groan with you. And breathe – I’ll go along with that too. And then, let’s see what we can build.

    Source link

  • Careers services can help students avoid making decisions based on AI fears

    Careers services can help students avoid making decisions based on AI fears

    How students use AI tools to improve their chances of landing a job has been central to the debate around AI and career advice and guidance. But there has been little discussion about AI’s impact on students’ decision making about which jobs and sectors they might enter.

    Jisc has recently published two studies that shine light on this area. Prospects at Jisc’s Early Careers Survey is an annual report that charts the career aspirations and experiences of more than 4,000 students and graduates over the previous 12 months. For the first time, the survey’s dominant theme was the normalisation of the use of AI tools and the influence that discourse around AI is having on career decision making. And the impact of AI on employability was also a major concern of Jisc’s Student Perceptions of AI Report 2025, based on in-depth discussions with over 170 students across FE and HE.

    Nerves jangling

    The rapid advancements in AI raise concerns about its long-term impact, the jobs it might affect, and the skills needed to compete in a jobs market shaped by AI. These uncertainties can leave students and graduates feeling anxious and unsure about their future career prospects.

    Important career decisions are already being made based on perceptions of how AI may change work. The Early Careers Survey found that one in ten students had already changed their career path because of AI.

    Plans were mainly altered because students feared that their chosen career was at risk of automation, anticipating fewer roles in certain areas and some jobs becoming phased out entirely. Areas such as coding, graphic design, legal, data science, film and art were frequently mentioned, with creative jobs seen as more likely to become obsolete.

However, it is important not to get carried away on a wave of pessimism. Respondents were also pivoting to future-proof their careers. Many students see huge potential in AI, opting for careers that make use of the new technology or those that AI has helped create.

    But whether students see AI as an opportunity or a threat, the role of university careers and employability teams is the same in both cases. How do we support students in making informed decisions that are right for them?

    From static to electricity

    In today’s AI-driven landscape, careers services must evolve to meet a new kind of uncertainty. Unlike previous transitions, students now face automation anxiety, career paralysis, and fears of job displacement. This demands a shift away from static, one-size-fits-all advice toward more personalised, future-focused guidance.

    What’s different is the speed and complexity of change. Students are not only reacting to perceived risks but also actively exploring AI-enhanced roles. Careers practitioners should respond by embedding AI literacy, encouraging critical evaluation of AI-generated advice, and collaborating with employers to help students understand the evolving world of work.

    Equity must remain central. Not all students have equal access to digital tools or confidence in using them. Guidance must be inclusive, accessible, and responsive to diverse needs and aspirations.

    Calls to action should involve supporting students in developing adaptability, digital fluency, and human-centred skills like creativity and communication. Promote exploration over avoidance, and values-based decision-making over fear, helping students align career choices with what matters most to them.

    Ultimately, careers professionals are not here to predict the future, but to empower all students and early career professionals to shape it with confidence, curiosity, and resilience.

    On the balance beam

    This isn’t the first time that university employability teams have had to support students through change, anxiety, uncertainty or even decision paralysis when it comes to career planning, but the driver is certainly new. Through this uncertainty and transition, students and graduates need guidance from everyone who supports them, in education and the workplace.

    Collaborating with industry leaders and employers is key to ensuring students understand the AI-enhanced labour market, the way work is changing and that relevant skills are developed. Embedding AI literacy in the curriculum helps students develop familiarity and understand the opportunities as well as limitations. Jisc has launched an AI Literacy Curriculum for Teaching and Learning Staff to support this process.

    And promoting a balanced approach to career research and planning is important. The Early Careers Survey found almost a fifth of respondents are using generative AI tools like ChatGPT and Microsoft Copilot as a source of careers advice, and the majority (84 per cent) found them helpful.

    While careers and employability staff welcome the greater reach and impact AI enables, particularly in challenging times for the HE sector, colleagues at an AGCAS event were clear to emphasise the continued necessity for human connection, describing AI as “augmenting our service, not replacing it.”

    We need to ensure that students understand how to use AI tools effectively, spot when the information provided is outdated or incorrect, and combine them with other resources to ensure they get a balanced and fully rounded picture.

    Face-to-face interaction – with educators, employers and careers professionals – provides context and personalised feedback and discussion. A focus on developing essential human skills such as creativity, critical thinking and communication remains central to learning. After all, AI doesn’t just stand for artificial intelligence. It also means authentic interaction, the foundation upon which the employability experience is built.

    Guiding students through AI-driven change requires balanced, informed career planning. Careers services should embed AI literacy, collaborate with employers, and increase face-to-face support that builds human skills like creativity and communication. Less emphasis should be placed on one-size-fits-all advice and static labour market forecasting. Instead, the focus should be on active, student-centred approaches. Authentic interaction remains key to helping students navigate uncertainty with confidence and clarity.

    Source link

  • More Than Half the States Have Issued AI Guidance for Schools – The 74

    More Than Half the States Have Issued AI Guidance for Schools – The 74



    Agencies in at least 28 states and the District of Columbia have issued guidance on the use of artificial intelligence in K-12 schools.

    More than half of the states have created school policies to define artificial intelligence, develop best practices for using AI systems and more, according to a report from AI for Education, an advocacy group that provides AI literacy training for educators.

    Despite efforts by the Trump administration to loosen federal and state AI rules in hopes of boosting innovation, teachers and students need a lot of state-level guidance for navigating the fast-moving technology, said Amanda Bickerstaff, the CEO and co-founder of AI for Education.

    “What most people think about when it comes to AI adoption in the schools is academic integrity,” she said. “One of the biggest concerns that we’ve seen — and one of the reasons why there’s been a push towards AI guidance, both at the district and state level — is to provide some safety guidelines around responsible use and to create opportunities for people to know what is appropriate.”

    North Carolina, which last year became one of the first states to issue AI guidance for schools, set out to study and define generative artificial intelligence for potential uses in the classroom. The policy also includes resources for students and teachers interested in learning how to interact with AI models successfully.

    In addition to classroom guidance, some states emphasize ethical considerations for certain AI models. Following Georgia’s initial framework in January, the state shared additional guidance in June outlining ethical principles educators should consider before adopting the technology.

    This year, Maine, Missouri, Nevada and New Mexico also released guidelines for AI in schools.

    In the absence of regulations at the federal level, states are filling a critical gap, said Maddy Dwyer, a policy analyst for the Equity in Civic Technology team at the Center for Democracy & Technology, a nonprofit working to advance civil rights in the digital age.

    While most state AI guidance for schools focuses on the potential benefits, risks and need for human oversight, Dwyer wrote in a recent blog post that many of the frameworks are missing out on critical AI topics, such as community engagement and deepfakes, or manipulated photos and videos.

    “I think that states being able to fill the gap that is currently there is a critical piece to making sure that the use of AI is serving kids and their needs, and enhancing their educational experiences rather than detracting from them,” she said.

    Stateline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger for questions: [email protected].



    Source link

  • AI and Art Collide in This Engineering Course That Puts Human Creativity First – The 74

    AI and Art Collide in This Engineering Course That Puts Human Creativity First – The 74

    I see many students viewing artificial intelligence as humanlike simply because it can write essays, do complex math or answer questions. AI can mimic human behavior but lacks meaningful engagement with the world.

    This disconnect inspired my course “Art and Generative AI,” which was shaped by the ideas of 20th-century German philosopher Martin Heidegger. His work highlights how we are deeply connected and present in the world. We find meaning through action, care and relationships. Human creativity and mastery come from this intuitive connection with the world. Modern AI, by contrast, simulates intelligence by processing symbols and patterns without understanding or care.

    In this course, we reject the illusion that machines fully master everything and put student expression first. In doing so, we value uncertainty, mistakes and imperfection as essential to the creative process.

    This vision expands beyond the classroom. In the 2025-26 academic year, the course will include a new community-based learning collaboration with Atlanta’s art communities. Local artists will co-teach with me to integrate artistic practice and AI.

    The course builds on my 2018 class, Art and Geometry, which I co-taught with local artists. The course explored Picasso’s cubism, which depicted reality as fractured from multiple perspectives; it also looked at Einstein’s relativity, the idea that time and space are not absolute and distinct but part of the same fabric.

    What does the course explore?

    We begin with exploring the first mathematical model of a neuron, the perceptron. Then, we study the Hopfield network, which mimics how our brain can remember a song from just listening to a few notes by filling in the rest. Next, we look at Hinton’s Boltzmann Machine, a generative model that can also imagine and create new, similar songs. Finally, we study today’s deep neural networks and transformers, AI models that mimic how the brain learns to recognize images, speech or text. Transformers are especially well suited for understanding sentences and conversations, and they power technologies such as ChatGPT.
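    For readers who want a concrete picture of the course’s starting point, here is a minimal sketch of a perceptron – a single artificial neuron that learns by nudging its weights whenever it misclassifies an example. This is an illustrative example only; the dataset, learning rate and code are not drawn from the course materials.

    ```python
    import numpy as np

    def train_perceptron(X, y, lr=0.1, epochs=20):
        """Train a single perceptron with a step activation.
        X: (n_samples, n_features) inputs; y: labels in {0, 1}."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if xi @ w + b > 0 else 0   # step activation
                update = lr * (target - pred)       # zero when the prediction is correct
                w += update * xi                    # nudge weights toward the target
                b += update
        return w, b

    # Learn the logical AND function from four labelled examples.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y)
    print([int(xi @ w + b > 0) for xi in X])  # expected: [0, 0, 0, 1]
    ```

    Run on the four AND examples, the trained neuron reproduces the expected outputs – and that is roughly as far as a single perceptron can go, since it cannot learn patterns that are not linearly separable, which is one motivation for the deeper networks and transformers the course covers next.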

    In addition to AI, we integrate artistic practice into the coursework. This approach broadens students’ perspectives on science and engineering through the lens of an artist. The first offering of the course in spring 2025 was co-taught with Mark Leibert, an artist and professor of the practice at Georgia Tech. His expertise is in art, AI and digital technologies. He taught students fundamentals of various artistic media, including charcoal drawing and oil painting. Students used these principles to create art using AI ethically and creatively. They critically examined the source of training data and ensured that their work respects authorship and originality.

    Students also learn to record brain activity using electroencephalography – EEG – headsets. Through AI models, they then learn to transform neural signals into music, images and storytelling. This work inspired performances where dancers improvised in response to AI-generated music.

    The Improv AI performance at Georgia Institute of Technology on April 15, 2025. Dancers improvised to music generated by AI from brain waves and sonified black hole data.

    Why is this course relevant now?

    AI entered our lives so rapidly that many people don’t fully grasp how it works, why it works, when it fails or what its mission is.

    In creating this course, the aim is to empower students by filling that gap. Whether they are new to AI or not, the goal is to make its inner algorithms clear, approachable and honest. We focus on what these tools actually do and how they can go wrong.

We place students and their creativity first. We reject the illusion of a perfect machine; instead, we deliberately provoke the AI algorithm into confusion and hallucination, where it generates inaccurate or nonsensical responses. To do so, we use a small dataset, reduce the model size or limit training. It’s in these flawed states of AI that students step in as conscious co-creators. The students are the missing algorithm that takes back control of the creative process. Their creations do not obey AI but reimagine it by the human hand. The artwork is rescued from automation.

    What’s a critical lesson from the course?

    Students learn to recognize AI’s limitations and harness its failures to reclaim creative authorship. The artwork isn’t generated by AI, but it’s reimagined by students.

Students learn that chatbot queries have an environmental cost because large AI models use a lot of power. They avoid unnecessary iterations when designing prompts or using AI. This helps reduce carbon emissions.

    The Improv AI performance on April 15, 2025, featured dancer Bekah Crosby responding to AI-generated music from brain waves.

    The course prepares students to think like artists. Through abstraction and imagination they gain the confidence to tackle the engineering challenges of the 21st century. These include protecting the environment, building resilient cities and improving health.

    Students also realize that while AI has vast engineering and scientific applications, ethical implementation is crucial. Understanding the type and quality of training data that AI uses is essential. Without it, AI systems risk producing biased or flawed predictions.

    Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Source link

  • 60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week – The 74

    60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week – The 74



    Nearly two-thirds of teachers utilized artificial intelligence this past school year, and weekly users saved almost six hours of work per week, according to a recently released Gallup survey. But 28% of teachers still oppose AI tools in the classroom.

    The poll, published by the research firm and the Walton Family Foundation, includes perspectives from 2,232 U.S. public school teachers.

    “[The results] reflect a keen understanding on the part of teachers that this is a technology that is here, and it’s here to stay,” said Zach Hrynowski, a Gallup research director. “It’s never going to mean that students are always going to be taught by artificial intelligence and teachers are going to take a backseat. But I do like that they’re testing the waters and seeing how they can start integrating it and augmenting their teaching activities rather than replacing them.”

At least once a month, 37% of educators use AI tools to prepare to teach, including creating worksheets, modifying materials to meet student needs, doing administrative work and making assessments, the survey found. Less common uses include grading, providing one-on-one instruction and analyzing student data.

    A 2023 study from the RAND Corp. found the most common AI tools used by teachers include virtual learning platforms, like Google Classroom, and adaptive learning systems, like i-Ready or the Khan Academy. Educators also used chatbots, automated grading tools and lesson plan generators.

    Most teachers who use AI tools say they help improve the quality of their work, according to the Gallup survey. About 61% said they receive better insights about student learning or achievement data, while 57% said the tools help improve their grading and student feedback.

    Nearly 60% of teachers agreed that AI improves the accessibility of learning materials for students with disabilities. For example, some kids use text-to-speech devices or translators.

Teachers in the Gallup survey were more likely to agree on AI’s risks for students than on its opportunities. Roughly a third said students using AI tools weekly would increase their grades, motivation, preparation for future jobs and engagement in class. But 57% said it would decrease students’ independent thinking, and 52% said it would decrease critical thinking. Nearly half said it would decrease student persistence in solving problems, ability to build meaningful relationships and resilience for overcoming challenges.

    In 2023, the U.S. Department of Education published a report recommending the creation of standards to govern the use of AI.

    “Educators recognize that AI can automatically produce output that is inappropriate or wrong. They are well-aware of ‘teachable moments’ that a human teacher can address but are undetected or misunderstood by AI models,” the report said. “Everyone in education has a responsibility to harness the good to serve educational priorities while also protecting against the dangers that may arise as a result of AI being integrated in ed tech.”

    Researchers have found that AI education tools can be incorrect and biased — even scoring academic assignments lower for Asian students than for classmates of any other race.

    Hrynowski said teachers are seeking guidance from their schools about how they can use AI. While many are getting used to setting boundaries for their students, they don’t know in what capacity they can use AI tools to improve their jobs.

    The survey found that 19% of teachers are employed at schools with an AI policy. During the 2024-25 school year, 68% of those surveyed said they didn’t receive training on how to use AI tools. Roughly half of them taught themselves how to use it.

“There aren’t very many buildings or districts that are giving really clear instructions, and we kind of see that hindering the adoption and use among both students and teachers,” Hrynowski said. “We probably need to start looking at having a more systematic approach to laying down the ground rules and establishing where you can, can’t, should or should not, use AI in the classroom.”

    Disclosure: Walton Family Foundation provides financial support to The 74.



    Source link

  • College grad unemployment surges as employers replace new hires with AI (CBS News)

    College grad unemployment surges as employers replace new hires with AI (CBS News)

    The unemployment rate for new college graduates has recently surged. Economists say businesses are now replacing entry-level jobs with artificial intelligence.

     

    Source link

  • A Twilight Zone Warning for the Trump Era and the Age of AI

    A Twilight Zone Warning for the Trump Era and the Age of AI

    Rod Serling’s classic 1961 episode of The Twilight Zone, “The Obsolete Man,” offers a timeless meditation on authoritarianism, conformity, and the erasure of humanity. In it, a quiet librarian, Romney Wordsworth (played by Burgess Meredith), is deemed “obsolete” by a dystopian state for believing in books and God—symbols of individual thought and spiritual meaning. Condemned by a totalitarian chancellor and scheduled for execution, Wordsworth calmly exposes the cruelty and contradictions of the regime, ultimately reclaiming his dignity by refusing to bow to tyranny.

    Over 60 years later, “The Obsolete Man” feels less like fiction and more like a documentary. The Trump era, supercharged by the rise of artificial intelligence and a war on truth, has brought Serling’s chilling parable into sharper focus.

    The Authoritarian Impulse

    President Donald Trump’s presidency—and his ongoing influence—has been marked by a deep antagonism toward democratic institutions, intellectual life, and perceived “elites.” Journalists were labeled “enemies of the people.” Scientists and educators were dismissed or silenced. Books were banned in schools and libraries, and curricula were stripped of “controversial” topics like systemic racism or gender identity.

    Like the chancellor in The Obsolete Man, Trump and his allies seek not just to discredit dissenters but to erase their very legitimacy. In this worldview, librarians, teachers, and independent thinkers are expendable. What matters is loyalty to the regime, conformity to its ideology, and performance of power.

    Wordsworth’s crime—being a librarian and a believer—is mirrored in real-life purges of professionals deemed out of step with a hardline political agenda. Public educators and college faculty who challenge reactionary narratives have been targeted by state legislatures, right-wing activists, and billionaire-backed think tanks. In higher education, departments of the humanities are being defunded or eliminated entirely. Faculty governance is undermined. The university, once a space for critical inquiry, is increasingly treated as an instrument for ideological control—or as a business to be stripped for parts.

    The Age of AI and the Erasure of the Human

    While authoritarianism silences the human spirit, artificial intelligence threatens to replace it. AI tools, now embedded in everything from hiring algorithms to classroom assessments, are reshaping how knowledge is produced, disseminated, and controlled. In the rush to adopt these technologies, questions about ethics, bias, and human purpose are often sidelined.

    AI systems do not “believe” in anything. They do not feel awe, doubt, or moral anguish. They calculate, replicate, and optimize. In the hands of authoritarian regimes or profit-driven institutions, AI becomes a tool not of liberation, but of surveillance, censorship, and disposability. Workers are replaced. Students are reduced to data points. Librarians—like Wordsworth—are no longer needed in a world where books are digitized and curated by opaque algorithms.

    This is not merely a future problem. It’s here. Algorithms already determine who gets hired, who receives financial aid, and which students are flagged as “at risk.” Predictive policing, automated grading, and AI-generated textbooks are not the stuff of science fiction. They are reality. And those who question their fairness or legitimacy risk being labeled as backwards, inefficient—obsolete.

    A Culture of Disposability

    At the heart of “The Obsolete Man” is a question about value: Who decides what is worth keeping? In Trump’s America and in the AI-driven economy, people are judged by their utility to the system. If you’re not producing profit, performing loyalty, or conforming to power, you can be cast aside.

    This is especially true for the working class, contingent academics, and the so-called “educated underclass”—a growing population of debt-laden degree holders trapped in precarious jobs or no jobs at all. Their degrees are now questioned, their labor devalued, and their futures uncertain. They are told that if they can’t “pivot” or “reskill” for the next technological shift, they too may be obsolete.

    The echoes of The Twilight Zone are deafening.

    Resistance and Redemption

    Yet, as Wordsworth demonstrates in his final moments, resistance is possible. Dignity lies in refusing to surrender the soul to the machine—or the regime. In his quiet defiance, Wordsworth forces the chancellor to confront his own cowardice, exposing the hollow cruelty of the system.

    In our time, that resistance takes many forms: educators who continue to teach truth despite political pressure; librarians who fight book bans; whistleblowers who challenge surveillance technologies; and students who organize for justice. These acts of courage and conscience remind us that obsolescence is not a matter of utility—it’s a judgment imposed by those in power, and it can be rejected.

    Rod Serling ended his episode with a reminder: “Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of man—that state is obsolete.”

    The question now is whether we will heed the warning. In an age where authoritarianism and AI threaten to render us all obsolete, will we remember what it means to be human?


    The Higher Education Inquirer welcomes responses and reflections on how pop culture can illuminate our present crises. Contact us with your thoughts or your own essay proposals.

    Source link

  • USyd responds to student concerns about ‘two-lane’ AI policy – Campus Review

    USyd responds to student concerns about ‘two-lane’ AI policy – Campus Review

    The university arguably leading the sector in its use of artificial intelligence (AI) in assessment tasks has received criticism from some students who have complained they lost marks for not using AI in a test.


    Source link

  • What does it mean if students think that AI is more intelligent than they are?

    What does it mean if students think that AI is more intelligent than they are?

    The past couple of years in higher education have been dominated by discussions of generative AI – how to detect it, how to prevent cheating, how to adapt assessment. But we are missing something more fundamental.

    AI isn’t just changing how students approach their work – it’s changing how they see themselves. If universities fail to address this, they risk producing graduates who lack both the knowledge and the confidence to succeed in employment and society. Consequently, the value of a higher education degree will diminish.

    In November, a first-year student asked me if ChatGPT could write their assignment. When I said no, they replied: “But AI is more intelligent than me.” That comment has stayed with me ever since.

If students no longer trust their own ability to contribute to discussions or produce work of value, the implications stretch far beyond academic misconduct. Confidence affects motivation, resilience and self-belief, which in turn shape sense of community, assessment grades, and graduate skills.

    I have noticed that few discussions focus on the deeper psychological shift – students’ changing perceptions of their own intelligence and capability. This change is a key antecedent for the erosion of a sense of community, AI use in learning and assessment, and the underdevelopment of graduate skills.

    The erosion of a sense of community

    In 2015 when I began teaching, I would walk into a seminar room and find students talking to one another about how worried they were for the deadline, how boring the lecture was, or how many drinks they had Wednesday night. Yes, they would sit at the back, not always do the pre-reading, and go quiet for the first few weeks when I asked a question – but they were always happy to talk to one another.

Fast forward to 2025: campus feels empty, and students come into class and sit alone. Even final-year students who have been together for three years may sit with a “friend” but not really say anything as they stare at their phones. I have a final-year student who is achieving first-class grades but admits he has not been in the library once this academic year and barely knows anyone to talk to. This may not seem like a big thing, but it illustrates the lack of community and relationships formed at university. It is well known that peer-to-peer relationships are one of the biggest influences on attendance and engagement. So when students fail to form networks, it is unsurprising that motivation declines.

While professional services, the students’ union, and support staff continuously offer ways to improve the community, at a time when students are working longer hours through a cost of living crisis we cannot expect them to attend extracurricular academic or non-academic activities. Therefore, timetabled lectures and seminars need to be at the heart of building relationships.

    AI in learning and assessment

    While marking first-year marketing assignments – a subject I’ve taught across multiple universities for a decade – I noticed a clear shift. Typically, I expect a broad range of marks, but this year, students clustered at two extremes: either very high or alarmingly low. The feedback was strikingly similar: “too vague,” “too descriptive,” “missing taught content.”

    I knew some of these students were engaged and capable in class, yet their assignments told a different story. I kept returning to that student’s remark and realised: the students who normally land in the middle – your solid 2:2 and 2:1 cohort – had turned to AI. Not necessarily to cheat, but because they lacked confidence in their own ability. They believed AI could articulate their ideas better than they could.

    The rapid integration of AI into education isn’t just changing what students do – it’s changing what they believe they can do. If students don’t think they can write as well as a machine, how can we expect them to take intellectual risks, engage critically, or develop the resilience needed for the workplace?

    Right now, universities are at a crossroads. We can either design assessments as if nothing has changed, pivot back to closed-book exams to preserve “authentic” academic work, or restructure assessment to empower students, build confidence, and provide something of real value to both learners and employers. Only the third option moves higher education forward.

    Deakin University’s Phillip Dawson has recently argued that we must ensure assessment measures what we actually intend to assess. His point resonated with me.

    AI is here to stay, and it can enhance learning and productivity. Instead of treating it primarily as a threat or retreating to closed-book exams, we need to ask: what do we really need to assess? For years, we have moved away from exams because they don’t reflect real-world skills or accurately measure understanding. That reasoning still holds, but the assessment landscape is shifting again. Instead of focusing on how students write about knowledge, we should be assessing how they apply it.

    Underdevelopment of graduate skills

    If we don’t rethink pedagogy and assessment, we risk producing graduates who are highly skilled at facilitating AI rather than using it as a tool for deeper analysis, problem-solving, and creativity. Employers are already telling us they need graduates who can analyse and interpret data, think critically to solve problems, communicate effectively, show resilience and adaptability, demonstrate emotional intelligence, and work collaboratively.

    But students can’t develop these skills if they don’t believe in their own ability.

Right now, students are using AI tools for most activities, including online searching, proofreading, answering questions, generating examples, and even writing reflective pieces. I am confident that if I asked first years to write a two-minute speech about why they came to university, the majority would use AI in some way. There is no space – or incentive – for them to illustrate their skill development.

    This semester, I trialled a small intervention after getting fed up with looking at heads down in laptops. I asked my final year students to put laptops and phones on the floor for the first two hours of a four-hour workshop.

    At first, they were visibly uncomfortable – some looked panicked, others bored. But after ten minutes, something changed. They wrote more, spoke more confidently, and showed greater creativity. As soon as they returned to technology, their expressions became blank again. This isn’t about banning AI, but about ensuring students have fun learning and have space to be thinkers, rather than facilitators.

    Confidence-building

    If students’ lack of confidence is driving them to rely on AI to “play it safe”, we need to acknowledge the systemic problem. Confidence is an academic issue. Confidence underpins everything in the student’s experience: classroom engagement, sense of belonging, motivation, resilience, critical thinking, and, of course, assessment quality. Universities know this, investing in mentorship schemes, support services, and initiatives to foster belonging. But confidence-building cannot be left to professional services alone – it must be embedded into curriculum design and assessment.

Don’t get me wrong, I am fully aware of the pressures on academic staff, and telling them to improve sense of community, assessment, and graduate skills feels like another time-consuming task. Universities need to recognise that without improving workload planning models to give academics the freedom to focus on and explore pedagogic approaches, we fall into the trap of devaluing the degree.

In addition, if universities want to stay relevant, they need agile structures that allow academics to test new approaches and respond quickly, just like the “real world”. Academics should not be creating or modifying assessments today that won’t be implemented for another 18 months. Policies designed to ensure quality must also ensure adaptability. Otherwise, higher education will always be playing catch-up – first with AI, then with whatever comes next.

    Will universities continue producing AI-dependent graduates, or will they equip students with the confidence to lead in an AI-driven world?

    Source link