Category: Technology

  • Experts react to artificial intelligence plan – Campus Review

    Australia’s first national plan for artificial intelligence aims to upskill workers to boost productivity, but will leave the technology largely unregulated, with no dedicated legislation governing it.


  • Gender governance and the global grammar of illiberal inclusion

    by Ourania Filippakou

    Across global higher education, the terms of justice, equality and inclusion are being rewritten. In recent years, the rollback of diversity, equity and inclusion (DEI) initiatives in the United States (Spitalniak, 2025) has unfolded alongside a global resurgence of anti-gender, ultra-nationalist, racialised and colonial politics (Brechenmacher, 2025). At the same time, the rise of authoritarian and far-right ideologies, together with deepening socioeconomic inequalities fuelled by an ascendant billionaire class (Klein and Taylor, 2025) and the growing portrayal of feminist and queer scholarship as ideological extremism (Pitts-Taylor and Wood, 2025), signal a profound shift in the rationalities shaping the politics of higher education. These developments do not reject inclusion; they refashion it. Equality becomes excess, dissent is recast as disorder, and inclusion is reconstituted as a technology of governance.

    This conjuncture, what Stuart Hall (Hall in Hall and Massey, 2010, p57) would call the alignment of economic, political and cultural forces, requires a vocabulary capable of capturing continuity and rupture. It also reflects the deepening crisis of neoliberalism, whose governing logics become more coercive as their legitimacy wanes (Beckert, 2025; Menand, 2023). As Hall reminds us, ‘a conjuncture is a period when different social, political, economic and ideological contradictions… or as Althusser said ‘fuse in a ruptural unity’’ (Hall in Hall and Massey, 2012, p57). A conjuncture, in this sense, does not resolve crisis but produces new configurations of ideological coherence and institutional control. In my recent article, ‘Managed Inclusion and the Politics of Erasure: Gender Governance in Higher Education under Neoliberal Authoritarianism’ (Review of Education, Pedagogy & Cultural Studies, 2025), I theorise these developments as a global grammar of illiberal inclusion: a political rationality that appropriates the language of equity while disabling its redistributive, democratic and epistemic force. The article develops a typology of symbolic, technocratic and transformative inclusion to examine how feminist, anti-caste and critical vocabularies are increasingly absorbed into systems of civility, visibility and procedural control. Transformative inclusion, the configuration most aligned with redistribution, dissent and epistemic plurality, is the one most forcefully neutralised.

    Across geopolitical contexts, from postcolonial states to liberal democracies, gender inclusion is increasingly appropriated not as a demand for justice but as a mechanism of control. The techniques of co-option vary, yet they consolidate into a shared political rationality in which equity is stripped of redistributive force and redeployed to affirm institutional legitimacy, nationalist virtue and market competitiveness. This is not a rupture with neoliberal governance but its intensification through more disciplinary and exclusionary forms. For example, in India, the National Education Policy 2020 invokes empowerment while enacting epistemic erasure, systematically marginalising the knowledges of women from subordinated caste, class and religious communities (Peerzada et al, 2024; Patil, 2023; Singh, 2023). At the same time, state-led campaigns such as Beti Bachao elevate women’s visibility only within ideals of modesty and nationalist virtue (Chhachhi, 2020). In Hungary, the 2018 ban on gender studies aligned higher education with labour-market imperatives and nationalist agendas (Barát, 2022; Zsubori, 2018). In Turkey, reforms under Erdoğan consolidate patriarchal norms while constraining feminist organising (Zihnioğlu and Kourou, 2025). Here, gender inclusion is tolerated only when it reinforces state agendas and restricts dissent.

    Elsewhere, inclusion is recast as ideological deviance. In the United States, the Trump-era rollback of DEI initiatives and reproductive rights has weaponised inclusion as a spectre of radicalism, disproportionately targeting racialised and LGBTQ+ communities (Amnesty International, 2024; Chao-Fong, 2025). In Argentina, Milei abolished the Ministry of Women, describing feminism as fiscally irresponsible (James, 2024). In Italy, Meloni’s government invokes ‘traditional values’ to erode anti-discrimination frameworks (De Giorgi et al, 2023). In these cases, inclusion is not merely neutralised but actively vilified, its political charge reframed as cultural threat.

    Even when inclusion is celebrated, it is tethered to respectability and moral legibility. In France, femonationalist discourses instrumentalise gender equality to legitimise anti-Muslim policy (Farris, 2012; Möser, 2022). In Greece, conservative statecraft reframes inclusion through familialist narratives while dismantling equality infrastructures (Bempeza, 2025). These patterns reflect a longer political repertoire in which authoritarian and ultra-nationalist projects mobilise idealised domestic femininity to naturalise social hierarchies. As historian Diana Garvin (Garvin quoted in Matei, 2025) notes, ‘what fascisms old and new have in common is they tend to look to women to fill in the gaps that the state misses’, with contemporary ‘womanosphere’ influencers in the US reviving fantasies of domestic bliss that obscure intensified gendered precarity (Matei, 2025).

    Such gendered constructions coexist with escalating violence. More than 50,000 women and girls were killed by intimate partners or family members in 2024, which means one woman or girl was killed every ten minutes, or 137 every day, according to the latest UNODC and UN Women femicide report (UNODC/UN Women, 2025). This sits within a wider continuum of harm: 83,000 women and girls were intentionally killed last year, and the report finds no sign of real progress. It also highlights a steep rise in digital violence, including harassment, stalking, gendered disinformation and deepfakes, which increasingly spills into offline contexts and contributes to more lethal forms of harm. These global patterns intersect with regional crises. For example, more than 7,000 women were killed in India in gender-related violence in 2022 (NCRB, 2023); eleven women are murdered daily in femicides across Latin America (NU CEPAL, 2024). At the same time, masculinist influencers such as Andrew Tate cultivate transnational publics organised around misogyny (Adams, 2025; Wescott et al, 2024). As UN Secretary-General António Guterres (2025) warns: ‘Instead of mainstreaming equal rights, we are seeing the mainstreaming of misogyny’.

    These global pressures reverberate across institutions that have historically positioned themselves as democratic spaces, including universities, which increasingly recast gender equity as a reputational risk or cultural flashpoint rather than a democratic obligation (D’Angelo et al, 2024; McEwen and Narayanaswamy, 2023). Equity becomes an emblem of modernity to be audited, displayed and curated, rather than a demand for justice. Ahmed’s (2012) theorisation of non-performativity is essential here: institutions declare commitments to equality precisely to contain the transformations such commitments would require. In this context, symbolic and technocratic inclusion flourish, while the structural conditions for transformative inclusion continue to narrow.

    These shifts reflect broader political and economic formations. Brown (2015) shows how neoliberal reason converts justice claims into performance demands, hollowing out democratic vocabularies. Fraser’s (2017) account of ‘progressive neoliberalism’ illuminates the terrain in which market liberalism coupled with selective diversity politics absorbs emancipatory discourse while preserving inequality. Patnaik (2021) argues that the rise of neofascism is a political necessity for neoliberalism in crisis, as rights are redefined as privileges and inclusion is repurposed to stabilise inequality. In this conjuncture, these tendencies intensify into what Giroux (2018, 2021, 2022a) names ‘neoliberal fascism’, a formation structured by three interlocking fundamentalisms: a market fundamentalism that commodifies all aspects of life; a religious fundamentalism that moralises inequality; and a regime of manufactured ignorance and militarised illiteracy that discredits critical thought and erases historical memory (Giroux 2022b, p48-54).

    The United States now offers a further manifestation of this global pattern, illustrating how attacks on DEI can function as a broader assault on higher education. As recent analyses of US politics show, the first and particularly the second Trump administration is actively modelling itself on Viktor Orbán’s illiberal statecraft, centralising executive power, purging public institutions and mobilising ‘family values’ and anti-‘woke’ politics to reshape education and media governance (Giroux, 2017; Smith, 2025; Kauffmann, 2025). The dismantling of DEI under the Trump administration, framed as a defence of merit, free speech and fiscal responsibility (The White House, 2025), marks the beginning of a wider attempt to consolidate political influence over higher education. Executive orders targeting DEI have been followed by lawsuits, funding withdrawals and intensified federal scrutiny, prompting universities such as Michigan, Columbia and Chicago to scale back equality infrastructures, cut programmes and reduce humanities provision (cf Bleiler, 2025; Pickering, Cosgrove and Massel, 2025; Quinn, 2025). These developments do not simply eliminate DEI; they position anti-gender politics as a mechanism of disciplining universities, narrowing intellectual autonomy and extending political control over academic life. They exemplify wider global tendencies in which inclusion becomes a field through which illiberal projects consolidate authority. The assault on DEI is thus not a uniquely American phenomenon but part of a broader authoritarian turn in which inclusion is recoded to stabilise, rather than challenge, existing power.

    Understanding gender governance in higher education through this conjunctural lens reveals not merely the erosion of equity but the emergence of a political formation that reconfigures inclusion into an apparatus of civility, visibility and administrative control. These tendencies are not aberrations but expressions of a larger global grammar that binds emancipatory rhetoric to authoritarian-neoliberal governance. The result is not the dilution of equality but its rearrangement as a practice of containment.

    The implications for the sector are profound. If inclusion is increasingly reorganised through metrics, decorum and procedural compliance, then reclaiming its democratic potential requires an epistemic and institutional shift. Inclusion needs to be understood not as a reputational asset but as a commitment to justice, redistribution and collective struggle. This means recovering equality as political and pedagogical labour: the work of confronting injustice, protecting dissent and renewing the public imagination. Academic freedom and equality are inseparable: without equality, freedom becomes privilege; without freedom, equality becomes performance.

    As Angela Davis (Davis quoted in Gerges, 2023) reminds us: ‘Diversity without structural transformation simply brings those who were previously excluded into a system as racist and misogynist as it was before… There can be no diversity and inclusion without transformation and justice.’ And as Henry Giroux (2025) argues, democracy depends on how societies fight over language, memory and possibility. That struggle now runs through the university itself, shaping its governance, its epistemic life and the courage to imagine more just and democratic possibilities.

    Ourania Filippakou is a Professor of Education at Brunel University of London. Her research interrogates the politics of higher education, examining universities as contested spaces where power, inequality, and resistance intersect. Rooted in critical traditions, she explores how higher education can foster social justice, equity, and transformative change.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

  • 3 reasons to switch to virtual set design

    If you’ve attended a professional show or musical recently, chances are you’ve seen virtual set design in action. This approach to stage production has gained so much traction it’s now a staple in the industry. After gaining momentum in professional theater, it has made its way into collegiate performing arts programs and is now emerging in K-12 productions as well.

    Virtual set design offers a modern alternative to traditional physical stage sets, using technology and software to create immersive backdrops and environments. This approach unlocks endless creative possibilities for schools while also providing practical advantages.

    Here, I’ll delve into three key benefits: increasing student engagement and participation, improving efficiency and flexibility in productions, and expanding educational opportunities.

    Increasing student engagement and participation

    Incorporating virtual set design into productions gets students excited about learning new skills while enhancing the storytelling of a show. When I first joined Churchill High School in Livonia, Michigan, as the performing arts manager, the first show we did was Shrek the Musical, and I knew it would require an elaborate set. While students usually work together to paint the various backdrops that bring the show to life, I wanted to introduce them to collaborating on virtual set design.

    We set up Epson projectors on the fly rail and used them to project images as the show’s backdrops. Positioned at a short angle, the projectors avoided any shadowing on stage. To create a seamless image with both projectors, we utilized edge-blending and projection mapping techniques using just a Mac® laptop and QLab software. Throughout the performance, the projectors transformed the stage with a dozen dynamic backdrops, shifting from a swamp to a castle to a dungeon.
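
    For readers curious about the underlying technique, edge blending is essentially a cross-fade across the zone where the two projected images overlap, applied in gamma-corrected light space so brightness stays even across the seam. Here is a minimal sketch of that idea; the 2.2 gamma value is a common assumption rather than a measured figure for any particular projector, and QLab computes all of this internally:

```python
# Sketch of the cross-fade behind a two-projector edge blend.
# Assumption: display gamma ~2.2 (a common default, not a measured
# value for any particular projector); QLab handles this internally.

def blend_weights(x: float, gamma: float = 2.2) -> tuple[float, float]:
    """Return (left, right) signal weights at position x across the
    overlap zone, where x runs from 0.0 (left edge) to 1.0 (right edge).

    The cross-fade is linear in light output, then converted back to
    signal space, so combined brightness stays even across the seam."""
    left_light = 1.0 - x    # left projector's share of the light
    right_light = x         # right projector's share of the light
    return (left_light ** (1.0 / gamma), right_light ** (1.0 / gamma))

# At the seam's midpoint each projector contributes half the light:
l, r = blend_weights(0.5)
assert abs(l ** 2.2 + r ** 2.2 - 1.0) < 1e-9
```

    Because the signal-space weights at the midpoint come out around 0.73 rather than 0.5, a naive linear fade would leave a visible bright band in the overlap; the gamma correction is what makes the seam disappear.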

    Students were amazed by the technology and very excited to learn how to integrate it into the set design process. Their enthusiasm created a real buzz around the production, and the community’s feedback on the final results was overwhelmingly positive.

    Improving efficiency and flexibility

    During Shrek the Musical, there were immediate benefits that made it so much easier to put together a show. To start, we saved money by eliminating the need to build multiple physical sets. While we were cutting costs on lumber and materials, we were also solving design challenges and expanding what was possible on stage.

    This approach also saved us valuable time. Preparing the sets in the weeks leading up to the show was faster, and transitions during performances became seamless. Instead of moving bulky scenery between scenes or acts, the stage crew simply switched out projected images, making scene changes much more efficient.

    We saw even more advantages in our spring production of She Kills Monsters. Some battle scenes called for 20 or 30 actors to be on stage at once, which would have been difficult to manage with a traditional set. By using virtual production, we broke the stage up with different panels spaced apart and projected designs, creating more space for performers. We were able to save physical space, as well as create a design that helped with stage blocking and made it easier for students to find their spots.

    Since using virtual sets, our productions have become smoother, more efficient, and more creative.

    Expanding educational opportunities

    Beyond the practical benefits, virtual set design also creates valuable learning opportunities for students. Students involved in productions gain exposure to industry-level technology and learn about careers in the arts, audio, and video technology fields. Introducing students to these opportunities before graduating high school can really help prepare them for future success.

    Additionally, in our school’s technical theater courses, students are learning lessons on virtual design and gaining hands-on experience. As they learn about potential career paths, they are developing collaboration and other transferable skills that connect directly to college and career readiness.

    Looking ahead with virtual set design

    Whether students are interested in graphic design, sound engineering, or visual technology, virtual production brings countless opportunities for them to explore. It allows them to experiment with tools and concepts that connect directly to potential college majors or future careers.

    For schools, incorporating virtual production into high school theater offers more than just impressive shows. It provides a cost-effective, flexible, and innovative approach to storytelling. It is a powerful tool that benefits productions, enriches student learning, and prepares the next generation of artists and innovators.


  • Resilient learning begins with Zero Trust and cyber preparedness

    The U.K.’s Information Commissioner’s Office (ICO) recently warned of a surge in cyberattacks from “insider threats”–student hackers motivated by dares and challenges–leading to breaches across schools. While this trend is unfolding overseas, it underscores a risk that is just as real for the U.S. education sector. Every day, teachers and students here in the U.S. access enormous volumes of sensitive information, creating opportunities for both mistakes and deliberate misuse. These vulnerabilities are further amplified by resource constraints and the growing sophistication of cyberattacks.

    When schools fall victim to a cyberattack, the disruption extends far beyond academics. Students may also lose access to meals, safe spaces, and support services that families depend on every day. Cyberattacks are no longer isolated IT problems–they are operational risks that threaten entire communities.

    In today’s post-breach world, the challenge is not whether an attack will occur, but when. The risks are real. According to a recent study, desktops and laptops remain the most compromised devices (50 percent), with phishing and Remote Desktop Protocol (RDP) cited as top entry points for ransomware. Once inside, most attacks spread laterally across networks to infect other devices. In over half of these cases (52 percent), attackers exploited unpatched systems to move laterally and escalate system privileges.

    That reality demands moving beyond traditional perimeter defenses to strategies that contain and minimize damage once a breach occurs. With the school year underway, districts must adopt strategies that proactively manage risk and minimize disruption. This starts with an “assume breach” mindset–accepting that prevention alone is not enough. From there, applying Zero Trust principles, clearly defining the ‘protect surface’ (i.e. identifying what needs protection), and reinforcing strong cyber hygiene become essential next steps. Together, these strategies create layered resilience, ensuring that even if attackers gain entry, their ability to move laterally and cause widespread harm is significantly reduced.

    Assume breach: Shifting from prevention to resilience

    Even in districts with limited staff and funding, schools can take important steps toward stronger security. The first step is adopting an assume breach mindset, which shifts the focus from preventing every attack to ensuring resilience when one occurs. This approach acknowledges that attackers may already have access to parts of the network and reframes the question from “How do we keep them out?” to “How do we contain them once they are in?” or “How do we minimize the damage once they are in?”

    An assume breach mindset emphasizes strengthening internal defenses so that breaches don’t become cyber disasters. It prioritizes safeguarding sensitive data, detecting anomalies quickly, and enabling rapid responses that keep classrooms open even during an active incident.

    Zero Trust and seatbelts: Both bracing for the worst

    Zero Trust builds directly on the assume breach mindset with its guiding principle of “never trust, always verify.” Unlike traditional security models that rely on perimeter defenses, Zero Trust continuously verifies every user, device, and connection, whether internal or external.

    Schools often function as open transit hubs, offering broad internet access to students and staff. In these environments, once malware finds its way in, it can spread quickly if unchecked. Perimeter-only defenses leave too many blind spots and do little to stop insider threats. Zero Trust closes those gaps by treating every request as potentially hostile and requiring ongoing verification at every step.

    A fundamental truth of Zero Trust is that cyberattacks will happen. That means building controls that don’t just alert us but act–before and during a network intrusion. The critical step is containment: limiting damage the moment a breach is successful.  

    Assume breach accepts that a breach will happen, and Zero Trust ensures it doesn’t become a disaster that shuts down operations. The seatbelt analogy applies here: strong brakes are essential, but seatbelts and airbags minimize the harm when prevention fails. Zero Trust works the same way, containing threats and limiting damage so that even if an attacker gets in, they can’t turn an incident into a full-scale disaster.

    Zero Trust does not require an overnight overhaul. Schools can start by defining their protect surface – the vital data, systems, and operations that matter most. This typically includes Social Security numbers, financial data, and administrative services that keep classrooms functioning. By securing this protect surface first, districts reduce the complexity of Zero Trust implementation, allowing them to focus their limited resources on where they are needed most.

    With this approach, Zero Trust policies can be layered gradually across systems, making adoption realistic for districts of any size. Instead of treating it as a massive, one-time overhaul, IT leaders can approach Zero Trust as an ongoing journey–a process of steadily improving security and resilience over time. By tightening access controls, verifying every connection, and isolating threats early, schools can contain incidents before they escalate, all without rebuilding their entire network in one sweep.  
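
    As a concrete sketch of what “never trust, always verify” against a defined protect surface can look like, consider the following toy policy check. The roles, asset names, and device checks are illustrative assumptions for a hypothetical district, not a real product’s API:

```python
# Toy "never trust, always verify" check for a hypothetical school
# district. The protect surface, roles, and posture checks below are
# illustrative assumptions, not a real product's configuration.

PROTECT_SURFACE = {"student_records", "payroll", "sis_admin"}

# Which roles may touch which protect-surface assets.
ACCESS_POLICY = {
    "registrar": {"student_records"},
    "finance":   {"payroll"},
    "it_admin":  {"student_records", "payroll", "sis_admin"},
}

def authorize(user_role: str, resource: str, mfa_passed: bool,
              device_patched: bool) -> bool:
    """Every request is verified: identity (role), a second factor,
    and device posture. Assets outside the protect surface still
    require MFA, but not per-asset policy."""
    if not mfa_passed:
        return False          # never trust a bare password
    if resource in PROTECT_SURFACE:
        if not device_patched:
            return False      # unpatched devices enable lateral movement
        return resource in ACCESS_POLICY.get(user_role, set())
    return True               # low-value resource: MFA alone suffices

# A registrar on a patched laptop with MFA can reach student records...
assert authorize("registrar", "student_records", True, True)
# ...but the same credentials cannot reach payroll (containment).
assert not authorize("registrar", "payroll", True, True)
```

    The point of the sketch is the containment step: even a fully authenticated registrar cannot move laterally into payroll, which is exactly the lateral movement the breach statistics above describe.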

    Cyber awareness starts in the classroom

    Technology alone isn’t enough. Because some insider threats stem from student curiosity or misuse, cyber awareness must start in classrooms. Integrating security education into the learning environment ensures students and staff understand their role in protecting sensitive information. Training should cover phishing awareness, strong password practices, the use of multifactor authentication (MFA), and the importance of keeping systems patched.

    Building cyber awareness does not require costly programs. Short, recurring training sessions for students and staff keep security top of mind and help build a culture of vigilance that reduces both accidental and intentional insider threats.

    Breaches are inevitable, but disasters are optional

    Breaches are inevitable. Disasters are not. The difference lies in preparation. For resource-strapped districts, stronger cybersecurity doesn’t require sweeping overhauls. It requires a shift in mindset:

    • Assume breach
    • Define the protect surface
    • Implement Zero Trust in phases
    • Instill cyber hygiene

    When schools take this approach, cyberattacks become manageable incidents. Classrooms remain open, students continue learning, and communities continue receiving the vital support schools provide – even in the face of disruption. Like seatbelts in a car, these measures won’t prevent every crash – but they ensure schools can continue to function even when prevention fails.


  • AI is unlocking insights from PTES to drive enhancement of the PGT experience faster than ever before

    If, like me, you grew up watching Looney Tunes cartoons, you may remember Yosemite Sam’s popular phrase, “There’s gold in them thar hills.”

    In surveys, as in gold mining, the greatest riches are often hidden and difficult to extract. This principle is perhaps especially true when institutions are seeking to enhance the postgraduate taught (PGT) student experience.

    PGT students are far more than an extension of the undergraduate community; they represent a crucial, diverse and financially significant segment of the student body. Yet, despite their growing numbers and increasing strategic importance, PGT students, as Kelly Edmunds and Kate Strudwick have recently pointed out on Wonkhe, remain largely invisible in both published research and core institutional strategy.

    Advance HE’s Postgraduate Taught Experience Survey (PTES) is therefore one of the few critical sources of insight we have into the PGT experience. But while the quantitative results offer a (usually fairly consistent) high-level view, the real intelligence required to drive meaningful enhancement inside higher education institutions is buried deep within the thousands of open-text comments collected. Faced with the sheer volume of data, the choice is between eyeballing comments, with the inevitable introduction of human bias, and laborious, time-consuming manual coding. The challenge for the institutions participating in PTES this year isn’t the lack of data: it’s efficiently and reliably turning that dense, often contradictory, qualitative data into actionable, ethical, and equitable insights.

    AI to the rescue

    The application of machine learning to the analysis of qualitative student survey data presents us with a generational opportunity to amplify the student voice. The critical question is not whether AI should be used, but how to ensure its use meets robust and ethical standards. For that you need the right process – and the right partner – to prioritise analytical substance, comprehensiveness, and sector-specific nuance.

    UK HE training is non-negotiable. AI models must be deeply trained on a vast corpus of UK HE student comments. Without this sector-specific training, analysis will fail to accurately interpret the nuances of student language, sector jargon, and UK-specific feedback patterns.

    Analysis must rely on a categorisation structure that has been developed and refined against multiple years of PTES data. This continuity ensures that the thematic framework reflects the nuances of the PGT experience.

    To drive targeted enhancement, the model must break down feedback into highly granular sub-themes – moving far beyond simplistic buckets – ensuring staff can pinpoint the exact issue, whether it falls under learning resources, assessment feedback, or thesis supervision.
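
    To illustrate the difference between coarse buckets and granular sub-themes, a toy keyword tagger might look like the sketch below. The real PTES analysis uses a model trained on UK HE comments; the themes, sub-themes, and keywords here are invented for the illustration:

```python
# Toy illustration of granular sub-theme tagging. The real PTES
# analysis uses a sector-trained model; these theme names and
# keywords are invented for the sketch.

SUB_THEMES = {
    ("assessment", "feedback timeliness"): ["late feedback", "waiting for marks"],
    ("assessment", "feedback quality"):    ["vague feedback", "unclear comments"],
    ("resources", "library access"):       ["library", "e-books"],
    ("supervision", "thesis supervision"): ["supervisor", "dissertation meetings"],
}

def tag_comment(comment: str) -> list[tuple[str, str]]:
    """Return every (theme, sub-theme) pair whose keywords appear,
    so one comment can surface several distinct, actionable issues."""
    text = comment.lower()
    return [pair for pair, keywords in SUB_THEMES.items()
            if any(k in text for k in keywords)]

tags = tag_comment("My supervisor is great but I'm still waiting for marks.")
assert ("supervision", "thesis supervision") in tags
assert ("assessment", "feedback timeliness") in tags
```

    Even this crude version shows why granularity matters: a single comment yields two separate, actionable signals, where a one-bucket “assessment” label would have hidden the timeliness issue.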

    The analysis must be more than a static report. It must be delivered through integrated dashboard solutions that allow institutions to filter, drill down, and cross-reference the qualitative findings with demographic and discipline data. Only this level of flexibility enables staff to take equitable and targeted enhancement actions across their diverse PGT cohorts.

    When these principles are prioritised, the result is an analytical framework specifically designed to meet the rigour and complexity required by the sector.

    The partnership between Advance HE, evasys, and Student Voice AI, which analysed this year’s PTES data, demonstrates what is possible when these rigorous standards are prioritised. We have offered participating institutions a comprehensive service that analyses open comments alongside the detailed benchmarking reports that Advance HE already provides. This collaboration has successfully built an analytical framework that exemplifies how sector-trained AI can deliver high-confidence, actionable intelligence.

    Jonathan Neves, Head of Research and Surveys at Advance HE, calls our solution “customised, transparent and genuinely focused on improving the student experience”, and adds, “We’re particularly impressed by how they present the data visually and look forward to seeing results from using these specialised tools in tandem.”

    Substance über alles

    The commitment to analytical substance is paramount; without it, the risk to institutional resources and equity is severe. If institutions are to derive value, the analysis must be comprehensive. When the analysis lacks this depth, institutional resources are wasted acting on partial or misleading evidence.

    Rigorous analysis requires minimising what we call data leakage: the systematic failure to capture or categorise substantive feedback. Consider the alternative: when large percentages of feedback are ignored or left uncategorised, institutions are effectively muting a significant portion of the student voice. Or when a third of the remaining data is lumped into meaningless buckets like “other,” staff are left without actionable insight, forced to manually review thousands of comments to find the true issues.
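
    Leakage can be expressed as a simple metric over theme assignments. A minimal sketch, with illustrative labels not taken from any specific tool:

```python
# Sketch of a "data leakage" check: what share of substantive comments
# ends up uncategorised or dumped into "other"? The labels here are
# illustrative, not taken from any specific tool.

def leakage_rate(assignments: list[str]) -> float:
    """assignments holds one theme label per substantive comment;
    'uncategorised' and 'other' both count as leaked student voice."""
    if not assignments:
        return 0.0
    leaked = sum(1 for label in assignments
                 if label in {"uncategorised", "other"})
    return leaked / len(assignments)

labels = ["assessment", "other", "resources", "uncategorised", "supervision"]
assert leakage_rate(labels) == 0.4  # 2 of 5 comments carry no actionable theme
```

    Tracking a figure like this over successive survey rounds is one way to verify a vendor’s claim of comprehensive coverage rather than taking a polished dashboard at face value.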

    This is the point where the qualitative data, intended to unlock enhancement, becomes unusable for quality assurance. The result is not just a flawed report, but the failure to deliver equitable enhancement for the cohorts whose voices were lost in the analytical noise.

    Reliable, comprehensive processing is just the first step. The ultimate goal of AI analysis should be to deliver intelligence in a format that seamlessly integrates into strategic workflows. While impressive interfaces are visually appealing, genuine substance comes from the capacity to produce accurate, sector-relevant outputs. Institutions must be wary of solutions that offer a polished facade but deliver compromised analysis. Generic generative AI platforms, for example, offer the illusion of thematic analysis but are not robust.

    But robust validation of any output is still required. This is the danger of smoke and mirrors – attractive dashboards that simply mask a high degree of data leakage, where large volumes of valuable feedback are ignored, miscategorised or rendered unusable by failing to assign sentiment.

    Dig deep, act fast

    When institutions choose rigour, the outcomes are fundamentally different, built on a foundation of confidence. Analysis ensures that virtually every substantive PGT comment is allocated to one or more UK-derived categories, providing a clear thematic structure for enhancement planning.

    Every comment with substance is assigned both positive and negative sentiment, providing staff with the full, nuanced picture needed to build strategies that leverage strengths while addressing weaknesses.

    This shift from raw data to actionable intelligence allows institutions to move quickly from insight to action. As Parama Chaudhury, Pro-Vice Provost (Education – Student Academic Experience) at UCL noted, the speed and quality of this approach “really helped us to get the qualitative results alongside the quantitative ones and encourage departmental colleagues to use the two in conjunction to start their work on quality enhancement.”

    The capacity to produce accurate, sector-relevant outputs, driven by rigorous processing, is what truly unlocks strategic value. Converting complex data tables into readable narrative summaries for each theme allows academic and professional services leaders alike to immediately grasp the findings and move to action. The ability to access categorised data via flexible dashboards and in exportable formats ensures the analysis is useful for every level of institutional planning, from the department to the executive team. And providing sector benchmark reports allows institutions to understand their performance relative to peers, turning internal data into external intelligence.

    The postgraduate taught experience is a critical pillar of UK higher education. The PTES data confirms the challenge, but the true opportunity lies in how institutions choose to interpret the wealth of student feedback they receive. The sheer volume of PGT feedback combined with the ethical imperative to deliver equitable enhancement for all students demands analytical rigour that is complete, nuanced, and sector-specific.

    This means shifting the focus from simply collecting data to intelligently translating the student voice into strategic priorities. When institutions insist on this level of analytical integrity, they move past the risk of smoke and mirrors and gain the confidence to act fast and decisively.

    It turns out Yosemite Sam was right all along: there’s gold in them thar hills. But finding it requires more than just a map; it requires the right analytical tools and rigour to finally extract that valuable resource and forge it into meaningful institutional change.

This article is published in association with evasys. evasys and Student Voice AI are offering no-cost advanced analysis of NSS open comments, delivering comprehensive categorisation and sentiment analysis, a secure dashboard to view results and a sector benchmark report. Click here to find out more and request your free analysis.

    Source link

  • Preserving critical thinking amid AI adoption

    Preserving critical thinking amid AI adoption


    AI is now at the center of almost every conversation in education technology. It is reshaping how we create content, build assessments, and support learners. The opportunities are enormous. But one quiet risk keeps growing in the background: losing our habit of critical thinking.

    I see this risk not as a theory but as something I have felt myself.

    The moment I almost outsourced my judgment

    A few months ago, I was working on a complex proposal for a client. Pressed for time, I asked an AI tool to draft an analysis of their competitive landscape. The output looked polished and convincing. It was tempting to accept it and move on.

    Then I forced myself to pause. I began questioning the sources behind the statements and found a key market shift the model had missed entirely. If I had skipped that short pause, the proposal would have gone out with a blind spot that mattered to the client.

    That moment reminded me that AI is fast and useful, but the responsibility for real thinking is still mine. It also showed me how easily convenience can chip away at judgment.

    AI as a thinking partner

    The most powerful way to use AI is to treat it as a partner that widens the field of ideas while leaving the final call to us. AI can collect data in seconds, sketch multiple paths forward, and expose us to perspectives we might never consider on our own.

    In my own work at Magic EdTech, for example, our teams have used AI to quickly analyze thousands of pages of curriculum to flag accessibility issues. The model surfaces patterns and anomalies that would take a human team weeks to find. Yet the real insight comes when we bring educators and designers together to ask why those patterns matter and how they affect real classrooms. AI sets the table, but we still cook the meal.

    There is a subtle but critical difference between using AI to replace thinking and using it to stretch thinking. Replacement narrows our skills over time. Stretching builds new mental flexibility. The partner model forces us to ask better questions, weigh trade-offs, and make calls that only human judgment can resolve.

    Habits to keep your edge

    Protecting critical thinking is not about avoiding AI. It is about building habits that keep our minds active when AI is everywhere.

    Here are three I find valuable:

    1. Name the fragile assumption
    Each time you receive AI output, ask: What is one assumption here that could be wrong? Spend a few minutes digging into that. It forces you to reenter the problem space instead of just editing machine text.

    2. Run the reverse test
    Before you adopt an AI-generated idea, imagine the opposite. If the model suggests that adaptive learning is the key to engagement, ask: What if it is not? Exploring the counter-argument often reveals gaps and deeper insights.

    3. Slow the first draft
    It is tempting to let AI draft emails, reports, or code and just sign off. Instead, start with a rough human outline first. Even if it is just bullet points, you anchor the work in your own reasoning and use the model to enrich–not originate–your thinking.

    These small practices keep the human at the center of the process and turn AI into a gym for the mind rather than a crutch.

    Why this matters for education

    For those of us in education technology, the stakes are unusually high. The tools we build help shape how students learn and how teachers teach. If we let critical thinking atrophy inside our companies, we risk passing that weakness to the very people we serve.

    Students will increasingly use AI for research, writing, and even tutoring. If the adults designing their digital classrooms accept machine answers without question, we send the message that surface-level synthesis is enough. We would be teaching efficiency at the cost of depth.

    By contrast, if we model careful reasoning and thoughtful use of AI, we can help the next generation see these tools for what they are: accelerators of understanding, not replacements for it. AI can help us scale accessibility, personalize instruction, and analyze learning data in ways that were impossible before. But its highest value appears only when it meets human curiosity and judgment.

    Building a culture of shared judgment

    This is not just an individual challenge. Teams need to build rituals that honor slow thinking in a fast AI environment. Another practice is rotating the role of “critical friend” in meetings. One person’s task is to challenge the group’s AI-assisted conclusions and ask what could go wrong. This simple habit trains everyone to keep their reasoning sharp.

    Next time you lean on AI for a key piece of work, pause before you accept the answer. Write down two decisions in that task that only a human can make. It might be about context, ethics, or simple gut judgment. Then share those reflections with your team. Over time this will create a culture where AI supports wisdom rather than diluting it.

    The real promise of AI is not that it will think for us, but that it will free us to think at a higher level.

    The danger is that we may forget to climb.

    The future of education and the integrity of our own work depend on remaining climbers. Let the machines speed the climb, but never let them choose the summit.

Laura Ascione


  • How to build smarter partnerships and become digitally mature

    How to build smarter partnerships and become digitally mature

Across higher education, the conversation about digital transformation has shifted from connection to capability. Most universities are digitally connected, yet few are digitally mature.

    The challenge for 2026 and beyond is not whether institutions use technology, but whether their systems and partnerships enable people and processes to work together to strengthen institutional capacity, learner outcomes, and agility.

    Boundless Learning’s 2025 Higher Education Technology and Strategy Survey underscored this transition: 95 per cent of leaders said education management partners are appealing, and one in three described them as extremely so. Yet preferences are changing: modular, fee-for-service models now outpace traditional revenue-sharing arrangements, signalling a desire for flexibility and control.

    Leaders also identified their top digital priorities: innovation enablement (53 per cent), streamlined faculty workflows (52 per cent), and integrated analytics (49 per cent). In other words, universities are no longer chasing the next platform; they want systems that think.

    Why systems thinking matters

That idea is central to Suha Tamim’s work, Analyzing the Complexities of Online Education Systems: A Systems Thinking Perspective. Tamim frames online education as a dynamic ecosystem in which a change in one area, such as technology, pedagogy, or management, ripples through the whole. She argues that institutions need a “systems-level” view connecting the macro (strategy), meso (infrastructure and management), and micro (teaching and learning) layers.

    Seen this way, technology decisions become design choices that shape the culture and operations of the institution. Adopting a new platform is not just an IT project; it influences governance, academic workload, and the student experience. The goal is alignment across those levels so that each reinforces the other.

Boundless Learning’s Learning Experience Suite (LXS) embodies this approach. Rather than adding another application into an already crowded environment, LXS helps institutions orchestrate existing systems, linking learning management, analytics, and support functions into a cohesive, secure, learner-centred framework. It is a practical application of systems thinking: connecting data flows, surfacing insights, and simplifying faculty and learner experiences within one integrated ecosystem.

    From outsourcing to empowering

    The shift toward integration also reflects how universities engage external partners. Jeffrey Sun, Heather Turner, and Robert Cermak, in the American Journal of Distance Education, describe four main reasons universities outsource online programme management:

    1. Responding quickly to competitive pressures
    2. Accessing upfront capital
    3. Filling capability gaps
    4. Learning and scaling in-house

Their College Curation Strategy Framework shows that institutions partner with external providers not just to cut costs, but to build strategic capacity. Yet the traditional online programme management (OPM) model, anchored in long-term revenue-share contracts, has drawn criticism for limited transparency and loss of institutional control.

    Our own data suggest that this critique is reshaping practice. Universities are moving from outsourcing to empowerment: seeking education-management partners who enhance internal capability rather than replace it. This evolution from OPMs to Education Management Partners (EMPs) marks a decisive turn toward collaborative, capacity-building relationships.

    The Learning Experience Suite fits squarely within this new model. It is not an outsourced service but a connective layer that enables institutions to manage their digital ecosystems with greater visibility and confidence, while benefiting from enterprise-grade integration and security. It exemplifies partnership as a mechanism for capability development, a move from vendor management to shared strategic growth.

    From fragmentation to fluency

    Many institutions remain caught in what might be called digital fragmentation. According to our survey, nearly half of leaders cite data silos, disconnected platforms, and inconsistent learner experiences as obstacles to progress. These are not isolated technical issues; they are systemic barriers that affect pedagogy, governance, and institutional trust.

    Tamim’s framework describes such misalignment as a state of “disequilibrium.” Overcoming it requires coordinated action across levels, strategic clarity from leadership, adaptive management structures, and interoperable tools that make integration intuitive. The objective is to move from digital accumulation to digital fluency: an environment where technology amplifies, rather than fragments, institutional purpose.

    Learning Experience Suite was designed precisely to address this. By connecting data across systems, enabling real-time analytics, and ensuring accessibility through a mobile-first design, it allows institutions to build coherence and confidence in their digital operations.

    Building partnerships

    The next phase of higher education technology will be defined not by the tools universities choose but by the quality of their partnerships. As scholars like Sun have cautioned, outsourcing core academic functions without transparency can erode autonomy. Conversely, partnerships grounded in shared governance, open data, and aligned values can strengthen the academic mission.

    For Boundless Learning, this is the central opportunity of the coming decade: to reimagine partnership as co-evolution. Universities, platforms, and providers function best as interconnected actors within a wider learning system, each contributing expertise to advance learner success and institutional resilience.

    When viewed through a systems lens, the key question is no longer whether universities should outsource, but how they orchestrate. The challenge is to combine the right mix of internal capability, external expertise, and interoperable technology to achieve measurable impact.

    That, ultimately, is what digital maturity requires and what the Learning Experience Suite was designed to deliver.


  • Does ‘less is more’ apply to tech companies?

    Does ‘less is more’ apply to tech companies?

    On 20 October 2025, an Amazon Web Services daylong outage left millions of people around the world unable to communicate electronically and hurt the operations of more than 1,000 companies. 

Snapchat, Canva, Slack and Reddit were rendered useless, while gaming platforms Fortnite and Roblox, banks Lloyds and Halifax and U.S. airlines Delta and United saw their businesses disrupted. Media companies including the New York Times, the Wall Street Journal and Disney were also affected.

Amazon Web Services, or AWS, provides the backbone infrastructure of servers and tools that keeps about 37% of the internet running. It is the dominant player in cloud computing, but its main alternatives are equally large giants: Microsoft’s Azure and Google’s Cloud Platform.

The outage prompted European officials to call for plans for digital sovereignty and less reliance on U.S. behemoths. It was also a wakeup call to internet users worldwide about the fragility of the infrastructure and how much they rely on digital technology for everyday work and personal tasks: ordering coffee, communicating with colleagues, checking in for airline flights, monitoring home security cameras, playing games, doing homework and shopping online.

And it shined a light on how much of the technology we rely on is controlled by oligopolies. Many people are familiar with the idea of a monopoly, where one company or entity controls the market for a specific product or service and no competition is allowed. An oligopoly is a market structure in which a small number of large firms dominate an industry, limiting competition.

    Who controls the technology we use?

    What happened with the glitch at AWS showed the dangers of too much control in too few hands, but are there benefits we get from monopolies and oligopolies? How does competition — or the lack of it — affect what we consume? 

A monopoly allows the company or entity to control the quality and prices of its products and services, but the lack of competition can mean fewer incentives to improve the product, and prices may continually rise.

An example of a monopoly might be your local provider of water, gas or electricity. The United States Postal Service holds a monopoly, protected by U.S. law, on handling and delivering non-urgent letters.

With oligopolies there is some competition, but consumers have fewer choices, and the major players watch each other closely since one company’s actions can affect the others. An example of an oligopoly could be the airlines in your country, where a few carriers largely control domestic and international flights.

    Oligopolies generally emerge in industries with large start-up costs and strict legislation, allowing the oligopolies to keep prices high with virtually no new competition. 
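The link between the number of competitors and prices can be made precise with a textbook model. The sketch below uses the standard Cournot result for linear demand p = a − Q and n identical firms with marginal cost c, where the equilibrium price is (a + nc)/(n + 1); the demand and cost numbers here are invented purely for illustration.

```python
# Cournot oligopoly sketch: with linear demand p = a - Q and n identical
# firms with marginal cost c, the equilibrium price is (a + n*c) / (n + 1).
# As n shrinks, price rises; as n grows, price falls toward marginal cost.

def cournot_price(a, c, n):
    """Equilibrium market price with n symmetric Cournot competitors."""
    return (a + n * c) / (n + 1)

for n in (1, 2, 5, 100):  # monopoly, duopoly, oligopoly, near-competition
    print(n, round(cournot_price(a=100, c=20, n=n), 2))
```

With these numbers, a monopoly (n = 1) prices at 60, a duopoly at about 46.67, and a hundred competitors drive the price down toward the marginal cost of 20, which is the same intuition behind elevated prices in concentrated markets.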

    Benefits to concentrated ownership

    On the plus side, oligopolies tend to bring stability to their markets. An example of an oligopoly is OPEC, the Organization of Petroleum Exporting Countries, where 12 member countries each hold substantial market share in the supply of oil and control oil prices by raising or lowering output.

    When there is direct competition in business, companies selling similar products or services vie for more sales and share of the market, and profit by marketing their products on price, quality and promotions. This can lead to more innovation for product or services improvement and more company efficiencies to spur customer demand. But on the negative side, price wars may erupt and there could be consumer confusion over different brands. For example, Coca-Cola and Pepsi are direct competitors.

    University of California San Diego Economics Professor Marc Muendler noted that while the AWS outage negatively impacted people and businesses globally, it would be difficult for corporate clients to unwind from it, let alone find an immediate replacement because AWS offers a customized service specific to contracts.

    “Switching costs can be immense,” Muendler added.

    Muendler said for other oligopolies such as gas suppliers, airlines or even yogurt makers, prices might become somewhat elevated if the number of players is too small. An extreme might be duopolies, where two companies dominate sales of a product or service, such as when ski resorts are owned largely by two companies and can keep ski lift ticket prices high, he said.

    When big providers start having problems, that gives smaller players an opportunity.

    “It will always be hard to be the runner-up in a market with scale economies, where first movers get ahead fast,” he said. “[But] there’s a large segment of retail stores that don’t have specific contracts [with AWS]. That might be a market segment for a new competitor serving smaller customers, and then scale up.”

    Muendler said AWS clients should know they have a single supplier and be aware of the risks. 

    “I don’t see this market as easily reformable,” he said. “A big unanswered question is: How do we build resilience into our supply chains? There have been lots of disruptions to the global economy in the past 10–15 years. How do we incentivize companies that need specialized suppliers to also have redundancies,” or backup plans?


    Questions to consider:

    1. Identify a company, utility or other entity in your town or city. Is it a monopoly, oligopoly or does it compete directly with others? 

    2. What are the pluses or minuses for your family as consumers of its product or services?

    3. What are the key differences for an employee who works at a monopoly vs. oligopoly vs. direct competitors? 


     


  • Can journalists coexist with AI?

    Can journalists coexist with AI?

But the same thing could be happening now to the heads of news organizations, who pull back their journalists from various news beats. Since those news organizations are the ones who report the news, would we ever know it was happening?

    The reality is that artificial intelligence could kill journalism without replacing it, leaving people without information they can rely on. When there are no reliable, credible sources of news, rumors spread and take on a life of their own. People panic and riot and revolt based on fears born from misinformation. Lawlessness prevails.

    Do algorithms have all the answers?

Right now, entire news organizations are disappearing. The Brookings report found that last year some 2.5 local news outlets folded every week in the United States. Data collected by researcher Amy Watson in August 2023 found that in the UK, more news outlets closed than launched in every year of the decade ending in 2022.

    CNN reported in June 2023 that Germany’s biggest news organization, Bild, was laying off 20% of its employees, replacing them with artificial intelligence.

But ChatGPT had this to say: “Rather than viewing AI as a threat, journalists can leverage technology to enhance their work. Automated tools can assist with tasks such as data analysis, fact-checking and content distribution, freeing up time for reporters to focus on more complex and impactful storytelling.”

    One of News Decoder’s many human correspondents, Tom Heneghan, spoke to students on this topic in November and expressed some optimism.

    “It will take away a lot of the drudge work, the donkey work that journalists have to do,” Heneghan said. “It’s amazing how much work is done by somebody at a much higher level than what is actually needed.”

    Working with artificial intelligence

Once those tasks are automated, the journalist can pursue more substantive stories, Heneghan said. Plus the evolving sophistication of things like deepfake technology will make tasks like fact-checking and verification more important.

“That’s going to come up more and more,” Heneghan said. “What artificial intelligence takes away may actually create some other jobs.”

    So here’s the thing: We wouldn’t have to fear AI eliminating the crucial role of journalism — informing the public with accurate information, reporting from multiple perspectives so that minority voices are heard and uncovering corruption, exploitation and oppression — if the businesses that controlled the purse strings of journalism were committed to its public service functions.

    I then asked ChatGPT this question: Are media corporations driven solely by money?

It concluded: “While financial considerations undoubtedly influence the actions of media corporations, they are not the sole driving force behind their decisions.” It went on: “A complex interplay of financial goals, societal responsibilities and individual values shapes the behavior of these entities. Understanding this multifaceted nature is essential for accurately assessing the role and impact of media corporations in modern society.”

I found that reassuring, until I glanced at the disclaimer at the bottom of the AI’s page:

    ChatGPT can make mistakes. Consider checking important information.


    Questions to consider:

    1. What is an essential role of journalism in society?

    2. What did both the ChatGPT app and the human correspondent seem to agree on in this article?

3. What, if anything, worries you about artificial intelligence and how you get your information?



  • Can AI Keep Students Motivated, Or Does it Do the Opposite? – The 74

    Can AI Keep Students Motivated, Or Does it Do the Opposite? – The 74

    Imagine a student using a writing assistant powered by a generative AI chatbot. As the bot serves up practical suggestions and encouragement, insights come more easily, drafts polish up quickly and feedback loops feel immediate. It can be energizing. But when that AI support is removed, some students report feeling less confident or less willing to engage.

    These outcomes raise the question: Can AI tools genuinely boost student motivation? And what conditions can make or break that boost?

As AI tools become more common in classroom settings, the answers to these questions matter a lot. While general-use tools such as ChatGPT or Claude remain popular, more and more students are encountering AI tools that are purpose-built to support learning, such as Khan Academy’s Khanmigo, which personalizes lessons. Others, such as ALEKS, provide adaptive feedback. Both tools adjust to a learner’s level and highlight progress over time, which helps students feel capable and see improvement. But there are still many unknowns about the long-term effects of these tools on learners’ progress, an issue I continue to study as an educational psychologist.

    What the evidence shows so far

Recent studies indicate that AI can boost motivation, at least for certain groups, when deployed under the right conditions. A 2025 experiment with university students showed that when AI tools performed well and allowed meaningful interaction, students’ motivation and their confidence in being able to complete a task – known as self-efficacy – increased.

    For foreign language learners, a 2025 study found that university students using AI-driven personalized systems took more pleasure in learning and had less anxiety and more self-efficacy compared with those using traditional methods. A recent cross-cultural analysis with participants from Egypt, Saudi Arabia, Spain and Poland who were studying diverse majors suggested that positive motivational effects are strongest when tools prioritize autonomy, self-direction and critical thinking. These individual findings align with a broader, systematic review of generative AI tools that found positive effects on student motivation and engagement across cognitive, emotional and behavioral dimensions.

    A forthcoming meta-analysis from my team at the University of Alabama, which synthesized 71 studies, echoed these patterns. We found that generative AI tools on average produce moderate positive effects on motivation and engagement. The impact is larger when tools are used consistently over time rather than in one-off trials. Positive effects were also seen when teachers provide scaffolding, when students maintain agency in how they use the tool, and when the output quality is reliable.
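For readers unfamiliar with how 71 separate studies become one “moderate positive effect,” the core computation in a meta-analysis is an inverse-variance weighted average of per-study effect sizes. The sketch below shows the fixed-effect version of that calculation; the effect sizes and variances are invented for illustration and are not the Alabama team’s data.

```python
# Fixed-effect meta-analysis sketch: pool per-study standardised mean
# differences (e.g. Cohen's d) by inverse-variance weighting, so that
# more precise studies (smaller variance) count for more.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its standard error."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    se = (1.0 / total) ** 0.5
    return mean, se

# Hypothetical studies: (effect size d, variance of d)
effects = [0.45, 0.30, 0.60, 0.20]
variances = [0.02, 0.05, 0.04, 0.03]
mean, se = pooled_effect(effects, variances)
print(f"pooled d = {mean:.2f} (SE {se:.2f})")
```

With these invented inputs the pooled effect is about d = 0.39 with a standard error near 0.09, a value that would conventionally be described as a moderate positive effect. Published meta-analyses typically use random-effects models, which add a between-study variance term to the weights.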

    But there are caveats. More than 50 of the studies we reviewed did not draw on a clear theoretical framework of motivation, and some used methods that we found were weak or inappropriate. This raises concerns about the quality of the evidence and underscores how much more careful research is needed before one can say with confidence that AI nurtures students’ intrinsic motivation rather than just making tasks easier in the moment.

    When AI backfires

    There is also research that paints a more sobering picture. A large study of more than 3,500 participants found that while human–AI collaboration improved task performance, it reduced intrinsic motivation once the AI was removed. Students reported more boredom and less satisfaction, suggesting that overreliance on AI can erode confidence in their own abilities.

    Another study suggested that while learning achievement often rises with the use of AI tools, increases in motivation are smaller, inconsistent or short-lived. Quality matters as much as quantity. When AI delivers inaccurate results, or when students feel they have little control over how it is used, motivation quickly erodes. Confidence drops, engagement fades and students can begin to see the tool as a crutch rather than a support. And because there are not many long-term studies in this field, we still do not know whether AI can truly sustain motivation over time, or whether its benefits fade once the novelty wears off.

    Not all AI tools work the same way

    The impact of AI on student motivation is not one-size-fits-all. Our team’s meta-analysis shows that, on average, AI tools do have a positive effect, but the size of that effect depends on how and where they are used. When students work with AI regularly over time, when teachers guide them in using it thoughtfully, and when students feel in control of the process, the motivational benefits are much stronger.

    We also saw differences across settings. College students seemed to gain more than younger learners, STEM and writing courses tended to benefit more than other subjects, and tools designed to give feedback or tutoring support outperformed those that simply generated content.

    There is also evidence that general-use tools like ChatGPT or Claude do not reliably promote intrinsic motivation or deeper engagement with content, compared to learning-specific platforms such as ALEKS and Khanmigo, which are more effective at supporting persistence and self-efficacy. However, these tools often come with subscription or licensing costs. This raises questions of equity, since the students who could benefit most from motivational support may also be the least likely to afford it.

    These and other recent findings should be seen as only a starting point. Because AI is so new and is changing so quickly, what we know today may not hold true tomorrow. In a paper titled The Death and Rebirth of Research in Education in the Age of AI, the authors argue that the speed of technological change makes traditional studies outdated before they are even published. At the same time, AI opens the door to new ways of studying learning that are more participatory, flexible and imaginative. Taken together, the data and the critiques point to the same lesson: Context, quality and agency matter just as much as the technology itself.

    Why it matters for all of us

    The lessons from this growing body of research are straightforward. The presence of AI does not guarantee higher motivation, but it can make a difference if tools are designed and used with care and understanding of students’ needs. When it is used thoughtfully, in ways that strengthen students’ sense of competence, autonomy and connection to others, it can be a powerful ally in learning.

    But without those safeguards, the short-term boost in performance could come at a steep cost. Over time, there is the risk of weakening the very qualities that matter most – motivation, persistence, critical thinking and the uniquely human capacities that no machine can replace.

    For teachers, this means that while AI may prove a useful partner in learning, it should never serve as a stand-in for genuine instruction. For parents, it means paying attention to how children use AI at home, noticing whether they are exploring, practicing and building skills or simply leaning on it to finish tasks. For policymakers and technology developers, it means creating systems that support student agency, provide reliable feedback and avoid encouraging overreliance. And for students themselves, it is a reminder that AI can be a tool for growth, but only when paired with their own effort and curiosity.

    Regardless of technology, students need to feel capable, autonomous and connected. Without these basic psychological needs in place, their sense of motivation will falter – with or without AI.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
