Category: AI

  • USyd responds to student concerns about ‘two-lane’ AI policy – Campus Review

    The university arguably leading the sector in its use of artificial intelligence (AI) in assessment has drawn criticism from students who say they lost marks for not using AI in a test.

    Source link

  • What does it mean if students think that AI is more intelligent than they are?

    The past couple of years in higher education have been dominated by discussions of generative AI – how to detect it, how to prevent cheating, how to adapt assessment. But we are missing something more fundamental.

    AI isn’t just changing how students approach their work – it’s changing how they see themselves. If universities fail to address this, they risk producing graduates who lack both the knowledge and the confidence to succeed in employment and society. Consequently, the value of a higher education degree will diminish.

    In November, a first-year student asked me if ChatGPT could write their assignment. When I said no, they replied: “But AI is more intelligent than me.” That comment has stayed with me ever since.

    If students no longer trust their own ability to contribute to discussions or produce work of value, the implications stretch far beyond academic misconduct. Confidence shapes motivation, resilience and self-belief, which in turn affect sense of community, assessment grades, and graduate skills.

    I have noticed that few discussions focus on the deeper psychological shift – students’ changing perceptions of their own intelligence and capability. This change is a key antecedent for the erosion of a sense of community, AI use in learning and assessment, and the underdevelopment of graduate skills.

    The erosion of a sense of community

    In 2015, when I began teaching, I would walk into a seminar room and find students talking to one another about how worried they were about the deadline, how boring the lecture was, or how many drinks they had had on Wednesday night. Yes, they would sit at the back, not always do the pre-reading, and go quiet for the first few weeks when I asked a question – but they were always happy to talk to one another.

    Fast forward to 2025: campus feels empty, and students come into class and sit alone. Even final-years who have been together for three years may sit with a “friend” but not really say anything as they stare at their phones. I have a final-year student who is achieving first-class grades but admitted he has not been in the library once this academic year and barely knows anyone to talk to. This may not seem like a big thing, but it illustrates the lack of community and relationships formed at university. It is well known that peer-to-peer relationships are one of the biggest influences on attendance and engagement. So when students fail to form networks, it is unsurprising that motivation declines.

    While professional services, the students’ union, and support staff continuously offer ways to improve the community, at a time when students are working longer hours through a cost-of-living crisis we cannot expect them to attend extracurricular academic or non-academic activities. Timetabled lectures and seminars therefore need to be at the heart of building relationships.

    AI in learning and assessment

    While marking first-year marketing assignments – a subject I’ve taught across multiple universities for a decade – I noticed a clear shift. Typically, I expect a broad range of marks, but this year, students clustered at two extremes: either very high or alarmingly low. The feedback was strikingly similar: “too vague,” “too descriptive,” “missing taught content.”

    I knew some of these students were engaged and capable in class, yet their assignments told a different story. I kept returning to that student’s remark and realised: the students who normally land in the middle – your solid 2:2 and 2:1 cohort – had turned to AI. Not necessarily to cheat, but because they lacked confidence in their own ability. They believed AI could articulate their ideas better than they could.

    The rapid integration of AI into education isn’t just changing what students do – it’s changing what they believe they can do. If students don’t think they can write as well as a machine, how can we expect them to take intellectual risks, engage critically, or develop the resilience needed for the workplace?

    Right now, universities are at a crossroads. We can either design assessments as if nothing has changed, pivot back to closed-book exams to preserve “authentic” academic work, or restructure assessment to empower students, build confidence, and provide something of real value to both learners and employers. Only the third option moves higher education forward.

    Deakin University’s Phillip Dawson has recently argued that we must ensure assessment measures what we actually intend to assess. His point resonated with me.

    AI is here to stay, and it can enhance learning and productivity. Instead of treating it primarily as a threat or retreating to closed-book exams, we need to ask: what do we really need to assess? For years, we have moved away from exams because they don’t reflect real-world skills or accurately measure understanding. That reasoning still holds, but the assessment landscape is shifting again. Instead of focusing on how students write about knowledge, we should be assessing how they apply it.

    Underdevelopment of graduate skills

    If we don’t rethink pedagogy and assessment, we risk producing graduates who are highly skilled at facilitating AI rather than using it as a tool for deeper analysis, problem-solving, and creativity. Employers are already telling us they need graduates who can analyse and interpret data, think critically to solve problems, communicate effectively, show resilience and adaptability, demonstrate emotional intelligence, and work collaboratively.

    But students can’t develop these skills if they don’t believe in their own ability.

    Right now, students are using AI tools for most activities, including online searching, proofreading, answering questions, generating examples, and even writing reflective pieces. I am confident that if I asked first-years to write a two-minute speech about why they came to university, the majority would use AI in some way. There is no space – or incentive – for them to demonstrate their skill development.

    This semester, fed up with looking at heads buried in laptops, I trialled a small intervention: I asked my final-year students to put laptops and phones on the floor for the first two hours of a four-hour workshop.

    At first, they were visibly uncomfortable – some looked panicked, others bored. But after ten minutes, something changed. They wrote more, spoke more confidently, and showed greater creativity. As soon as they returned to technology, their expressions became blank again. This isn’t about banning AI, but about ensuring students have fun learning and have space to be thinkers, rather than facilitators.

    Confidence-building

    If students’ lack of confidence is driving them to rely on AI to “play it safe”, we need to acknowledge the systemic problem. Confidence is an academic issue. Confidence underpins everything in the student’s experience: classroom engagement, sense of belonging, motivation, resilience, critical thinking, and, of course, assessment quality. Universities know this, investing in mentorship schemes, support services, and initiatives to foster belonging. But confidence-building cannot be left to professional services alone – it must be embedded into curriculum design and assessment.

    Don’t get me wrong: I am fully aware of the pressures on academic staff, and telling them to improve sense of community, assessment, and graduate skills feels like another time-consuming task. Universities need to recognise that without improving workload planning models to give academics the freedom to focus on and explore pedagogic approaches, we fall into the trap of devaluing the degree.

    In addition, if universities want to stay relevant, they need agile structures that allow academics to test new approaches and respond quickly, just like the “real world”. Academics should not be creating or modifying assessments today that won’t be implemented for another 18 months. Policies designed to ensure quality must also ensure adaptability. Otherwise, higher education will always be playing catch-up – first with AI, then with whatever comes next.

    Will universities continue producing AI-dependent graduates, or will they equip students with the confidence to lead in an AI-driven world?

    Source link

  • How our researchers are using AI – and what we can do to support them

    We know that the use of generative AI in research is now ubiquitous. But universities have limited understanding of who is using large language models in their research, how they are doing so, and what opportunities and risks this throws up.

    The University of Edinburgh hosts the UK’s first and largest concentration of AI expertise – so naturally, we wanted to find out how AI is being used. We asked our three colleges to check in on how their researchers were using generative AI, to inform what support we provide, and how.

    Using AI in research

    The most widespread use, as we would expect, was to support communication: editing, summarising and translating texts or multimedia. AI is helping many of our researchers to correct language, improve clarity and succinctness, and transpose text to new media, including visualisations.

    Our researchers are increasingly using generative AI for retrieval: identifying, sourcing and classifying data of different kinds. This may involve using large language models to identify and compile datasets, bibliographies, or to carry out preliminary evidence syntheses or literature reviews.

    Many are also using AI to conduct data analysis for research. Often this involves developing protocols to analyse large data sets. It can also involve more open searches, with large language models detecting new correlations between variables, and using machine learning to refine their own protocols. AI can also test complex models or simulations (digital twins), or produce synthetic data. And it can produce new models or hypotheses for testing.

    AI is of course evolving fast, and we are seeing the emergence of more niche and discipline-specific tools. For example, self-taught reasoning models (STaR) generate rationales on which a model can be fine-tuned to answer a range of research questions. And retrieval-augmented generation (RAG) enables large language models to access external data, enhancing the breadth and accuracy of their outputs.
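    To make that mechanism concrete, here is a minimal RAG sketch in Python – an illustration only, assuming a toy TF-IDF retriever and made-up documents rather than any specific Edinburgh tooling. Passages relevant to a research question are retrieved and prepended to the prompt that would be sent to a large language model.

    ```python
    # Minimal retrieval-augmented generation (RAG) sketch. Illustrative only:
    # the documents and the TF-IDF retriever are assumptions, not a real tool.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "2023 survey data on generative AI use among doctoral researchers.",
        "A protocol for evidence synthesis in public health literature reviews.",
        "Guidance on attributing AI assistance in published research outputs.",
    ]

    def retrieve(query, docs, k=2):
        """Rank documents by TF-IDF cosine similarity and return the top k."""
        matrix = TfidfVectorizer().fit_transform(docs + [query])
        scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
        ranked = sorted(zip(scores, docs), reverse=True)
        return [doc for _, doc in ranked[:k]]

    # Ground the model's answer in the retrieved passages.
    question = "How should researchers attribute AI use in their outputs?"
    context = "\n".join(retrieve(question, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)  # this augmented prompt would then go to a large language model
    ```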

    Across these types of use, AI can improve communication and significantly save time. But it also poses significant risks, which our researchers were generally alert to. These involve well-known problems with accuracy, bias and confabulation – especially where researchers use AI to identify new (rather than test existing) patterns, to extrapolate, or to underpin decision-making. There are also clear risks around sharing of intellectual property with large language models. And not least, researchers need to clearly attribute the use of AI in their research outputs.

    The regulatory environment is also complex. While the UK does not as yet have formal AI legislation, many UK and international funders have adopted guidelines and rules. For example, the European Union has a new AI Act, and EU funded projects need to comply with European Commission guidelines on AI.

    Supporting responsible AI

    Our survey has given us a steer on how best to support and manage the use of AI in research – leading us to double down on four areas that require particular support:

    Training. Not surprisingly, the use of generative AI is far more prevalent among early career researchers. This raises issues around training, supervision and oversight. Our early career researchers need mentoring and peer support. But more senior researchers don’t necessarily have the capacity to keep pace with the rapid evolution of AI applications.

    This suggests the need for flexible training opportunities. We have rolled out a range of courses, including three new basic AI courses to get researchers started in the responsible use of AI in research, and online courses on ethics of AI.

    We are also ensuring our researchers can share peer support. We have set up an AI Adoption Hub, and are developing communities of practice in key areas of AI research – notably AI and Health, which is one of the most active areas. A similar initiative is being developed for AI and Sustainability.

    Data safety. Our researchers are rightly concerned about feeding their data into large language models, given complex challenges around copyright and attribution. For this reason, the university has established its own interface with the main large language models, including ChatGPT: the Edinburgh Language Model (ELM). ELM provides safer access to large language models, operating under a “zero data retention” agreement so that data is not retained by OpenAI. We are also encouraging our researchers to work with application programming interfaces (APIs), which allow them to provide more specific instructions to enhance their results.
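    ELM’s programmatic interface is not described above, so the following is only a hedged sketch of the general pattern, assuming a hypothetical OpenAI-compatible institutional endpoint with placeholder credentials and model name. It shows how an API call lets a researcher attach more specific instructions than a chat window allows.

    ```python
    # Hypothetical sketch: base_url, api_key, and model are placeholders,
    # not ELM's actual configuration.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://elm.example.ac.uk/v1",  # placeholder endpoint
        api_key="YOUR_INSTITUTIONAL_KEY",         # placeholder credential
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # whichever model the institutional gateway exposes
        messages=[
            # The system message carries the researcher's specific instructions.
            {"role": "system",
             "content": "You are a research assistant. Answer concisely and "
                        "flag any claim you cannot verify."},
            {"role": "user",
             "content": "Summarise the risks of using LLMs for literature reviews."},
        ],
    )
    print(response.choices[0].message.content)
    ```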

    Ethics. AI in research throws up a range of challenges around ethics and integrity. Our major project on responsible AI, BRAID, and ethics training by the Institute for Academic Development, provide expertise on how we adapt and apply our ethics processes to address the challenges. We also provide an AI Impact Assessment tool to help researchers work through the potential ethical and safety risks in using AI.

    Research culture. The use of AI is ushering in a major shift in how we conduct research, raising fundamental questions about research integrity. When used well, generative AI can make researchers more productive and effective, freeing time to focus on those aspects of research that require critical thinking and creativity. But it also creates incentives to take short cuts that can compromise the rigour, accuracy and quality of research. For this reason, we need a laser focus on quality over quantity.

    Groundbreaking research is not done quickly, and the most successful researchers do not churn out large volumes of papers – the key is to take time to produce robust, rigorous and innovative research. This is a message that will be strongly built into our renewed 2026 Research Cultures Action Plan.

    AI is helping our researchers drive important advances that will benefit society and the environment. It is imperative that we tap the opportunities of AI, while avoiding some of the often imperceptible risks in its mis-use. To this end, we have decided to make AI a core part of our Research and Innovation Strategy – ensuring we have the right training, safety and ethical standards, and research culture to harness the opportunities of this exciting technology in an enabling and responsible way.

    Source link

  • SMART Technologies Launches AI Assist in Lumio to Save Teachers Time

    Lumio by SMART Technologies, a cloud-based learning platform that enhances engagement on student devices, recently announced a new feature for its Spark plan. This new offering integrates AI Assist, an advanced tool designed to save teachers time and elevate student engagement through AI-generated quiz-based activities and assessments.

    Designing effective quizzes takes time—especially when crafting well-balanced multiple-choice questions with plausible wrong answers to encourage critical thinking. AI Assist streamlines this process, generating high-quality quiz questions at defined levels in seconds so teachers can focus on engaging their students rather than spending time on quiz creation.
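    The announcement does not describe SMART’s implementation, but the task AI Assist automates can be illustrated with a short, hypothetical prompt builder. The function name, wording, and JSON shape below are illustrative assumptions, not Lumio’s API.

    ```python
    def build_quiz_prompt(topic, level, n_questions=5):
        """Compose an LLM prompt asking for multiple-choice questions whose
        wrong answers are plausible misconceptions (hypothetical helper)."""
        return (
            f"Write {n_questions} multiple-choice questions on '{topic}' "
            f"for {level} students. Each wrong answer must reflect a common "
            "misconception rather than an obviously false option. "
            'Return JSON: [{"question": str, "correct": str, '
            '"distractors": [str, str, str]}]'
        )

    print(build_quiz_prompt("photosynthesis", "grade 8"))
    # A real integration would validate the returned JSON before loading
    # the questions into the learning platform.
    ```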


    Source link

  • Innovation Without Borders: Galileo’s Networked Approach to Better Higher Education System

    One of the biggest, but least remarked upon, trends in European higher education in recent years is the growth of private, for-profit higher education. Even in countries where tuition is free, hundreds of thousands of students now prefer to take courses at private for-profit institutions.

    To me, the question is, why? What sort of institutions are these anyway? Interestingly, the answer to that second question is one which might confuse my mostly North American audience. Turns out a lot of these private institutions are relatively small, bespoke institutions with very narrow academic specializations. And yet they’re owned by a few very large international education conglomerates. That’s very different from North America, where institutions tend to be either small and bespoke, or part of a large corporation, but not both.

    Today my guest is Nicolas Badré. He’s the Chief Operating Officer of the Galileo Group, which operates a number of universities across Europe. I met him a few months ago at an OECD event in Jakarta. When I heard about some of Galileo’s initiatives, I knew I’d have to have him on the show. 

    There are three things which I think are most important about this interview. First is the discussion about Galileo’s business model and how it achieves economies of scale across such different types of institutions. Second, there’s how the network goes about collectively learning across all its various institutions. And third, specifically how it’s choosing to experiment with AI across a number of institutions and apply the lessons more globally. 

    Overall, it’s a fascinating chat. I hope you enjoy it too. But now, let’s turn things over to Nicolas.


    The World of Higher Education Podcast
    Episode 3.27 | Innovation Without Borders: Galileo’s Networked Approach to Better Higher Education System

    Transcript

    Alex Usher (AU): Nicolas, Galileo Global Education has grown significantly over the years. I think the group is, if I’m not mistaken, 13 or 14 years old now. Some of the universities it owns might be a bit older, but can you walk us through the origins of the group? How did you grow to be as big as you are? I think you’ve got dozens of institutions in dozens of countries—how did that growth happen so quickly?

    Nicolas Badré (NB): Thank you, Alex, for the question. It’s an interesting story. And yes, to your point, the group was created 13 and a half years ago, with an investment by Providence Equity Partners into Istituto Marangoni, a fashion school in Italy. That dates back to 2011. Since then, we’ve made 30 acquisitions.

    The growth started primarily in Europe, especially in France and Germany. Then, in 2014, we took our first steps outside of Europe with the acquisition of IEU in Mexico. Significant moves followed in 2018 and 2019, particularly into the online learning space with Studi in France and AKAD in Germany.

    There’s been a very rapid acceleration over the past five years. For context, I joined the group at the end of 2019. At that time, Galileo had 67,000 students across nine countries. Today, we have 300,000 students in 20 countries.

    Back then, the group was primarily focused on arts and creative schools, as well as business and management schools. Now, we’ve expanded into tech and health, and even into some professional training areas—like truck driving, for instance.

    What does this reflect? Two things. First, very strong organic growth from our existing schools and brands. Take ESG in France as an example. It’s been around for 40 years and is a well-known entry-level business school. Over the past five years, it’s diversified considerably, creating ESG Luxury, ESG Tourism, you name it. It’s also expanded its physical presence from just a few cities to now being in 15 or 16 cities across France.

    So it’s really been a combination of strong organic growth and selective acquisitions that have helped us more than quadruple our student numbers in just five years.

    AU: It’s interesting— I think a lot of our listeners and viewers might be surprised to hear about such a strong for-profit institution coming out of France. When you think of French higher education, you think of the Grandes Écoles, you think of free education. So why would so many people choose to pay for education when they don’t have to? It’s a pretty strong trend in France now. I think over 26% of all students in France are in some form of private higher education. What do you offer that makes people willing to give up “free”?

    NB: It’s a good question, and you’re right—it’s not just about France. In many places across Europe, including Germany, the Nordics, and others, you see similar dynamics.

    That said, yes, in France in particular, there’s been a growing share of private players in higher education over the past few years. That probably reflects the private sector’s greater ability to adapt to new environments.

    I’d highlight three main factors that help explain why we’ve been successful in this space.

    First, we’re obsessed with employability and skills-based education. And that’s true across all levels and backgrounds. When we worked on our group mission statement, everyone agreed that our mission is to “unleash the potential of everyone for better employability.” 

    Because of that focus, we maintain very strong ties with industry. That gives us the ability to adapt, create, and update our programs very quickly in response to emerging demands. We know competencies become obsolete faster now, so staying aligned with job market needs is critical. That’s probably the strongest unifying driver across all of Galileo.

    Beyond that, we also offer very unique programs. Take Noroff, for example—a tech school in Norway, which is even more tuition-free than France. It’s one of the very few fee-paying institutions in the country. But the program is so strong that students are willing to pay around 15,000 euros a year because they know they’ll get a top-tier, hands-on experience—something that might be slower to evolve in the public system.

    So that’s the first point: employability and unique, high-impact programs.

    Second, we put a strong emphasis on the student experience. How do we transform their education beyond just delivering content? That’s an area we continue to invest in—never enough, but always pushing. We’re focused on hybridizing disciplines, geographies, and pedagogical approaches.

    And we’ve systematized student feedback—not just asking for opinions, but making sure we translate that feedback into tangible improvements in the student experience.

    And third, I’d say there’s a values-based dimension to all of this. We focus heavily on innovation, entrepreneurship, and high standards. Those are the core values that we’re driven by. You could say they’re our obsessions—and I think that kind of vision and energy resonates with our students. Those are the three main things I’d point to.

    AU: I have a question about how you make things work across such a diverse set of institutions. I mean, you’ve got design schools, drama schools, law schools, medical schools. When people think about private education, there’s often an assumption that there’s some kind of economies of scale in terms of curriculum. The idea that you can reuse curriculum across different places. But my impression is that you can’t do that very much. It seems like you’re managing all these different institutions, each of them like their own boutique operation, with their own specific costs. How do you make it work across a system as large and diverse as yours? Where are the economies of scale?

    NB: Well, that’s also a very good point—and you’re absolutely right. We have a very diverse network of schools. We have a culinary arts school in Bordeaux, France, with maybe 400 students, and we have universities with more than 10,000 students, whether in medical or business education.

    So yes, you might wonder: why put these institutions together?

    The answer is that we really built the group’s development around the entrepreneurial DNA of our school directors. They’re responsible for their own development—for their growth, diversification, and how they respond to the job market.

    We’re not obsessed with economies of scale. What we really value is the network itself. What we focus on is shared methodology—in areas like sales and marketing, finance, HR, and student experience.

    There are also some opportunities for synergies in systems. In some cases, for instance, yes—we use a similar CRM across several countries. But I think the real value of the network lies in its ability to share experiences and experiment with innovation throughout, and then scale up those innovations appropriately across the other schools.

    So I’d say it’s more about shared practices than about forcing economies of scale across borders—because that doesn’t always make sense.

    AU: Am I correct in thinking that you don’t necessarily present yourself as a chain of institutions to students? That each institution actually has a pretty strong identity in and of itself—is that right? Is there a fair bit of autonomy and ability to adapt things locally at each of your schools?

    NB: Yes, I think that’s true. In terms of branding, we believe that each of our schools generally has a stronger brand than Galileo itself. And that’s how it should be, because each school has its own experience, its own DNA, its own momentum and development.

    So, we see ourselves more as a platform that supports the development of all these schools, rather than a chain imposing the same standards and practices across the board.

    Of course, we do have certain methodologies—for example, how to run a commercial campaign. We provide guidance, but it’s ultimately up to each school to manage that process and use the methodology in a way that works best for their own development.

    That doesn’t mean there’s no value in having the Galileo name—there is. But the value is in being a platform that supports the schools, rather than overshadowing them.

    AU: Nicolas, I know Galileo is testing a lot of AI-driven approaches across its various institutions. What I found interesting in a discussion we had offline a few weeks ago is that you’re experimenting with AI in different parts of the institution—some of it around curriculum, some around administration, and some around student services. Can you give us an overview? What exactly are you testing, and what are the goals of these experiments?

    NB: I think we first need to frame how we’re using AI, and it’s important to look at our strategy globally. We believe there are three major trends shaping higher education.

    First, student expectations are evolving quickly—they’re demanding more flexibility and personalization. Second, there’s a rapid emergence of new competencies, which challenges our ability to adapt and update programs quickly. And third, we need to go beyond boundaries and be agile in how we approach topics, address new skills, and serve diverse learners. These are the three starting points we see as opportunities for Galileo to differentiate itself. Now, we’re not trying to become a leading AI company. Our goal remains to be a recognized leader in education—improving employability and lives. That’s our benchmark.

    With that in mind, our AI vision is focused on four areas:

    1. How do we deliver a unique experience to our students?
    2. How do we connect educators globally who are trained in and comfortable with AI?
    3. How do we develop content that can be adapted, localized, translated, and personalized?
    4. And how do we improve operational productivity?

    AI is clearly a powerful tool in all four areas. Let me walk through some of the things we’re doing. 

    The first area we call AI for Content. We’re using AI to more quickly identify the competencies required by the job market. We use tools that give us a more immediate connection to the market to understand what skills are in demand. Based on that, we design programs that better align with those needs.

    Then the next step is about course and content creation. Once we’ve defined the competencies, how do we design the courses, the pedagogical materials? How do we make it easier to localize and adapt that content?

    Take Studi, an online university in France with 67,000 students and around 150 different programs. A year ago, it would take them about four months to design a bachelor’s or master’s program. Now, it takes one to two months, depending on the specifics. The cost has been cut in half, and development speed has increased by a factor of two, three, even four in some cases. This also opens up opportunities to make programs more personalized because we can update them much faster. 

    The second area is AI for Experience. How do we use AI to enhance the student experience?

    We’ve embedded AI features in our LMS to personalize quizzes, generate mind maps, and create interactive sessions during classes. We’ve also adapted assessments. For example, in Germany, for the past two years, our online university AKAD has let students choose their own exam dates. That’s based on an AI approach that generates personalized assessments while staying within the requirements of German accreditation bodies. This wouldn’t be possible without AI. The result is higher engagement, faster feedback, and a more personalized learning experience.

    Lastly, beyond content and experience, we’re seeing real gains in AI for Operations. In sales and marketing, for example, we now use bots in Italy and Latin America to re-engage “dead” leads—contacting them again, setting up meetings, and redirecting them through the admissions funnel. It’s proven quite efficient, and we’re looking to expand that approach to other schools.

    We’re also seeing strong results in tutoring. Take Corndel, a large UK-based school focused on apprenticeships. They’re using AI tools extensively to improve student tracking, tutoring, and weekly progress monitoring.

    So, we’re seeing a lot of momentum across all these dimensions—and it’s really picked up speed over the last 18 months.

    AU: So, you’ve got a network of institutions, which gives you a lot of little laboratories to experiment with—to try different things. How do you identify best practices? And then how do you scale them across your network?

    NB: Well, first of all, we have lots of different pilots. As you’ve understood, we’re quite decentralized, so we don’t have a central innovation team of 50 people imposing innovations across all our schools.

    It’s more about scouting and sharing experiences from one school to another. It’s a combination of networks where people share what they’re learning.

    Just to name a few, we have a Digital Learning Community—that’s made up of all the people involved in LMS design across our schools. They exchange a lot of insights and experiences.

    We also hold regular touchpoints to present what’s happening in AI for content, AI for experience, and AI for operations. We’ve created some shared training paths for schools as well. So there are a lot of initiatives aimed at maximizing sharing, rather than imposing anything top-down. Again, as you pointed out, the schools are extremely diverse—in terms of regulations, size, content, and disciplines. So there’s no universal recipe.

    That said, in some cases it’s more about developing a methodology. For example, how do you design and implement a pedagogical chatbot? The experiments we’re running now are very promising for future scale-up, because we’re learning a lot from these developments.

    AU: I know that, in a sense, you’ve institutionalized the notion of innovation within the system. I think you’ve recently launched a new master’s program specifically focused on this question—on how to innovate in education systems. Can you tell us a little bit about that?

    NB: Yeah, I’m super excited to talk about this, because it’s where I’m focusing most of my energy these days.

    We’ve been working on this project for a year with four Galileo institutions. It’s called Copernia, and the name, like Galileo, is intentional—these are people who changed perspectives. That’s exactly what we want to do: change the perspective on education and truly put the student at the center.

    Copernicus started the revolution, Galileo confirmed it, and it’s no coincidence we’re focusing on this.

    The first program we’re launching under Copernia is a Master of Innovation and Technology for Education. The idea is to bring together and leverage expertise from several fields: neurocognitive science, tech, AI and data, educational sciences, innovation, design, and management. The goal is to offer students a unique experience where they not only learn about innovation—but also learn to develop and apply it.

    One of the major assets we want to leverage is the Galileo network. With over 120 campuses, we can offer students real, hands-on opportunities to experiment and innovate. So the value proposition is: if you want to design and test educational innovation, we’ll give you the tools, the foundational knowledge, and, most importantly, the chance to apply that in practice—within our network, with our partners, and with other institutions.

    The goal is to help the whole ecosystem benefit—not just from Galileo’s environment, but also from the contributions of tech partners, academic collaborators, and business partners around the world. I’m convinced this will be a major tool to develop, share, and scale practical, applied innovation.

    And importantly, this isn’t meant to be just an internal initiative for Galileo. It’s designed to be open. We want to train people who can help transform education—not only in higher education, but also in K–12 and lifelong learning. Because we believe this kind of cross-disciplinary expertise and hands-on innovation experience is valuable across the entire education sector.

    AU: I’m really impressed with the scale and speed at which you’re able to experiment. But it did make me wonder—why can’t public higher education systems do the same? I mean, if I think about French universities, there are 70 or 80 in the public system—though it’s hard to keep track because they keep merging. But theoretically, they could do this too, couldn’t they? It’s a moderately centralized system, and there’s no reason institutions couldn’t collaborate in ways that let them identify useful innovations—rolling them out at different speeds in different areas, depending on what works. Why can’t the public sector innovate like that?

    NB: First of all, I wouldn’t make a sweeping judgment on this. I think there is innovation happening everywhere—including within public institutions. So I wouldn’t describe it in black-and-white terms.

    That said, it’s true that as a private organization, we face a certain kind of pressure. We need to prove that we operate a sustainable model—and we need to prove that every month. In other words, we rely on ourselves to develop, to test, and to optimize how we grow. 

    The second is that we have an asset in being able to test and learn in very different environments. Take the example I mentioned earlier, about Germany and the anytime online assessments. We were able to implement that model there because it was online and because the regulatory environment allowed it.

    Now, when we approach accreditation bodies in other countries, we can say: “Look, it works. It’s already accepted elsewhere. Why not consider it here?” That ability to move between different contexts—academic and professional, vocational and executive—is really valuable. It allows us to promote solutions that cross traditional boundaries.

    That’s not something all public universities can do—and frankly, not something all universities can do, period. But it’s an advantage we’ve built over the past several years by creating this large field for experimentation.

    AU: Nicolas, thank you so much for being with us today.

    NB: Alex, thank you very much. It’s been a pleasure.

    AU: It just remains for me to thank our excellent producers, Tiffany MacLennan and Sam Pufek, and to thank you—our viewers, listeners, and readers—for joining us. If you have any questions about today’s podcast, please don’t hesitate to get in touch at podcast@higheredstrategy.com. And don’t forget—never miss an episode of The World of Higher Education Podcast. Head over to YouTube and subscribe to our channel. Join us next week when our guest will be Noel Baldwin, CEO of the Future Skills Centre here in Canada. He’ll be joining us to talk about the Programme for the International Assessment of Adult Competencies. See you then.

    *This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service. Please note, the views and opinions expressed in each episode are those of the individual contributors, and do not necessarily reflect those of the podcast host and team, or our sponsors.

    This episode is sponsored by Studiosity. Student success, at scale – with an evidence-based ROI of 4.4x return for universities and colleges. Because Studiosity is AI for Learning — not corrections – to develop critical thinking, agency, and retention — empowering educators with learning insight. For future-ready graduates — and for future-ready institutions. Learn more at studiosity.com.

    Source link

  • AI in private vs public higher education sector – Episode 164 – Campus Review

    John Dewar, a partner at consultancy KordaMentha, led a panel of public and private university leaders that re-examined the sector’s current artificial intelligence (AI) strategies and opportunities.

    Source link

  • New Way to Teach Writing by Incorporating AI – Sovorel

    AI is here, and it is here to stay, which means that academia needs to incorporate it so that students learn about AI’s capabilities and are ready to use it properly. The most common complaint in writing classes today is that students simply use AI to write their essays for them and, in the process, learn nothing and use AI improperly. The Anders 4 Phase AI Method of Writing Instruction overcomes these issues: it develops students’ writing skills while teaching AI literacy, which includes critical thinking. Different aspects of this method can also be applied to other courses and assignments. It is a much-needed new way to develop writing that better aligns with the new realities of how many people are already writing with AI.

    Key Components (the four phases):

    1. Foundational Writing Skills Development: instruction and assessment on key aspects of writing such as sentence structure, paragraph structure, transitional sentences, use of personal voice, researching, outlining, thesis statements, and any other needed writing components. Done through multiple-choice, fill-in-the-blank, and short in-class writing.
    2. Understanding of Different Essay Types: instruction and assessment on key aspects of different essay types, done through multiple-choice, fill-in-the-blank, and short in-class writing.
    3. Prompt Engineering Development: instruction and assessment on prompt engineering using an advanced prompt formula – the ability to create effective prompts for AI to generate good essays with proper formatting, student voice, and accurate information. Evaluated via multiple-choice and fill-in-the-blank tests, plus in-class writing of prompts and additional drafting (a hypothetical prompt-formula sketch follows this list).
    4. Use of AI for Writing with Full Personal Accountability: assessment of specific essay creation via student submission of essays developed through the use and assistance of AI, plus additional in-class exams on key content and periodic student presentations on created essays (to help ensure student accountability for knowledge integration).
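    The post does not spell out the advanced prompt formula taught in Phase 3, so the template below is a hypothetical sketch of the kind of structured formula such instruction might use; every field name is an illustrative assumption.

    ```python
    # Hypothetical prompt formula for Phase 3. The fields are illustrative
    # assumptions, not the actual formula taught in the Anders method.
    PROMPT_FORMULA = (
        "Role: {role}\n"
        "Task: write a {essay_type} essay on {topic}\n"
        "Audience: {audience}\n"
        "Voice: {voice}\n"
        "Structure: thesis statement, {n_paragraphs} body paragraphs with "
        "transitions, and a conclusion\n"
        "Constraints: cite only sources I provide; flag any claim you are "
        "unsure of\n"
    )

    prompt = PROMPT_FORMULA.format(
        role="an undergraduate writing coach",
        essay_type="persuasive",
        topic="reducing campus food waste",
        audience="a first-year composition instructor",
        voice="first person, informal but precise",
        n_paragraphs=3,
    )
    print(prompt)  # the student then evaluates and revises the AI's draft
    ```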

    Key Benefits:

    • Develops students’ foundational knowledge of writing and ability to create multiple essay types
    • Eliminates issues with students inappropriately using AI to write essays without fully understanding writing components
    • Reduces instructors’ stress/anxiety in feeling the need to run AI detection tools (no longer needed)
    • Helps to directly develop students’ understanding of effective writing while simultaneously developing their critical thinking, AI literacy, and ethical AI use skills

    A much more detailed description of this method is available through the Sovorel Center for Teaching & Learning educational YouTube channel.

    For an even more detailed informational article on The Anders 4 Phase AI Method of Writing Instruction, you can go here: https://brentaanders.medium.com/the-new-way-to-teach-writing-1e3b9a14ef64

    Source link

  • How universities can use artificial intelligence to regain social license – Campus Review

    Universities will need to prove to future students why university degrees are worth it in an artificial intelligence (AI) knowledge economy, speakers at Sydney’s latest generative AI meeting said.

    Source link

  • My robot university counsellor – The PIE News

    The PIE’s Director of Research and Insight, Nicholas Cuthbert, tests the limits of virtual counsellor software!

    Can he tell the difference between a human and a machine? The video shows how AI is revolutionising recruitment, with powerful WhatsApp integrations and offer-letter capabilities making lead conversion faster and smoother than ever.

    Source link

  • How can evolving student attitudes inform institutional Gen-AI initiatives?

    This HEPI blog was authored by Isabelle Bristow, Managing Director UK and Europe at Studiosity.

    In a HEPI blog published almost a year ago, Student Voices on AI: Navigating Expectations and Opportunities, I reported the findings of global research Studiosity commissioned with YouGov on students’ attitudes towards artificial intelligence (AI). A year would be a relatively short period in a more regular higher education setting; given the rapid pace of change within the Gen-AI sphere, however, this one year is practically aeons.

    We have recently commissioned a further YouGov survey to explore the motivations, emotions, and needs of over 2,200 students from 151 universities in the UK.

    Below, I will cover the top five takeaways from this new round of research, but first, which students are using AI?

    • 64% of all students have used AI tools to help with assignments or study tasks.
    • International student use (87%) is a staggering 27 percentage points higher than that of their domestic counterparts (60%).
    • There is a 21-percentage-point gap between students identifying as female who say they have never used AI tools for study tasks (42%) and those identifying as male (21%).
    • Only 17% of students studying business said they have never used it, compared with 46% studying Humanities and Social Sciences.
    • The highest reported use is by students studying in London at 78%, and conversely, the highest non-use was reported by students studying in Scotland at 44%.

    The Top Five Takeaways:

    1. There is an 11-percentage-point increase from last year in students thinking that their university is adapting fast enough to provide AI study support tools.

    Following a year of global Gen-AI development and another year for institutions to adapt, students who believe their university is adjusting quickly enough remain in the minority this year at 47%, up from 36% in 2024. The remaining 53% of student respondents believe their institution has more to do.

    When asked if they expect their university to offer AI support tools to students, the result is the same as last year – with 39% of students answering yes to this question. This was significantly higher for male students at 51% (up three percentage points from last year) and for international students at 61% (up four percentage points from last year). Once again, this year, business students have the highest expectations at 58% (just one point higher than last year). Following this, medicine (53%), nursing (48%) and STEM (46%) students were more likely to respond ‘Yes’ when asked if they expect their university to provide AI tools.

    2. Some students have concerns over academic integrity.

    When asked if they felt their university should provide AI tools, students who answered ‘no’ were given a free-text box to explain their reasoning. Most of these responses related to academic integrity.

    ‘I don’t think unis support its use because it helps students plagiarise and cheat.’

    ‘I think AI beats the whole idea of a degree, but it can be used for grammar correction and general fluidity.’

    ‘Because it would be unfair and result in the student not really learning or thinking for themselves.’

    Only 7% of students said they would use an AI tool for help with plagiarism or referencing (‘Ask my lecturer’ was at 30% and ‘Use a 24/7 university online writing feedback tool’ was at 21%).

    3. Students who use AI regularly are less likely to rank ‘fear of failing’ as one of their top three study stresses.

    We asked all students – regardless of their AI use – for their top three reasons for feeling stressed about studying. The responses were as follows:

    • 61% of all UK students included ‘fear of failing’ in their top 3 reasons for feeling stressed about studying;
    • 52% of all students included ‘balancing other commitments’; and
    • 41% of all students included ‘preparing for exams and assessments’.

    These statistics change when we filter by students who use AI tools to help with assignments or study tasks. Fear of failing is still the highest-ranked study stress. The percentage of respondents who rank fear of failing in their top three study stresses by AI use are as follows:

    • 69% for those who never use AI;
    • 62% for those who have used AI once or twice;
    • 58% for those who have used AI a few times; and
    • 50% for those who use AI regularly.

    Looking at the main reasons students want to use the university’s AI service for support or feedback, this year, ‘confidence’ (25%) overtook ‘speed’ (16%). Female respondents, in particular, are using AI for reasons relating to confidence at 29%, compared to 20% for male students. International students valued ‘skills’ the most at 20%, significantly higher than their domestic student counterparts at 11%.

    4. Students who feel like they belong are more likely to use AI.

    We examined the correlation between students’ sense of belonging in their university community, and the amount they use AI tools to help with assignments or study tasks.

    For students who feel like they belong, 67% said they have used AI tools to help with assignments or study tasks; this compares with 47% for students who do not feel like they belong.      

    5. Cognitive offloading (using technology to circumvent the ‘learning element’ of a task) is a top concern of academics and institutional leadership in 2025. However, student responses suggest they feel they are both learning and improving their skills when using generative tools.

    When asked if they were confident they are learning as well as improving their own skills when using generative tools, students responded as follows:

    • 12% were extremely confident that they were learning and developing skills;
    • 31% were very confident;
    • 29% were moderately confident;
    • 26% were slightly confident; and
    • Only 5% were not at all confident that this was true.

    Conclusion:

    Reflecting on the three years since Gen-AI’s disruptive entrance into the mainstream, the sector has now come to terms with the power, potential, and risks of Gen-AI. There is also a significantly better understanding of the importance of ensuring these tools enhance student learning rather than undermining it by offloading cognitive effort.

    Leaders can look to a holistic approach to university-approved, trusted Gen-AI support, to improve student outcomes, experience and wellbeing.

    You can download the full Annual Global Student Wellbeing Survey – UK report here.

    Studiosity is a HEPI Partner. Studiosity is AI-for-Learning, not corrections – to scale student success, empower educators, and improve retention with a proven 4.4x ROI, while ensuring integrity and reducing institutional risk. Studiosity delivers ethical and formative feedback at scale to over 250 institutions worldwide. With unique AI-for-Learning technology, all students can benefit from formative feedback in minutes. From their first draft to just before submission, students receive personalised feedback – including guidance on how they can demonstrably improve their own work and critical thinking skills. Actionable insight is accessible to faculty and leaders, revealing the scale of engagement with support, cohorts requiring intervention, and measurable learning progress.

    Source link