Category: AI

  • What Does AI Readiness Mean for Schools? – The 74




    Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Spotify.

    Michael and Diane sit down with Alex Kotran, founder and CEO of the AI Education Project (AIEDU), to dive into what true “AI readiness” means for today’s students, educators and schools. They explore the difference between basic AI literacy and the broader, more dynamic goal of preparing young people to thrive in a world fundamentally changed by technology. The conversation ranged from the challenges schools face in adapting assessments and teaching practices for the age of AI, to the uncertainties surrounding the future of work. The episode asks key questions about the role of education, the need for adaptable skills, and how we can collectively steer the education system toward a future where all students can benefit from the rise of AI.

    Listen to the episode below. A full transcript follows.

    *Correction: At 17:40, Michael attributes an idea to Andy Rotherham. The idea should have been attributed to Andy Smarick.

    Diane Tavenner: Hey, Michael.

    Michael Horn: Hey, Diane. It is good to see you as always. Looking forward to this conversation today.

    AI Education and Literacy Insights

    Diane Tavenner: Me, too. You know what I’m noticing, first of all, I’m loving that we’re doing a whole season on AI because I felt like the short one was really crowded. And now we get to be very expansive in our exploration, which is fun. And that means we’ve opened ourselves up. And so there’s so much going on behind the scenes of us constantly pinging each other and reading things and sending things and trying to make sense of all the noise. And just this morning, you opened it up super big. And so it works out perfectly with our guest today. So I’m very excited to be here.

    Michael Horn: No, I think that’s right. And we’re having similar feelings as we go through the series. And I’m, I’m really excited for today’s guest and because I think, you know, there are a lot of headlines right now around executive actions with regards to AI or, you know, different countries making quote, unquote, bold moves, whether it’s South Korea or Singapore or China and how much they’re using AI in education or not. We’re going to learn a lot more today, I suspect, from our guest, and he’s going to help put it all in the context, hopefully, because we’ve got Alex Kotran, excuse me, joining us. He’s the founder and CEO of the AI Education Project, or AIEDU. And AIEDU is a nonprofit that is designed to make sure that every single student, not just a select few, understands and can benefit from the rise of artificial intelligence. Alex is working to build a national movement to bring AI literacy and readiness into K12 classrooms, help educators and students explore what AI means for their lives, their work, and their futures.

    And so with all that, I’m really excited because, as I said, I think he’s going to shed a little bit of light on these topics for us today. I’m sure we’re only going to get to scratch the surface with him because he knows so much, but he’s really got his pulse on the currents at play with AI and education, and perhaps he can help us separate some of the hype from reality, or at least the very real questions that we ought to be asking. So, Alex, with all that said, no pressure, but welcome. We’re excited to have you.

    Alex Kotran: I’ll do my best.

    Michael Horn: Sounds good. Well, let’s start maybe with just your personal story, right into this work and what motivates you around this topic in particular, to spend your time on it.

    Alex Kotran: I’ve been in the AI space for about 10 years. But you know, besides being sort of proximate to all these conversations about AI, you know, I don’t have a background in software or computer science. I don’t think I have ever written a line of code. I mean, my dad was a software engineer. He teaches CS now. No background in technology or CS, no background in education. And so I actually, I had funders ask me this when I first launched AIEDU, like, well, like, why are you here? Like, what’s, what’s your role in all of this? You know, my background is really in political organizing. I started my career working on a presidential campaign, went and worked for the White House for the Obama administration, doing outreach for the Affordable Care Act and other stuff like Ebola and Medicare and, and then found myself in D.C.

    and after I just kind of got burned out of politics for reasons people probably don’t need to hear and can completely understand. And so it wasn’t that I was so smart to like, oh, I knew AI was the next thing. I just was like, I really want to move to San Francisco. I visited there, visited the city like twice and just fell in love and sort of fell into tech and an AI company that was working in cleantech. And so I was sort of doing AI work before it was really cool. It was like back in 2015, 2016. And then I ended up getting like what at the time was kind of a really random job, where I had a lot of mentors who were like, I don’t know, Alex, like AI, like this is just like a fringe, you know, emerging technology kind of like, you know, 3D printing and VR and XR and the Metaverse, you know, is that really like what you should do? And I just was like, nah, I just want to learn.

    It seems really interesting. And that’s why I joined this AI company essentially working for the family office for the CEO. It was like, sort of a hybrid family office, corporate job, doing CSR, corporate social responsibility in the legal sector. This is the first company to build AI tools for use in the law. And so I was sort of charged with how do we advance the governance of AI and sort of like the safe and ethical use of AI and the rule of law. And so I basically had a blank canvas and ended up building the world’s first AI literacy program for judges. I worked with the National Judicial College and Stanford and NYU Law, trained thousands of judges around the world in partnership, by the way, with nonprofits like the Future Society and organizations like UNESCO. And because my parents are educators, I, you know, and my parents are foreign immigrants as well.

    And so they always ask me about my job and really trying to convince me to go back, to go to law school or get a PhD or something. And I was like, well, no, but, you know, I actually, I’m, I don’t need to go to law school. I’m actually training judges. Like, they’re, they’re coming to learn from me about this thing called AI. And my mom was like, oh, like, well, that sounds so interesting. You know, have you thought about coming, you should come to my school and teach my kids about AI. And she teaches high school math in Akron, Ohio. And I was just like, surely your kids are learning about AI.

    That’s, you know, my assumption is that we’re at a minimum talking to the future workers about the future of work. I just assume that, you know, like, you know, judges who tend to be older, like, they kind of need to be caught up. And after I started looking around to see, like, is there other curriculum that I could share with my mom’s school, I found that there really wasn’t anything. And that was back in 2019. 2018/2019. So way before ChatGPT and thus AIEDU was born when I realized, OK, this doesn’t exist. This actually seems like a really big problem because even as, even as early as 2018, frankly, as early as 2013, people in the know, technologists, people in Silicon Valley, labor economists, were sounding the alarms, like, AI is, you know, automation is going to replace like tens of millions of jobs.

    This is going to be one of the huge disruptors. You had the World Economic Forum talking about the fourth Industrial Revolution. Really, this wasn’t much of a secret. It was just, you know, like, esoteric and like, you know, in the realm of like certain nerdy wonky circles. And it just, there wasn’t a bridge between those, the people that were meeting at the AI conferences and the people in education. And I would really say, like, our work now is still anchored in this question of, like, how do you make sure that there is a bridge between the cutting edge of technology and the leadership and decision makers who are trying to chart a course not over the next two years, which is sort of like how a lot of, I think Silicon Valley is thinking in the sort of like, very immediate reward system where they’re just, you know, like, they’re, they’re looking at the next fundraise. But in education, you’re thinking about the next 10 years. These are huge tanker ships that we’re trying to navigate now and we’re entering.

    I think this is such a trope, but, like, we are really entering uncharted waters. And so, like, steering that, that supertanker is hard, and I suppose, to really belabor it, maybe AIEDU is sort of like the nimble tugboat, you know, that’s trying to just sort of like, nudge everybody along and sort of like guide folks into the future. And that demands answering some of this core question of the future of work, which hopefully we’ll get some more time to talk about.

    Michael Horn: Yeah, I want to, I want to move there in a moment, but I, but first, like, maybe I don’t know that all of our audience will be caught up with all the, you know, sort of this macro environment, right, where we sit right now in terms of the national policy, executive actions as it pertains to AI and education. They’ve probably heard about it, but don’t know what it actually means, if anything. And so maybe sort of set the scene around where we are today nationally on these actions? What of it is actually meaningful or impactful? What of it is maybe more lip service around the necessity of having the conversation rather than moving the ball? Just sort of set the stage for us, where we are right now.

    Alex Kotran: It’s really hard to say. I mean, there’s been a lot of action at the federal level and at state levels and schools have implemented AI strategies. The education space is inundated with, like, discussion and initiatives and working groups and bills and, you know, like, pushes for, like, AI and education. I think the challenge now is, like, we really haven’t agreed on, like, to what end? Like, is this, you know, are we talking about using AI to advance education as a tool? So, like, can AI allow us to personalize learning and address learning gaps and help teachers save time, or are we talking about the future of work and how do we make sure kids are ready to thrive? And there are some that say, well, they, we just need to get them really good at using tools. Which is a conversation I literally had earlier today where there was like a college-to-career nonprofit and they were like, well, we’re trying to figure out what tools help kids learn because we want them to be able to get jobs.

    I think like AIEDU, like, our work is actually, we don’t build tools. We don’t even have a software engineer on our team, which we’re trying to fix, like, if there’s a funder out there that would like to help fund an engineer, we’d love to have one. But our work is really systems change. Because if you like, zoom out and like, this is, I think, where I do have this skill set. And it’s kind of like, again, it’s a bit niche.

    The education system is not. It’s not one thing. It’s like, it’s sort of like an organism. The same way that like redwood trees are organisms. Like, they’re kind of all connected, the root structure. But it’s actually like you’re looking at a forest that looks very different, you know, that’s not centralized. You know, every state kind of has their own strategy. And frankly, every district, in many cases, you’re talking about, you know, in some cases, like government scale, procurement, discussion, bureaucracy involved.

    Advancing AI Readiness in Education

    Alex Kotran: So if you’re trying to do systems change, this is really a project of like, how do you move a really heterogeneous group of humans and different audiences and stakeholders with different motivations and different priorities? And so our work is all about, OK, like, setting a North Star for everybody, which is like defining where we’re actually trying to go, what. And we use the word AI readiness, not AI literacy. Because what we’re, what we care about is kind of irrespective of whether kids are really good at using AI. Like, are they thriving in the world? And then like, how do you get there? Like, like most of our budget goes to delivering that work, you know, doing actual services, where we’re building the human, basically building the human capital and like, the content. So like training teachers, building curriculum, adapting existing curriculum, more so than building new curriculum, but like integrating learning experiences into core subjects that build the skills that students are going to need. And those skills, by the way, are not just AI literacy, but durable skills like problem solving, communication, and core content knowledge frankly, like being able to read and write and do math, we think is actually really important still, if not more important. And then sort of the third pillar to our work is really catalyzing the ecosystem.

    And because the only way to do this is by building a movement, right? Like, sure, there. There’s an opportunity for someone to build a successful nonprofit that’s delivering services today. But if you actually want to change the world and really solve this problem on the timescale required, you have to somehow rally the entire, there’s like a million K12 nonprofits. We need all of them. This is like an all hands on deck moment. And so our organization is really obsessed with, like, how do we stay small and almost like operate as the Intel Inside to empower, like, the existing nonprofits so that they don’t have to all pivot and, like, become AI because, like, there’s just not enough AI experts to go around. If every school and every nonprofit wanted to hire an AI transformation officer.

    Like, there just wouldn’t be enough people for them to hire.

    Diane Tavenner: Yeah, they’re still trying to hire a good tech lead in schools. We’re definitely not getting an AI expert in every school soon. So you’re, you’re speaking my language, you know, sort of change management, vision, leadership 101, etc. I’m wondering, you know, sort of not necessarily the place we were thinking we’d go in this conversation, but I think it’d be fun to go, like, really deep for a moment that I think is related to your North Star comment. What does school look like in the age of AI? When kids are flourishing, when young people are flourishing, and when they’re successfully launching? I think that’s what the North Star has to describe.

    And you just started naming a whole bunch of things that are still important in school, which feel very familiar to me. They’re all parts of the schools that I’ve built and designed and whatnot. And so I think one of the interesting things is maybe we’ll then build back up to policy and whatnot. But, like, what does it look like if we succeed, if there is this national movement, we’re successful. We have schools or whatever they are that are enabling young people to flourish. What do you think that that looks like?

    Alex Kotran: Yeah, this is the question of our day. Right. I mean, I think this is where, I mean, just to go back to this, like, state of play. I think, like, we’re kind of. It’s very clear that we are in the age of AI, right? This is no longer some future state. And frankly, like, ignore all the talk about AI bubbles because it kind of doesn’t matter. I mean, there was, there was like, there’s always a bubble. There was a bubble when we had railroads.

    There was a bubble when we had, like, in the oil boom. There was a bubble with the Internet. You know, there probably will be some kind of a bubble with AI, but that’s kind of like part and parcel with transformational technologies. Nobody who’s really spent time digging into these technologies believes that there’s not going to be AI sort of totally proliferated throughout our work in society in like, 10 years, which is, again, the timeframe that we’re thinking about. The key question is, though, like, what is it? Like, what does it mean to thrive? And so there’s more than just getting a job. But I think most people would admit that, like, having a job is really important. So maybe we start there and we can also talk about, you know, the, the social, emotional components of just sort of like, being able, being resilient to some of like, the onslaught of synthetic media and like, AI companions and other stuff. One of, if not the most important thing is, like, how do you get a job and like, have like, you know, be able to support yourself and, and that question is really unanswered right now.

    Uncertainty in AI and Future Jobs

    Alex Kotran: And so everybody in the education system is trying to figure out, like, well, what is our strategy? But we don’t know where we’re going? Like, we really do not know what the jobs of the future are. And like, I’ve, like, you hear platitudes like, well, it’s not that AI is going to take your job, it’s that somebody using AI is going to take your job. Which is a kind of a dumb thing to say because it’s, it’s correct. I mean, it’s like, it’s like, basically like, okay, either AI is going to do all the jobs, which I don’t like, like, that actually may happen, some people say, sooner than later. I just assume it’s going to be a long, long time if it ever, if we ever get there. And so until we get there, that means that there are humans doing jobs and AI and technology doing other aspects of work. So, like, what are the humans doing is really the important question. Not just like, are they using AI? But like, how are they using AI? How aren’t they using AI? Until we get more fidelity about what the future of work looks like, what are the skills you should be teaching? Because, like, you know, like, I think a lot about, like, cell phones.

    And you go back to 2005 and you can imagine a conversation where it’s like, and all this is completely true, right? In 2005, it would be correct to say that, you know, you will not be able to get a job if you don’t know how to use a cell phone. You will be using a cell phone every single day, whether you’re a plumber or a mathematician or an engineer or an astrophysicist. And yet I think most of us would agree that, like, we shouldn’t have, like, totally pivoted education to focus on, like, cell phone literacy because, like, nobody’s going to hire you because you know how to use a phone and AI like, probably is going to some degree get there. I mean, it’s already sort of there, right? Like, sure, there are people who will charge you money to teach you prompt engineering, but you could also just open up Gemini and say, help me write a prompt. Here’s what I want to do. And it will basically tell you how to do it.

    Diane Tavenner: I mean, we. You’ve seen this. You might not be old enough to remember this, but I was a teacher when everyone thought it was a really good idea to teach keyboarding in school. It’s like a class. What we discovered is actually if you just have people using technology, they learn how to use the keyboard. Right? Like, it happens in the natural course of things and you don’t have a class for it. So what I hear you saying is like, your approach is not about this sort of, you know, there’s some finite set of information or skill, you know, not even skills in many ways that we’re going to teach kids. But it’s like, what does it look like to have them ready for the world that honestly is here today and then keeps evolving and changing over the next 10 years? And so where to even go with that, Michael, because.

    Michael Horn: I mean, part of me wonders, Alex, like, if I start to name the things that remain relevant, what, like, maybe the conversation to have is like, what’s less relevant in your view, based on what the world of work and society is going to look like?

    What’s the stuff that we do today that you know, will feel quaint? Right, that we should be pruning from?

    Diane Tavenner: Yeah, cursive handwriting. That is still hotly debated by, by the way.

    Alex Kotran: But, you know, although you get like Deerfield Prep and they’re going back to pen and paper.

    Michael Horn: Right. So that, I mean, that’s kind of where I’m curious. Like, what practices would you lean into? What would you pull away from? Because, I mean, that’s part of the debate as well. Like our friend Andy Rotherham, I believe at the time we’re recording it, just had a post around how it’s time for a, you know, a pause on AI in all schools. Right. Not sure that’s possible for a variety of reasons. But, like, what would you pull back on? What would you lean into? What would you stop doing that’s in schools today, as you think about that readiness for the world that will be here in your, we’re all guessing, but 10 years from now.

    Alex Kotran: Now, what to pull back on? I mean, look, take-home essays are dead. Don’t assign take-home essays. Like, the detectors are imperfect. It’s like, and as a teacher, do you really want to be like an, you know, a cyber forensics specialist? Like that’s not the right use of your time. And also you’re using AI. So it’s a bit weird, the dissonance of like, oh, like empowering teachers with AI, but then like, we need to prevent kids from using it. But I think they’re like low-hanging fruit. Like, OK, don’t assign take-home essays.

    The way to abstract that is: students are taking, you can call it cheating, let’s just call it shortcuts. What we do need to do is figure out, OK, how can AI, how is AI being used as a shortcut? And whether you ban it in schools, kids are going to use it out of school. And so teachers need to figure out how to create assessments and homework and projects that are designed such that you can’t just use AI as a shortcut. And there’s like, this is a whole separate conversation. But just like to give one example, having students demonstrate learning by coming into the class and presenting and importantly having to answer questions in real time about a topic. You can use all the AI you want, but if you’re going to be on the spot and you don’t understand whatever the thing is that you’re presenting about and you’re being asked questions like, you know, that’s the kind of thing where sure, use all the AI. If it’s helpful, you might just.

    But ultimately you just need to learn the thing. But like the more important question is like, I don’t know if school changes as much as people might think. I think it does change. I think there’s a lot that we know needs to change that is kind of irrespective of AI. Like we need learning to be more engaging. We need more project based learning. We need to shift away from just sort of like pure content knowledge, memorization. But that’s not necessarily new or novel because of AI.

    I think it is more urgent than ever before.

    Michael Horn: I’m curious, like what’s. Because I do think this is also hotly debated, right? Like in terms of the role of knowledge and being able to develop skills and things of that nature. And so I’m just sort of curious, like what’s the thin layer of knowledge you think we need to have? Or, or like Steven Pinker’s phrase, common knowledge, right?

    And what’s the stuff we don’t have? Like we don’t have to memorize state capitals, right? Maybe.

    Diane Tavenner: No. Yeah, I don’t think we need to memorize the state capital, because, yeah, but keep going.

    Michael Horn: Yeah, yeah, I’m curious now. It’s like, right, like as we think about, because we do have this powerful assistant serving us now and we think about what that means for work. And I, but I guess I’m just curious, like, what does that really mean in terms of that balance, right? Like, is all knowledge learned through the project, or this, you know, how do we think about, you know, and it’s a lot of just-in-time learning perhaps, which is more motivating. I’m curious, like, how you think about that.

    Alex Kotran: I think this needs to be like, backed by, like research, right? Like, sure, it probably is, right, that you don’t need to memorize all the state capitals. But then I think you, you start to get to a place where like, OK, well, but do you even need to learn math? Because AI is really good at math and I think math is actually a good analog because I don’t really use math very much or I use relatively simplistic math day to day. I, I think it was really valuable for me to like, have spent the time building computational thinking skills and logic. And also just math was really hard for me and it was challenging. And like the process of learning a new abstract, hard thing. I do use that skill, even some of the rote memorization stuff. You know, my brother went to med school and like they spent a lot of time just memorizing like completely just like every tiny aspect of the human body.

    They like have to learn it. It’s actually like, I think doctors are really interesting, a great way to kind of double click on this because if doctors don’t go through all of that and don’t understand the body and go through all of the rote process of literally taking like thousand question tests where they have to know like random things about blood vessels. And even if they’re never going to deal with that specific aspect of the human body, doctors kind of like build this sort of like generalized set of knowledge and then also they spend all this time like interacting with real world cases. And you, you start to build instincts based on that and, and you talk to hospitals about like, oh, what about, you know, AI to help with diagnosis? And one of the things I hear a lot of is, well, we’re worried about doctors losing the capacity to be a check on the AI because ultimately we hear a lot about the human in the loop. The human in the loop is only relevant if they understand the thing that they’re looped into. So, yeah, so like, I don’t know, I mean, maybe we.

    Diane Tavenner: Yeah, you’re onto something. You’re spurring something for me that I, I actually think is the new thing to do and haven’t been doing and aren’t talking about. And that is this, let me see if I can describe it as I’m understanding it, unfold the way you’re talking about it. So I had a reaction to the idea of memorizing the state capitals because memorizing them is pretty old school, right? It calls back to a time where you aren’t going to be able to go get your encyclopedia off the shelf and look up the capitals. Like you have to have that working knowledge in your mind, if you will, to have any sense of geography and, you know, whatever you might be doing. And it was pretty binary.

    Like it really wasn’t easy to access knowledge like that. So you really did have to like memorize these things. Math, multiplication tables get cited often and whatnot for fluency in thinking and whatnot. So I don’t think that goes away. But it’s different because we have such easy access to AI and so there isn’t this like dependency on, you’re the only source of that knowledge, otherwise you’re not going to be able to go get it. But it doesn’t take away the need to have that working understanding of the world and so many things in order to do the heavier lifting thinking that we’re talking about and the big skills. And I think that, I don’t think there’s a lot of research on that in between pieces, like, how do you teach for that level of knowledge acquisition and internalization and whatnot? And how do you then have a, you know, a more seamless integration with the use of that knowledge in the age of AI when it’s so easily accessible? So that feels like a really interesting frontier to me. That doesn’t look exactly the same as what we’ve been doing, but isn’t totally in a different world either.

    It is, rather, responsive to and reflective of the technology we have and how it will get used now.

    Rethinking Assessments and Learning Strategies

    Alex Kotran: Yeah, it’s, it’s a helpful push because like, what I’m not saying is that everything in school is fine. I don’t think I’ve ever talked to a superintendent who would say, oh, I’m feeling good about our assessment strategy. Like, we’ve known that and because really what you’re describing is assessments like what, like what are we assessing in terms of knowledge, which becomes the driver and incentive structure for teachers to like, you know, because to your point. Are you spending five weeks just memorizing capitals or are you spending two weeks and then also then saying, OK, now that you’ve learned that, I want you to actually apply that knowledge and like come up with a political campaign for governor of, you know, a state that you learned about and like, tell us about like why you’re going to be picking those. You know, tell us about your campaign platform. Right. And you know, like, how is it connected to what you learned about the geography of that state? So it’s like adapting, integrating project based learning and more engaging and relevant learning experiences. And then like the mix and the balance of what, what’s happening in the classroom is sort of, and this is the, the challenging thing because it’s like the assessments will inform that, but it’s also that the assessments are downstream of, sort of like, it’s not just about getting the assessments right, but it’s like, why are we assessing these things? And so you very quickly get to, like, well, like, what is the future of work? And because like, yeah, I mean like, you probably don’t need to learn the Dewey Decimal system anymore.

    Even though being able to navigate knowledge is maybe one of the most important things, certainly something I use every day.

    Diane Tavenner: One of the things we tend to do in US education, Alex, is be so US-centric and we forget that other people on the planet might be grappling with some of these things. I know you track a lot of what happens around the globe. What can we look at as models or interesting, you know, experiments or explorations? Everything from like big system change work, which I know we have different systems across the world, so that’s different. It’s a little bit, it’s not groundswell, it’s top down, but like anything from policy, big system all the way down to like who, who might be doing interesting things in the classroom. Where are you looking for inspiration or models across the globe?

    Alex Kotran: I mean, South Korea is a really interesting case study. You mentioned South Korea, I think, at the beginning of this, during the intro. They were just in headlines because they had done this big push to, like, roll out personalized learning nationwide. And then they announced that they were rolling back or sort of slowing down or pausing on the strategy. I forget if it was a rollback or a pause, but they’re basically like, wait, this isn’t working. And what they found is that they hadn’t made the requisite investment in teacher capacity. And that was clear.

    And so part of the reason I’m tracking that is because I don’t know that there’s very much for us to learn from what any school is doing right now, beyond, like, there’s a lot for us to learn in the sense of like, how can we empower teacher, like, how do we empower teachers to run with this stuff? Because they are doing that. You know, like, I think there’s a lot to learn from a, like a mechanical standpoint of like, implementation strategies. But I don’t know that anybody has figured this out because like, nobody can yet describe what the future of work looks like. And I know this because the AI companies can’t even describe what the future of work looks like. You know, you had like Dario Amodei at Anthropic seven months ago, saying in six months, 90% of code is going to be written by AI, which is not the case. Not even close.

    Diane Tavenner: And Amazon’s going to lay off 30,000 white collar workers this week,

    Alex Kotran: Which they did. Yes. And so you have. But is that really because of AI or is that because of overhiring from interest rates? I mean there’s like, so, so until we answer this question of like, what is like. And really the way to say what is the future of work is like, to put it in educational terms, how are you going to add value to the labor market? Like, David Autor has this like, example which I think is really important. It’s like, you know, the crosswalk coordinator versus the air traffic controller. And then, like, we pay the air traffic controller four times as much because any one of us could go, be a crosswalk coordinator like today, just give us a vest and a stop sign. I don’t, I assume you’re not moonlighting as an air traffic controller. I’m certainly not.

    It would take us, I think, I don’t know what the process is, but I think years to acquire the expertise. And so there is this barrier of expertise to do certain things. And what AI will do is lower the barriers to entry for certain types of expertise, things like writing, things like math. And so in those environments where AI is increasingly going to be automating certain types of expertise, then, well, for people to still get wages that are good or to be employed, they have to be adding something additional. And so the question of like, what are the humans adding? Again, we get to stuff like durable skills. We get to stuff like a human in the loop. But I think it’s much more nuanced than that. And the reason I know that is because there’s the MIT study.

    I think it was a survey, but let’s call it a study. I think they called it a study. So there’s a study from MIT that found that 95% of businesses’ AI implementations failed, have not been successful. So really what we’re seeing is, yes, AI is blowing up, but for the most part, most organizations have not actually cracked the code on like, how to like, unlock productivity and like. And so I think that there’s actually quite a lot of business change management and organizational change that’s coming. And so actually kind of trying to hone in on what does that look like, I think is maybe the key, because that will take 10 years if you look at computers. Computers, like, could have revolutionized businesses long before they ended up getting adopted. I mean, it took like decades actually for, you know, spreadsheets and things like that to become ubiquitous.

    And like Excel is a great example of something. I was just talking to this, this expert from the mobile industry who was talking about, like, the interesting thing about spreadsheets was it didn’t just automate because there were people who literally would hand write, you know, ledgers before Excel. And so obviously that work got automated. But the other thing that spreadsheets did was they created a new category of work, which is like the business analysts, because before spreadsheets, really the only way to get that information was to, like, call somebody and sort of like compile it manually. And now you had a new way to look at information which actually unlocked a new sort of function that didn’t exist. And that meant, like, businesses now have teams of people that are like, doing layers of analysis that they didn’t realize that they could do before. And so

    Diane Tavenner: I wonder, what you’re saying is sparking two things for me. And again, we could talk probably all day, but we don’t have all day. So sadly, I think this might be bringing us to a close here for the moment. But I’m curious what both of you think on this because you brought up air traffic controllers. And in my new life and work, I’m very obsessed with careers and how people get into them and whatnot. I’ve done deep dives on air traffic controllers. And it’s, my macro point here is going to be.

    I do wonder if this moment of AI is also just extremely exposing existing challenges and problems and bringing them to the forefront. Because let me be clear, training air traffic controllers in the US was a massive problem before AI came around, before any of this happened. It’s a really messed up system. It is so constrained. It’s not set up for success. Like, it’s just such a disaster and a mess and it’s such a critical role that we have. And it’s probably going to change with AI. Like, so you’ve just got all these things going on.

    And I’m wondering, Michael, from your perspective, is that what happens in these, you know, moments of disruption and is that all predictable and how do we get out of it? And then, Alex, you’re talking about. I was having a conversation this morning about this idea that all these companies no longer are hiring sort of those entry level analysts, or they’re hiring far fewer of them. And my wondering is no one can seem to answer this question yet. Great. Where’s your manager coming from? Because if you don’t employ any people at that level and they haven’t sort of learned the business and learned things, what do you think they’re just sitting on the sidelines for seven, eight years and then they’re ready to slide in there into, you know, the roles that you are keeping? And so are these just problems that already existed that are now just being exposed, you know, what’s going on? What do you all think?

    Job Market Trends and AI

    Alex Kotran: So, first of all, we really don’t know if the, like, I’m not convinced that the reason that there’s high unemployment among college grads is because of AI. I mean, I think there was overhiring because of interest, low interest rates. I think that companies are trying to free up cash flow to pay for the inference costs of these tools. And, and I think in general, like, you know, we’re, there’s going to be like, sort of like boom, bust cycles in terms of hiring in general. And we’ve been in a really good period of high employment for a long time. I think what, what is clear is if you talk to like earlier stage companies, you know, I was talking to a friend of mine at Cursor, which is like one of the big vibe coding companies, like blowing up, worth lots and lots of money. And I asked them about, like, oh, like I keep hearing about like, you know, companies aren’t hiring entry level engineers anymore because like, you’re better off having someone with experience.

    And he’s like, all of our engineers are in like their early 20s. Huh. OK, that’s interesting. Well, yeah, because actually it’s a lot faster and easier to train somebody who’s an AI native who learned software engineering while vibe coding. But he’s like, but we’re a small organization that’s like basically building out our structure as we go so we don’t have to like operate within sort of like the confines. I think there’s going to be this idea of like incumbent organizations. They have the existing hierarchy because ultimately you’re looking for people who are like really fast learners who can like learn new technology, who are adaptable and who are good at like doing hard stuff. If you’re a small organization, you’re probably better off just like hiring young people that like, you know, have those instincts.

    If you’re a large organization, what you might do is just maybe you’re laying off some of the really slow movers and then retaining and promoting the people that are already in place and have those characteristics. And then your point about like training the next generation, like law firms are thinking about this a lot because like you could, maybe you could automate all the entry level associates, but you do need a pipeline. But then you get to, do you need middle managers? I mean like if the business models are less hierarchical because you just don’t need all those layers, then maybe you don’t worry so much about whether you need middle management and it’s more about do you need more. I think what companies are going to realize is they actually need more systems thinkers and technology-native employees that are integrated into other verticals of knowledge work outside of tech. So like, if you think about marketing and like business and customer success and you know, like nonprofit world fundraising and policy analysts, like all of these teams that generally have like people from the humanities. You know, I think companies are going to say, OK, how do we actually get people that like can do some vibe coding and have a little bit of like CS chops to build out some, you know, much more efficient and productive ways for these teams to operate. But like nobody knows. Nobody knows.

    I don’t know. Michael?

    Michael Horn: I love this point, Alex, where you’re ending and that like, and I like the humility frankly in a lot of the guests that we’ve had around. This is like the honesty that we’re all guessing a little bit at this future and we’re looking at different signals right. As we do. I think my quick take off this and I’ll try to give my version of it, I guess is you mentioned David Autor earlier at MIT, Alex. Right. And part of his contention is that actually, right, it levels expertise between jobs that we’ve paid a lot for and jobs that we haven’t and more people like, as opposed to technology that is increasing inequality. This may be a technology that actually decreases inequality. And I guess it goes to my second thing, Diane, around the question you asked, and air traffic control training is a great example.

    But like, fundamentally, the organizations and processes we have in place have a very scarcity mindset. And I suspect they’re going to fight change and we’re going to need new disruptive organizations, similar to what Alex was just saying, that look very different to come in. And it gets to a little bit of, I think what everyone says with technology, like the short term predictions are huge. They tend to disappoint on that. The long term change is bigger than we can imagine. And I guess I kind of wonder is the long term change what we. Alex, earlier on this season we had Reed Hastings and you know, he has a very abundant sort of society mindset where the robots plus AI plus probably quantum computing, like, are doing a lot of the things, or is it frankly sort of what you or I think Paul LeBlanc would argue, which is that a lot of these things that require trust and we want people like, yes, you can build an AI that does fundraising for you. But like, do I really trust both sides of that equation? I’d rather interact with someone.

    Right. There’s a lot of social capital that sort of greases these wheels ultimately in society. And I guess that’s a bit of the question. And Diane, I guess part of me thinks, you know, Carlota Perez, who’s written about technology revolutions, right. She says that there will be some very uncomfortable parts of this, right. And a bit of upheaval. Part of me keeps wondering if we can grease the wheels for new orgs to come in organically, can we avoid some of that upheaval because they’ll actually more naturally move to paying people for these jobs in a more organic way.

    And I, right now we have a, I’m not sure we have that mindset in place. That’s a bit of my question.

    Diane Tavenner: More questions than answers. More questions than answers. Really. This has been, wow, really provocative.

    Michael Horn: Yeah. So let’s, let’s, let’s leave. We could go on for a while. Let’s leave the conversation here for the moment. Alex, a segment we have on the show as we wrap up always is things we’re reading, watching, listening to either inside work or we try to be outside of work. You know, podcasts, TV shows, movies, books, whatever it might be. What’s on your night table or in your ear or in front of your eyes right now that you might share with us?

    Alex Kotran: I’m reading a book about salt. It’s called Salt.

    Michael Horn: This came out a few years ago. Yeah. Yeah. My wife read it.

    Alex Kotran: Yeah, I’m actually reading it for the second time. But it is, you know, it’s interesting because we. It’s something that’s, like, now you take for granted. But, you know, there was a time when, you know, wars were fought. You know, it sort of spurred entire new sorts of technologies around. Like, the Erie Canal was basically, you know, like, salt was a big component of, you know, why we even built the Erie Canal. It’s actually nicknamed the ditch that salt built, you know, spurring new mining techniques.

    Technology’s Interconnected Conversation

    Alex Kotran: And, you know, I just find it fascinating that, like, you know, there are these, like, technology is so interconnected. Not to bring it back, I know this is supposed to be outside, but all I read, I only read nonfiction, so it’s going to be connected in some way. I’m just, like, fascinated by, like, you know, there are these sort of, like, layers behind the scenes that we sometimes take for granted that, you know, can actually be, like, you know, quietly, you know, monumental. I think what’s cool about this moment with technology is it’s like everybody’s a part of this conversation. Like, before, it was, like, much more cloistered. And so I think that’s just, like, good. Even though, yes, there’s a lot of noise and hype and, you know, snake oil and all that stuff, but I think in general, like, we are better off by, like, having folks like you, like, asking people, like, you know, driving conversation about this and not just leaving it to a small group of experts to dictate.

    Diane Tavenner: So I think this is cheating, but I’ve done this one before. But I’m gonna cheat anyway because, as you know, Michael, because you hear me talk about it a lot. The one news source I religiously read is called Tangle News. It’s a newsletter now and a podcast. It’s grown like crazy since I first started listening. I love it. It’s like a startup.

    It started, I think when I started reading, it was like, under 50,000 subscribers or something. Now it’s up to half a million. The executive editor is Isaac Saul, who, I’m going to say this about a news person, I trust, which I think is just a miracle. And I’m bringing it up this week because he wrote a piece last Friday that, honestly, I had to break over a couple days because it was really brutal to read. That’s just a very honest accounting of where we are in this moment. The best piece I’ve read or heard about it. And then on Monday, he did another piece where, you know, they do what’s the left saying? What’s the right saying? What’s his take? You know, what are dissenting opinions? I just love the format. I love what they’re doing.

    I was getting ready to write them a thank you note slash love letter, which I do periodically. And I thought I’d just say it on here.

    Michael Horn: I was gonna say now you can just excerpt this and send them a video clip.

    Diane Tavenner: So I hope, I hope people will check it out. I love, love, love the work they’re doing, and I think you will too.

    Michael Horn: I’m gonna go historical fiction. Diane, I’m like, surprising you multiple weeks in a row here, I think. Right? Yeah. Because, Alex, I’m like you. I’m normally just nonfiction all the time, but I don’t know. Tracy said you have to read this book, Brother’s Keeper by Julie Lee.

    It’s historical fiction based on a family’s migration from North Korea to South Korea during the Korean War. It is a tearjerker. I was crying, like, literally sobbing as I was reading last night. And Tracy was like, you OK? And I was like, I think I won’t get through the book. But I did, and it’s fantastic.

    So we’ll leave it there. But, Alex, huge thanks. You spurred a great conversation. Looking forward to picking up a bunch of these strands as we continue. And for all you listening again, keep the comments, questions coming. It’s spurring us to think through different aspects of this and invite other guests who have good answers or at least the right questions and signals we ought to be paying attention to. So we’ll see you next time on Class Disrupted.




  • Experts react to artificial intelligence plan – Campus Review


    Australia’s first national plan for artificial intelligence aims to upskill workers to boost productivity, but will leave the tech largely unregulated and without its own legislation to operate under.



  • High quality learning means developing and upskilling educators on the pedagogy of AI


    There’s been endless discussion about what students do with generative AI tools, and what constitutes legitimate use of AI in assessment, but as the technology continues to improve there’s a whole conversation to be had about what educators do with AI tools.

    We’re using the term “educators” to encompass both the academics leading modules and programmes and the professionals who support, enable and contribute to learning and teaching and student support.

    Realising the potential of the technologies that an institution invests in to support student success requires educators to be willing and able to deploy it in ways that are appropriate for their context. It requires them to be active and creative users of that technology, not simply following a process or showing compliance with a policy.

    So it was a bit worrying when, in the course of exploring what effective preparation for digital learning futures could look like for our Capability for change report last year, we noticed how concerned digital and education leaders were about the variable digital capabilities of their staff.

    Where technology meets pedagogy

    Inevitably, when it comes to AI, some HE staff are enthusiastic early adopters and innovators; others are more cautious or less confident – and some are highly critical and/or just want it to go away. Some of this is about personal orientation towards particular technologies – there is a lively and important critical debate about how society comes into a relationship with AI technology and the implications for, well, the future of humanity.

    Some of it is about the realities of the pressures that educators are under, and the lack of available time and headspace to engage with developmental activity. As one education leader put it:

    Sometimes staff, they know that they need to change what they’re doing, but they get caught in the academic cycle. So every year it’s back to teaching again, really, really large groups of students; they haven’t had the time to go and think about how to do things differently.

    But there’s also an institutional strategic challenge here about situating AI within the pedagogic environment – recognising that students will not only be using it habitually in their work and learning, but that they will expect to graduate with a level of competence in it in anticipation of using AI in the workplace. There’s an efficiency question about how using AI can reprofile educator working patterns and workflows. Even if the prospect of “freeing up” lots of time might feel a bit remote right now, educators are clearly going to be using AI in interesting ways to make some of their work a bit more efficient, to surface insight from large datasets that might not otherwise be accessible, or as a co-creator to help enhance their thinking and practice.

    In the context of learning and teaching, educators need to be ready to go beyond asking “how do the tools work and what can I do with them?” and be prepared to ask and answer a larger question: “what does it mean for academic quality and pedagogy when I do?”

    As Tom Chatfield has persuasively argued in his recent white paper on AI and the future of pedagogy, AI needs to have a clear educative purpose when it is deployed in learning and teaching, and should be about actively enhancing pedagogy. Reaching this halcyon state requires educators who are not only competent in the technical use of the tools that are available but prepared to work creatively to embed those tools to achieve particular learning objectives within the wider framework and structures of their academic discipline. Expertise of this nature is not cheaply won – it takes time and resource to think, experiment, test, and refine.

    Educators have the power – and responsibility – to work out how best to harness AI in learning and teaching in their disciplines, but education leaders need to create the right environment for innovation to flourish. As one leader put it:

    How do we create an environment where we’re allowing people to feel like they are the arbiters of their own day to day, that they’ve got more time, that they’re able to do the things that they want to do?…So that’s really an excitement for me. I think there’s real opportunity in digital to enable those things.

    Introducing “Educating the AI generation”

    For our new project “Educating the AI generation” we want to explore how institutions are developing educator AI literacy and practice – what frameworks, interventions, and provisions are helpful and effective, and where the barriers and challenges lie. What sort of environment helps educators to develop not just the capability, but also the motivation and opportunity to become skilled and critical users of AI in learning and teaching? And what does that teach us about how the role of educators might change as the higher education learning environment evolves?

    At the discussion session Rachel co-hosted alongside Kortext advisor Janice Kay at the Festival of Higher Education earlier this month there was a strong sense among attendees that educating the AI generation requires universities to take action on multiple fronts simultaneously if they are to keep up with the pace of change in AI technology.

    Achieving this kind of agility means making space for risk-taking, and moving away from compliance-focused language to a more collaborative and exploratory approach, including with students, who are equally finding their feet with AI. For leaders, that could mean offering both reassurance that this approach is welcomed, and fostering spaces in which it can be deployed.

    In a time of such fast-paced change, staying grounded in concepts of what it means to be a professional educator can help manage the potential sense of threat from AI in learning and teaching. Discussions focused on the “how” of effective use of AI, and the ways it can support student learning and educator practice, are always grounded in core knowledge of pedagogy and education.

    On AI in assessment, it was instructive to hear student participants share a desire to be able to demonstrate learning and skills above and beyond what is captured in traditional assessment, and find different, authentic ways to engage with knowledge. Assessment is always a bit of a flashpoint in pedagogy, especially in constructing students’ understanding of their learning, and there is an open question on how AI technology can support educators in assessment design and execution. More prosaically, the risks to traditional assessment from large language models indicate that staff may need to spend proportionally more of their time on managing assessment going forward.

    Participants drew upon the experiences of the Covid pivot to emergency remote teaching and taking the best lessons from trialling new ways of learning and teaching as a useful reminder that the sector can pivot quickly – and well – when required. Yet the feeling that AI is often something of a “talking point” rather than an “action point” led some to suggest that there may not yet be a sufficiently pressing sense of urgency to kickstart change in practice.

    What is clear about the present moment is that the sector will make the most progress on these questions when there is sharing of thinking and practice and co-development of approaches. Over the next six months we’ll be building up our insight and we’d love to hear your views on what works to support educator development of AI in pedagogy. We’re not expecting any silver bullets, but if you have an example of practice to share, please get in touch.

    This article is published in association with Kortext. Join Debbie, Rachel and a host of other speakers at Kortext LIVE on Wednesday 11 February in London, where we’ll be discussing some of our findings – find out more and book your place here.

    Source link

  • The end of pretend – AI and the case for universities of formation

    The end of pretend – AI and the case for universities of formation

    I loved magic as a kid. Card tricks, disappearing coins, little felt rabbits in pretend top hats. “Now anyone can be a magician,” proclaimed the advert in the Argos catalogue. Ta da. Now that’s magic.

    I’d make pretend tickets, rearrange the seating in the front room, and perform shows for the family – slowly learning the dark arts of misdirection and manipulation along the way. When I performed, I generated pride.

    Over time I found that some of those skills could be used to influence people more generally – to make them feel better about themselves, to change their decisions, to trigger some kind of behaviour.

    Sometimes, I’d rationalise, as long as I was doing it for the right reasons, it was better if they didn’t know it was a trick. The end justified the means. Or did it?

    People love it when they know that magic is being performed as magic – the willing suspension of disbelief, the pleasure of being fooled by someone who’s earned the right to fool us. When they give permission to be illegitimately impressed, all is fine.

    But what they can’t stand is being lied to. We don’t like being deceived. Most political news in this country centres on who lied and about what. We’re obsessed with it.

    The cover-up is always worse than the crime, yet everyone still does it – they have to, they rationalise, to keep up, or to get permission. The gap between how things are and how we present them is the game.

    Once they’re in, it won’t matter that the sector painted an unattainable picture of student life for applicants. Once the funding is secured, universities can fess up later that it isn’t as good as government thought it would be. Once the rules are published, better to ask for forgiveness over the impact on net migration – not permission.

    I think a lot about that little magic set I got, because so much of what AI does still sits for me in that “magic trick” space.

    Ta da. Look what it can do. Generate an essay, write a play, create some code, produce an image of the Pope in a puffer jacket. But the line between magic and lies is a slippery slope, because its number one use case is pretence.

    AI is used to lie – fake essays, fake expertise, fake competence. But mostly to make us look better, appear faster and seem wiser. The anxiety about being “found out” is the anxiety of the liar, not the audience at a magic show. Students worry they’ll be caught. Universities worry their degrees will be worthless.

    Everyone worries that the whole edifice of qualifications and signals and “I know something you don’t” will collapse under the weight of its own pretending. But the pretending was already there – AI just makes the tricks cheaper, and much harder to sustain.

    When I look back upon my life

    I’ve been in a particularly reflective mood recently – I turned 50 at the weekend (I can’t believe it either, it’s the moisturiser) and there’s something I can’t shake. When I look back upon my life, it’s always with a sense of shame.

    When I got accepted to the University of the West of England in the mid-nineties, grandparents on both sides were thrilled that I had “got into Bristol”. A few extra Bonusprint copies of the sunken lawn at the St Matthias campus helped.

    It hadn’t started as a deliberate lie – more a misunderstanding about where we had driven to on open days – but instead of correcting it, I doubled down.

    Nobody in my family had been to university, and I doubt they would have discerned the difference. But on some level I thought I had to prove that their financial support was for something rare. Something… special.

    Decades later I realised that the entire edifice of higher education runs on the same kind of slippage – the gap between what universities actually do, and the status they are assumed to have and confer.

    Applicants and their families celebrate “getting in” as if admission itself were the achievement. Parents frame graduation photos, the ceremony mattering more than the three years that preceded it. Employers use degree classifications as sorting mechanisms while moaning that the sort has not delivered the graduates they wanted. There’s a graduate premium. And so on.

    Those of us who write about higher education are no better. Our business model rests on “I know something you do not” – the insider knowledge, the things you haven’t noticed, the analysis you can’t get elsewhere. Scarcity of information, monetised. I’ve built a career on being the person in the room who has read the regulatory guidance.

    But now, suddenly, a machine can summarise the guidance in seconds. Not as well as I can – not yet, not always – but well enough to make me wonder what I am actually for. What value I bring. How good I am at… pretending.

    AI doesn’t create that anxiety. It exposes something that was always there – the fear that our value was never in what we knew, but in other people not knowing it. And that eventually, someone might find that out.

    It’s always with a sense of shame

    Back in 1995 my first (handwritten) university essay was about the way the internet lets you become someone you are not. Chatrooms were new and identity was suddenly fluid. You could lie about everything – your age, your appearance, your expertise – and checking was hard.

    The internet has been flooded with exaggeration ever since. Wish.com tat that looks nothing like the picture. LinkedIn profiles that bear no relationship to actual jobs. Influencers selling lives they don’t live in places that barely exist.

    But it has also liberated us. At UWE, I lived through the transition from index cards in libraries to DogPile, asking Jeeves and Google. The skill of navigating a card catalogue, of knowing which reference books to check – it felt essential, and then it was worthless. For one semester, we were told we weren’t allowed to use search engines. The faculty held on for a while, then let go.

    In my first year, I chose a module involving audio editing on reel-to-reel tape. Splicing, cutting, winding, knives. At the end of the year, I got a job helping to put the equipment in a skip. The skills I’d learned were obsolete before I graduated.

    Each time, there was a period of pretending that the old skills still mattered. Each time, the system eventually admitted they didn’t. Each time, something was revealed about what had actually been valuable all along. The card catalogue wasn’t the point – finding and evaluating information was. The handwriting wasn’t the point – thinking under pressure was. The reel-to-reel wasn’t the point – understanding how to shape a story with sound was.

    Now the sector clings on to exams, essays, and the whole apparatus of assessment that assumes that producing a thing proves you learned something. The system holds on – but for what?

    I’ve always been the one to blame

    If I rummage through the AI pitches that land at [email protected], I can see a familiar pattern.

    There are catastrophists. Students are cheating on an industrial scale. The essay is dead. Standards are collapsing and students are cognitively offloading while the great plagiarism machine works its magic.

    There are tech evangelists. Productivity gains, personalised learning, democratised access and emancipation – just so long as you don’t ask who is selling the tools, who is buying the data, or what happens to students who can’t afford the premium tier.

    Then there is the centrist-Dad middle. “It is neither all good nor all bad” – balance, nuance, thoughtful engagement, and very little about what any of this is actually for.

    The catastrophists are wrong because they assume what’s being bypassed was valuable – that the essay-writing, the exam-sitting, the problem-set-completing were the point rather than proxies for something else. If the activities can be replaced by a machine, what were they measuring?

    The evangelists are wrong because they assume more efficiency is always better – that if AI frees us from X, we’ll have more time to do Y. But they never say what Y is. Or whose time it becomes. In practice, we know – the efficiency dividend flows upward, and never shows up as an afternoon off.

    The balanced view is just as bad, because it pretends there’s no choice to be made. It lets us sound reasonable while avoiding the harder question – what is higher education for?

    At the high risk of becoming one of those bores at a conference whose “question” is a speech about that very issue, I do think there is a choice to be made. We ought at least to ask if universities exist to sort and qualify, or to form and transform. AI forces the question.

    For everything I long to do

    Let’s first admit a secret that would get me thrown out of the Magic Circle. The industrial model of education was built on scarcity, and scarcity made a certain kind of pretending possible.

    Information was scarce – held in libraries, transmitted by experts, accessible only to those who got through the door. A degree meant three years in proximity to information others could not reach.

    Attention was scarce – one lecturer, two hundred students, maybe a weekly seminar. The economics of mass higher education turned teaching into broadcast, not dialogue, but the scarcity, coupled with outcomes stats from the past, still conferred value.

    Feedback was scarce – assignments returned weeks later with a grade and a short paragraph. The delay and brevity made the judgement feel weighty, even oracular.

    In a scarcity system, hoarding makes sense. Knowledge is power precisely because others don’t have it. “I know something you do not” isn’t a bug – it’s the business model. But once something isn’t scarce any more, we have to search again for value.

    We’ve been here before. Calculators didn’t destroy maths – they revealed that arithmetic wasn’t the point. Google didn’t destroy research – it revealed that finding information wasn’t really the hard bit. Each time the anxiety was the same – students will cheat, standards will collapse, the thing we valued will be lost. Each time the pretending got harder to sustain.

    For me AI fits the pattern. Not because it knows everything – it obviously doesn’t. Its confident wrongness is one of its most dangerous features. But it makes a certain kind of information effectively free. Facts, frameworks, standard analyses are now available to anyone with an internet connection and the wit to ask.

    And it hurts to carefully build and defend systems that confer status on things humans can do – only to have something come along and relieve humans from having to do them. It causes a confrontation – with value.

    No matter when or where or who

    During the early days of Covid, I came across a Harvard Business School theory called Jobs To Be Done. People pay to get a job done, but organisations often misunderstand the real job they’re being paid to perform.

    As a kid, the Sinclair ZX Spectrum in our house was marketed as an educational tool – an invitation to become a programmer. Some did. Most, like me, worked out how to make the screen say rude words and then played games.

    Students have at least two jobs they want done. One is access to well-paid and meaningful work, made possible through obtaining a degree and supplied by academic programmes. The second is coming of age – the intoxicating combination of growing up and lifestyle. Becoming someone. Finding your people. Working out who you are when you’re not defined by your parents or your school.

    Universities have always provided both, but only dare attribute value to the first. The second is treated as incidental – “the student experience”, something that happens around the edges. But for many students, perhaps most, the second job is why they came. The qualification is the price of admission to three years of transformation.

    AI increasingly handles the first job – the information, the credentials, the sorting – more efficiently than universities ever could. If that were all universities offered, they’d already be obsolete. What AI can’t provide is the second job. It can’t help us become someone. It can’t introduce us to people who will change our lives. It can’t hold us accountable, or surprise us, or make us brave.

    During Covid, I argued that universities should cancel as much face-to-face teaching as possible – because it wasn’t working anyway – but keep campuses open. Not for teaching – for being. For studying together, bonding, bridging, belonging.

    I’ve not changed my view. AI just makes it more urgent. If the content delivery can be automated, the campus has to be for something else. That something else is formation.

    Has one thing in common, too

    A couple of years ago I came across Thomas Basbøll, resident writing consultant at Copenhagen Business School Library. He argues that when a human performs a cognitively sophisticated task – writes a compelling essay, analyses a complex case, synthesises disparate sources – we infer underlying competence. The performance becomes evidence of something deeper.

    When a machine performs the same task, we can’t make the inference. The machine has processes that produce outputs. It doesn’t “know” anything – it predicts tokens. The output might resemble what a knowledgeable human would produce, but it proves nothing about understanding.

    Education has always used performance as a proxy for competence. Higher education set essays because it assumed that producing a good one required learning something. There was trust in the inference from output to understanding, and AI breaks it. The performance proves nothing.

    For many students, the performance was already disconnected from competence. Dave Cormier, from the University of Prince Edward Island, described the experience of essay writing in the search era as:

    have an argument, do a search for a quote that supports that position, pop the paper into Zotero to get the citation right, pop it in the paper. No reading for context. No real idea what the paper was even about.

    There was always pretending. AI just automated it.

    Basbøll’s question still haunts me. What is it that we want students to be able to do on their own? Not “should we allow ChatGPT” – that battle is lost. What capacities, developed through practice and evidenced in assessment, do we actually care about?

    If the answer is that appearing literate is enough, then we might as well hand the whole thing to the machines. If the answer is that we want students to actually develop capacities, then universities will need to watch students doing things – synchronous engagement, supervised practice, assessment that can’t be outsourced. A shift that feels too resource-intensive for the funding model.

    What’s missing from both options is that neither is really about learning. One is about performing competence, the other is about proving competence under surveillance, but both still treat the output as the point. The system can’t ask what students actually learned, because it was never designed to find out. It was designed to sort.

    Everything I’ve ever done

    How hard should education be? The “meritocracy of difficulty” ties academic value to how hard a course is to survive – dense content, heavy workloads, high-stakes assessment used to filter and sort rather than support students. Go too far in the other direction, and it’s a pointless prizes-for-all game in which nobody learns a thing.

    Maybe the sorting and the signalling is the problem. The degree classification system was designed for an elite era where classification signalled that the graduate was better than other people. First class – exceptional. Third – joker. The whole apparatus assumes that the point of education is to prove that your Dad’s better than my Dad. See also the TEF.

    Everyone pretends about the workload. The credit system assumes thirty-five to forty hours per week for a full-time student. Students aren’t studying for anything like that. The gap is vast, everyone knows it, and nobody says it out loud because saying it would expose the fiction.

    AI intensifies it all. If students can automate the drudgery, they will – not because they’re lazy, but because they’re rational actors in a system that rewards outputs over process. If the system says “produce this essay” and the essay can be produced in ten minutes, why would anyone spend ten hours?

    Mark Twain might have said that he would never let his schooling interfere with his education. Today’s undergraduates would more often lament that they can’t let their lectures and seminars interfere with the part-time job that pays the rent.

    Every place I’ve ever been

    There’s a YouTube video about Czech railways that’s been stuck in my head for weeks now. They built a 200 km/h line between Prague and Budweis and held celebrations – the first domestic intercity service to break the 160 km/h barrier.

    But only one train per day actually runs at that speed. It arrives late every time. Passengers spend the whole journey anxious about missing their ten-minute connection at the other end.

    The Swiss do it differently. The Gotthard Base Tunnel was built for 230 km/h. Trains run at 200. The spare capacity isn’t wasted – it’s held in reserve. If a train enters the tunnel with a five-minute delay, it accelerates and emerges with only two. The tunnel eats delays. The result is the kind of punctuality where you almost always make your connection.

    The Czech approach is speed fetishism – make the easily marketable number bigger, and assume that’s improvement. The Swiss approach is reliability – build in slack, prioritise the journey over the metric, make sure people get where they’re going.

    It sometimes feels to me like UK universities have gone the Czech route. We’re the envy of the world on throughput – faster degrees, more students, tighter timetables, twelve-week modules with no room to fall behind.

    But when anything goes wrong – and things always go wrong – students miss their connections. A bad week becomes a failed module. A failed module becomes a resit year. A mental health crisis becomes a dropout. Then we blame them for lacking resilience, as if the problem were their character rather than a system designed with no slack.

    The formation model is the Swiss model. Slow down. Build in reserves. Let students recover from setbacks. Prioritise the journey over the metric. Accept that some things cannot be rushed.

    At school they taught me how to be

    Universities tell themselves similar lies about academics. It’s been obvious for a long time that the UK can’t sustain a system where researchers are also the teachers, the pastoral supporters, the markers and the administrators.

    The all-rounder academic – brilliant at research, compelling in lectures, attentive in tutorials, wise in pastoral care, efficient at marking, engaged in knowledge exchange – was always a fantasy, tolerable only when student numbers were small enough to hide the gaps.

    Massification stretched it. Every component became more complicated, with more onerous demands, while the mental model of what good looks like didn’t change. AI breaks it.

    If students automate essay production, academics can automate feedback. We’re already seeing AI marking tools that claim to do in seconds what takes hours. If both sides are pretending – students pretending to write, academics pretending to read – what’s left?

    The answer is – only the encounter. The tutorial where someone’s question makes you think again. The supervision where a half-formed idea gets taken seriously. The seminar where genuine disagreement produces genuine movement. The moments when people are present to each other, accountable to each other, and changed by each other.

    They can’t be automated. They also can’t be scaled in the way the current model demands. You can’t have genuine encounters at a ratio of one to two hundred. Nor can you develop judgement in a twelve-week module delivered to students whose names you don’t know.

    The alternative is differentiation – people who teach, people who research, people who coach, working in teams on longer-form problems rather than alone in offices marking scripts. But that requires admitting the all-rounder was always a lie, and restructuring everything around that admission.

    So pure in thought and word and deed

    If information is now abundant and feedback can be instant and personalised, then the scarcity model is dead. Good riddance. But abundance creates its own problems.

    Without judgement, abundance is useless. Knowing that something is the case is increasingly cheap. Any idiot with ChatGPT can generate an account of the causes of the First World War or the principles of contract law. But knowing what to do about it, whether to trust it, how it connects to everything else, which bits matter and which are noise – these remain expensive, slow, human.

    Judgement is not a skill you can look up. It’s a disposition you develop through practice – through getting things wrong and understanding why, through watching people who are better at it than you, through being held accountable by others who will tell you when you’re fooling yourself. AI can give us information. It can’t give us judgement.

    Abundance makes it harder to know what we don’t know. When information was scarce, ignorance was obvious. Now, ignorance is invisible. We can generate confident-seeming text on any topic without understanding anything about it. The gap between performance and competence widens.

    UCL’s Rose Luckin calls what’s needed “meta-intelligence” – not knowing things, but knowing how we know, knowing what we don’t know, and knowing how to find out. AI makes meta-intelligence more important, not less. If we can’t evaluate what the machine is giving us, we’re not using a tool. We’re being used by one.

    That’s the equity issue that most AI boosterism ignores. If you went to a school that taught you to think, AI is a powerful amplifier. If you went to a school that taught you to comply, AI is a way of complying faster without ever developing the capacities that would let you do otherwise.

    They didn’t quite succeed

    Cultivating judgement means designing curricula around problems that don’t have predetermined answers – not case studies where students are expected to reach the “right” conclusion, but genuine dilemmas where reasonable people disagree. It means assessment that rewards the quality of reasoning, not just the correctness of conclusions – teachers who model uncertainty, who think out loud, who change their minds in public.

    Creating communities of inquiry means spaces where people think together, are accountable to each other, and learn to be wrong in public. They can’t be scaled, and can’t be automated. They require presence, continuity, and trust built over time. AI can prepare us for these spaces. It can’t be one of them.

    Last week I was playing with a custom GPT with a group of student reps. We’d loaded it with Codes of Practice and housing law guidance, and for the first time they understood their rights as tenants – not deeply, not expertly, but enough to know what questions to ask and where to push back. They’d never have encountered this stuff otherwise.

    The custom GPT wasn’t the point – the curiosity it sparked was. They left wanting to know more, not less. That’s what democratised information synthesis can do when it’s not about producing outputs faster, but about opening doors others didn’t know existed.

    Father, forgive me

    There’s always been an irony in the complaint that graduates lack “soft skills”. For decades, employers demanded production – write the report, analyse the data, build the model. Universities obliged, orienting curricula around outputs and assessing students on their capacity to produce. Now that machines produce faster and cheaper, employers discover they wanted something else all along.

    They call it “soft skills” or “emotional intelligence” or “communication”. What they mean is the capacity to be present with other humans. To listen, to learn, to adapt – to work with people who are different from you, and to contribute to collective endeavours rather than produce outputs in isolation.

    It’s always irked me that they’re described as soft. They are the hardest skills to develop and the hardest to fake. They are also exactly what universities could have been cultivating all along – if anyone had been willing to name them and pay for them.

    Universities that grasp this can offer students, employers and society something they genuinely need – people who can think, who can learn, who can work with others, who can handle complexity and uncertainty. Employers will need to train them in their specific context, but they’ll be worth training. That’s a different value proposition than “job-ready graduates” – and a more honest one.

    I remember visiting the Saltire Centre at Glasgow Caledonian and being amazed that a university was brave enough to notice that students like studying together. Not just being taught together – studying together. The spaces that fill up fastest are the ones where people can work alongside others, help each other, and belong to something.

    It’s not a distraction from learning. It is learning. The same is true of SUs, societies, volunteering, representation – the “extracurricular” activities that universities tolerate but rarely celebrate. These are where students practise collective action, navigate difference, take responsibility for something beyond themselves. Formation happens in community, not just in classrooms.

    I tried not to do it

    Being brave enough to confront all this will be hard. The funding model rewards efficiency, the regulatory model rewards measurability, and the labour market wants qualifications. The incentive is to produce – people who can perform, not people who have developed.

    Students – many, not all – have internalised this logic. They want the degree, the credential, the signal. They are strategic, instrumental, and focused on outcomes. It’s not a character flaw – it’s a rational response to the system they’re in. If the degree is the point, then anything that gets you the degree efficiently is sensible. AI is just the latest efficiency tool.

    But while shame is a powerful disincentive to fessing up, the thing about pretending is that it’s exhausting. And it’s lonely.

    For years at Christmas, I pretended UWE was Bristol because I was ashamed – ashamed of wanting to study the media, ashamed of coming from a family where going to any university was exceptional, ashamed of the gap between where I was and where people felt I should be. The pretending was a way of managing the shame.

    I suspect a lot of students feel something similar. The performance of knowledge, the strategic deployment of qualifications, the constant positioning and comparison – these are ways of managing the fear that you’re not good enough, that you’ll be found out, that the gap between who you are and who you’re supposed to be is too wide to bridge.

    AI intensifies the fear for some – the terror that they’ll be caught, that the machine will be detected, that the pretending will be exposed. But it might offer a different possibility. If the pretending no longer works – if the performance can be automated and therefore has no value – then maybe the only thing left is to become someone who doesn’t need to pretend.

    And I still don’t understand

    That is the democratic promise of abundant information. Not that everyone will know everything – that’s neither possible nor desirable. But that knowledge can stop being a marker of status, a way of putting others down, or a resource to be hoarded. “I know something you don’t” can give way to “we can figure this out together.”

    The shift from knowledge as possession to knowledge as practice is a shift from “I have information you lack” to “I can work with you on problems that matter.” From education as credentialing to education as formation. From “I’m better than you” to “I can contribute.” From pretending to becoming.

    We’d need assessment that rewards contribution over reproduction. If the essay can be generated by AI, then the essay is testing the wrong thing. Assessment that requires students to think in real time, in dialogue, in response to genuine challenge – this is harder to automate and more valuable to develop. The individual student writing the individual essay marked by the individual academic is game over if AI can play both roles.

    We’ll need pedagogy that prioritises encounter over transmission. Small group teaching. Sustained relationships between students and teachers. Curricula designed around problems rather than content coverage. Something between a module and a course, run by teams, with long-form purpose over a year rather than twelve-week fragments. Time and space for the slow work of formation.

    We’ll need recognition that learning is social. Common spaces where students can study together. Student organisations supported rather than tolerated. Credit for service learning, for contribution to community, for the “extracurricular” activities where formation actually happens.

    We’ll need slack in the system. The Swiss model, not the Czech one. Space to fall behind and catch up. Multiple attempts at assessment. Pass/fail options that encourage risk-taking. Time built in for things to go wrong, because things always go wrong. A system that absorbs delays rather than compounding them.

    None of this will happen quickly. The funding model, the regulatory model, the labour market, the expectations students bring with them – they are not going to transform overnight. We’ll all have to play along for a while yet, doing the best we can within systems that reward the wrong things.

    But playing along is not the same as believing. And knowing what we’re playing along with – knowing what we’re compromising and why – is the beginning of something different.

    The end of pretending

    The reason I came to work here at Wonkhe – and the whole point of my work with students’ unions over the years – has been about giving power away. Not hoarding insight, but spreading it. Not being the person who knows things – but helping other people act on what they now know.

    The best email I got last week wasn’t someone telling me that I was impressive, or clever. I’ve learned how to get those emails. It was someone saying “really great notes and really great meeting – has got our brains whirring a lot.” Using what I offered to do something I couldn’t have done myself.

    Maybe I’ve become one of those insufferable men who grab the mic to assert that what education is for is what it did for them. But the purpose of teaching is surely rousing curiosity and creating the conditions for people to become.

    When I look back at the version of myself who told his family he was going to Bristol, I feel compassion more than embarrassment. He was doing the best he could in a system that made pretending rational.

    Thirty years on, I’ve watched skills become obsolete, formats get put in the skip, pretences exposed. Each time we held on for a while. Each time we eventually let go. Each time something was revealed about what had actually mattered all along.

    AI doesn’t end the system of pretending. But it does expose its contradictions in ways that might, eventually, make something better possible. If the performance of knowledge becomes worthless, then maybe actual formation – and the human encounters that produce it – can finally be valued.

    The hopeful answer is that universities can be places where people become more fully human. Not because they acquire more information, or even because they become subject specialists – though many will – but because they develop the capacities for thought, action, connection and care that make a human life worth living.

    They are capacities that can’t be downloaded, nor automated, nor faked. They can be developed only slowly, in relationship, through practice, with friction.

    You came to university for skills and they turned out to be useless? That’s a trick. You came for skills and left ready to change the world? Now that’s magic.

    Continue the conversation at The Secret Life of Students: Learning to be human in the age of AI – 17 March, London. Find out more and book.

    Source link

  • AI is unlocking insights from PTES to drive enhancement of the PGT experience faster than ever before

    AI is unlocking insights from PTES to drive enhancement of the PGT experience faster than ever before

    If, like me, you grew up watching Looney Tunes cartoons, you may remember Yosemite Sam’s popular phrase, “There’s gold in them thar hills.”

    In surveys, as in gold mining, the greatest riches are often hidden and difficult to extract. This principle is perhaps especially true when institutions are seeking to enhance the postgraduate taught (PGT) student experience.

    PGT students are far more than an extension of the undergraduate community; they represent a crucial, diverse and financially significant segment of the student body. Yet, despite their growing numbers and increasing strategic importance, PGT students, as Kelly Edmunds and Kate Strudwick have recently pointed out on Wonkhe, remain largely invisible in both published research and core institutional strategy.

    Advance HE’s Postgraduate Taught Experience Survey (PTES) is therefore one of the few critical sources of insight we have about the PGT experience. But while the quantitative results offer a (usually fairly consistent) high-level view, the real intelligence required to drive meaningful enhancement inside higher education institutions is buried deep within the thousands of open-text comments collected. Faced with the sheer volume of data, the choice is between eyeball scanning, with its inevitable introduction of human bias, or laborious and time-consuming manual coding. The challenge for the institutions participating in PTES this year isn’t the lack of data: it’s efficiently and reliably turning that dense, often contradictory, qualitative data into actionable, ethical, and equitable insights.

    AI to the rescue

    The application of machine learning technology to the analysis of qualitative student survey data presents us with a generational opportunity to amplify the student voice. The critical question is not whether AI should be used, but how to ensure its use meets robust and ethical standards. For that you need the right process – and the right partner – to prioritise analytical substance, comprehensiveness, and sector-specific nuance.

    UK HE training is non-negotiable. AI models must be deeply trained on a vast corpus of UK HE student comments. Without this sector-specific training, analysis will fail to accurately interpret the nuances of student language, sector jargon, and UK-specific feedback patterns.

    Analysis must rely on a categorisation structure that has been developed and refined against multiple years of PTES data. This continuity ensures that the thematic framework reflects the nuances of the PGT experience.

    To drive targeted enhancement, the model must break down feedback into highly granular sub-themes – moving far beyond simplistic buckets – ensuring staff can pinpoint the exact issue, whether it falls under learning resources, assessment feedback, or thesis supervision.

    The analysis must be more than a static report. It must be delivered through integrated dashboard solutions that allow institutions to filter, drill down, and cross-reference the qualitative findings with demographic and discipline data. Only this level of flexibility enables staff to take equitable and targeted enhancement actions across their diverse PGT cohorts.

    When these principles are prioritised, the result is an analytical framework specifically designed to meet the rigour and complexity required by the sector.
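
    To make the shape of such a framework concrete, here is a minimal, hypothetical sketch in Python. The theme names, keywords and simple matching are illustrative stand-ins for a genuinely sector-trained model, not a description of any particular vendor’s system; the point is the output structure, in which every substantive comment carries one or more granular sub-themes plus sentiment that a dashboard can then filter.

    ```python
    # A toy stand-in for a sector-trained categorisation model, shown only to
    # illustrate the output shape described above: each comment mapped to one
    # or more granular sub-themes, plus sentiment flags.
    # Theme names and keyword lists are hypothetical.
    from dataclasses import dataclass, field

    SUB_THEMES = {
        "assessment_feedback": ["feedback", "marking", "rubric"],
        "thesis_supervision": ["supervisor", "dissertation", "thesis"],
        "learning_resources": ["library", "reading list", "textbook"],
    }

    @dataclass
    class CodedComment:
        text: str
        sub_themes: list = field(default_factory=list)   # granular themes
        sentiment: dict = field(default_factory=dict)    # polarity flags

    def code_comment(text: str) -> CodedComment:
        coded = CodedComment(text=text)
        lower = text.lower()
        for theme, keywords in SUB_THEMES.items():
            if any(k in lower for k in keywords):
                coded.sub_themes.append(theme)
        # Crude polarity cues; a real model would assess both polarities in context.
        coded.sentiment["positive"] = any(w in lower for w in ("helpful", "excellent", "supportive"))
        coded.sentiment["negative"] = any(w in lower for w in ("late", "unclear", "poor"))
        return coded

    print(code_comment("My supervisor was supportive but feedback on drafts was late."))
    ```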

    The partnership between Advance HE, evasys, and Student Voice AI, which analysed this year’s PTES data, demonstrates what is possible when these rigorous standards are prioritised. We have offered participating institutions a comprehensive service that analyses open comments alongside the detailed benchmarking reports that Advance HE already provides. This collaboration has successfully built an analytical framework that exemplifies how sector-trained AI can deliver high-confidence, actionable intelligence.

    Jonathan Neves, Head of Research and Surveys at Advance HE, calls our solution “customised, transparent and genuinely focused on improving the student experience,” and adds, “We’re particularly impressed by how they present the data visually and look forward to seeing results from using these specialised tools in tandem.”

    Substance uber alles

    The commitment to analytical substance is paramount; without it, the risk to institutional resources and equity is severe. If institutions are to derive value, the analysis must be comprehensive. When the analysis lacks this depth, institutional resources are wasted acting on partial or misleading evidence.

    Rigorous analysis requires minimising what we call data leakage: the systematic failure to capture or categorise substantive feedback. Consider the alternative: when large percentages of feedback are ignored or left uncategorised, institutions are effectively muting a significant portion of the student voice. Or when a third of the remaining data is lumped into meaningless buckets like “other,” staff are left without actionable insight, forced to manually review thousands of comments to find the true issues.
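
    One rough way to keep this failure visible is to track a leakage rate over the coded dataset: the share of substantive comments that end up with no usable theme, whether missed entirely or dumped into a catch-all bucket. The records and fields below are hypothetical, for illustration only.

    ```python
    # Hypothetical coded-survey records: each substantive comment should carry at
    # least one usable sub-theme; "other" and empty theme lists both count as leakage.
    coded = [
        {"themes": ["assessment_feedback"], "substantive": True},
        {"themes": ["other"], "substantive": True},   # lumped into a meaningless bucket
        {"themes": [], "substantive": True},          # missed by the analysis entirely
        {"themes": [], "substantive": False},         # e.g. "n/a", rightly skipped
    ]

    substantive = [c for c in coded if c["substantive"]]
    leaked = sum(1 for c in substantive if not c["themes"] or c["themes"] == ["other"])

    leakage_rate = leaked / len(substantive)
    print(f"Data leakage: {leakage_rate:.0%} of substantive comments carry no usable theme")
    ```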

    This is the point where the qualitative data, intended to unlock enhancement, becomes unusable for quality assurance. The result is not just a flawed report, but the failure to deliver equitable enhancement for the cohorts whose voices were lost in the analytical noise.

    Reliable, comprehensive processing is just the first step. The ultimate goal of AI analysis should be to deliver intelligence in a format that seamlessly integrates into strategic workflows. While impressive interfaces are visually appealing, genuine substance comes from the capacity to produce accurate, sector-relevant outputs. Institutions must be wary of solutions that offer a polished facade but deliver compromised analysis. Generic generative AI platforms, for example, offer the illusion of thematic analysis but are not robust.

    But robust validation of any output is still required. This is the danger of smoke and mirrors – attractive dashboards that simply mask a high degree of data leakage, where large volumes of valuable feedback are ignored, miscategorised or rendered unusable by failing to assign sentiment.

    Dig deep, act fast

    When institutions choose rigour, the outcomes are fundamentally different, built on a foundation of confidence. Analysis ensures that virtually every substantive PGT comment is allocated to one or more UK-derived categories, providing a clear thematic structure for enhancement planning.

    Every comment with substance is assigned both positive and negative sentiment, providing staff with the full, nuanced picture needed to build strategies that leverage strengths while addressing weaknesses.

    This shift from raw data to actionable intelligence allows institutions to move quickly from insight to action. As Parama Chaudhury, Pro-Vice Provost (Education – Student Academic Experience) at UCL noted, the speed and quality of this approach “really helped us to get the qualitative results alongside the quantitative ones and encourage departmental colleagues to use the two in conjunction to start their work on quality enhancement.”

    The capacity to produce accurate, sector-relevant outputs, driven by rigorous processing, is what truly unlocks strategic value. Converting complex data tables into readable narrative summaries for each theme allows academic and professional services leaders alike to immediately grasp the findings and move to action. The ability to access categorised data via flexible dashboards and in exportable formats ensures the analysis is useful for every level of institutional planning, from the department to the executive team. And providing sector benchmark reports allows institutions to understand their performance relative to peers, turning internal data into external intelligence.

    The postgraduate taught experience is a critical pillar of UK higher education. The PTES data confirms the challenge, but the true opportunity lies in how institutions choose to interpret the wealth of student feedback they receive. The sheer volume of PGT feedback combined with the ethical imperative to deliver equitable enhancement for all students demands analytical rigour that is complete, nuanced, and sector-specific.

    This means shifting the focus from simply collecting data to intelligently translating the student voice into strategic priorities. When institutions insist on this level of analytical integrity, they move past the risk of smoke and mirrors and gain the confidence to act fast and decisively.

    It turns out Yosemite Sam was right all along: there’s gold in them thar hills. But finding it requires more than just a map; it requires the right analytical tools and rigour to finally extract that valuable resource and forge it into meaningful institutional change.

    This article is published in association with evasys. evasys and Student Voice AI are offering no-cost advanced analysis of NSS open comments delivering comprehensive categorisation and sentiment analysis, secure dashboard to view results and a sector benchmark report. Click here to find out more and request your free analysis.

    Source link

  • The promise and challenge of AI in building a sustainable future

    The promise and challenge of AI in building a sustainable future

    It is tempting to regard AI as a panacea for addressing our most urgent global challenges, from climate change to resource scarcity. Yet the truth is more complex: unless we pair innovation with responsibility, the very tools designed to accelerate sustainability may exacerbate its contradictions.

    A transformative potential

    Let us first acknowledge how AI is already reshaping sustainable development. By mapping patterns in vast datasets, AI enables us to anticipate environmental risks, optimise resource flows and strengthen supply chains. Evidence suggests that by 2030, AI systems will touch the lives of more than 8.5 billion people and influence the health of both human and natural ecosystems in ways we have never seen before. Research published in Nature indicates that AI could support progress towards 79% of the Sustainable Development Goals (SDGs), helping advance 134 specific targets. Yet the same research also cautions that AI may impede 59 of those targets if deployed without care or control.

    In practice, this means smarter energy grids that balance load and demand, precision agriculture that reduces fertiliser waste and environmental monitoring systems that detect deforestation or pollution in real time. For a planet under pressure, these scenarios offer hope to do less harm and build more resilience.

    The hidden costs

    Even so, we must confront the shadows cast by AI’s advancements. An investigation published earlier this year warns that AI systems could account for nearly half of global data-centre power consumption before the decade’s end. Consider the sheer scale: vast server arrays, intensive cooling systems, rare-earth mining and water-consuming infrastructure all underpin generative AI’s ubiquity. Worse still, indirect carbon emissions tied to major AI-capable firms reportedly rose by 150% between 2020 and 2023. In short, innovation meant to serve sustainability imposes a growing ecological burden.

    Navigating trade-offs

    This tension presents an essential question: how can we reconcile AI’s promise with its cost? Scholars warn that we must move beyond the assumption that ‘AI for good’ is always good enough. The moment demands a new discipline of ‘sustainable AI’: a framework that treats resource use, algorithmic bias, lifecycle impact and societal equity as first-order concerns.

    Practitioners must ask not only what AI can do, but how it is built, powered, governed and retired. Efficiency gains that drive consumption higher will not deliver sustainability; they may merely escalate resource demands in disguise.

    A moral and strategic imperative

    For educators, policymakers and business leaders, this is more than a technical issue; it is a moral and strategic one. To realise AI’s true potential in advancing sustainable development, we must commit to three priorities:

    Energy and resource transparency: Organisations must measure and report the footprint of their AI models, including data-centre use, water cooling, e-waste and supply-chain impacts. Transparency is foundational to accountability; a rough estimation sketch follows below.

    Ethical alignment and fairness: AI must be trained and deployed with due regard to bias, social impact and inclusivity. Its benefits must not reinforce inequality or externalise environmental harms onto vulnerable communities.

    Integrative education and collaboration: We need multidisciplinary expertise: engineers fluent in ecology, ethicists fluent in algorithms and managers fluent in sustainability. Institutions must upskill young learners and working professionals to orient AI within the broader context of planetary boundaries and human flourishing.
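
    As a minimal sketch of the first priority above, a widely used back-of-envelope approach to estimating an AI workload’s footprint multiplies hardware energy draw by runtime, a data-centre overhead factor (PUE) and the hosting grid’s carbon intensity. Every figure below is a placeholder, not a measurement.

    ```python
    # Rough footprint estimate for an AI workload: hardware energy use scaled by
    # data-centre overhead (PUE) and converted to CO2e via grid carbon intensity.
    # All numbers are placeholders to be replaced with measured values.

    gpu_count = 8
    avg_power_kw = 0.4      # average draw per accelerator, kW (placeholder)
    runtime_hours = 720     # e.g. a month of continuous inference (placeholder)
    pue = 1.4               # data-centre power usage effectiveness (placeholder)
    grid_intensity = 0.2    # kg CO2e per kWh for the hosting region (placeholder)

    energy_kwh = gpu_count * avg_power_kw * runtime_hours * pue
    emissions_kg = energy_kwh * grid_intensity

    print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
    print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")
    ```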

    MLA College’s focus and contribution

    At MLA College, we recognise our role in equipping professionals at this exact intersection. Our programmes emphasise the interrelationship between technology, sustainability and leadership. Through distance-learning and part-time formats, our graduates engage with the complexities of AI, maritime operations, global sustainable development and marine engineering, bringing insight to sectors vital to the planet’s future.

    When responsibly guided, AI becomes an amplifier of purpose rather than a contraption of risk. Our challenge is to ensure that every algorithm, model and deployment contributes to regenerative systems, not extractive ones.

    The promise of AI is compelling: more accurate climate modelling, smarter cities, adaptive infrastructure and just-in-time supply chains. But the challenge is equally formidable: rising energy demands, resource-intensive infrastructures and ungoverned expansion.

    Our collective role, as educators and practitioners, is to shape the ethical architecture of this era. We must ask whether our technologies will serve humanity and the environment or simply accelerate old dynamics under new wrappers.

    The verdict will not be written on lines of code or boardroom decisions alone. It will be inscribed in the fields that fail to regenerate, in the communities excluded from progress, in the data centres humming with waste and in the next generation seeking meaning in technology’s promise.

    About the author: Professor Mohammad Dastbaz is the principal and CEO of MLA College, an international leader in distance and sustainability-focused higher education. With over three decades in academia, he has held senior positions including deputy vice-chancellor at the University of Suffolk and pro vice-chancellor at Leeds Beckett University.

    A Fellow of the British Computer Society, the Higher Education Academy, and the Royal Society of Arts, Professor Dastbaz is a prominent researcher and author in the fields of sustainable development, smart cities, and digital innovation in education.

    His latest publication, Decarbonization or Demise – Sustainable Solutions for Resilient Communities (Springer, 2025), brings together cutting-edge global research on sustainability, climate resilience, and the urgent need for decarbonisation. The book builds on his ongoing commitment to advancing the UN Sustainable Development Goals through education and research.

    At MLA College, Professor Dastbaz continues to lead transformative learning initiatives that combine academic excellence with real-world impact, empowering students to shape a sustainable future.

    Source link

  • Adopting AI across an institution is a pressing leadership challenge

    Adopting AI across an institution is a pressing leadership challenge

    Artificial intelligence is already reshaping higher education fast. For universities aiming to be AI-first institutions, leadership, governance, staff development, and institutional culture are critical.

    How institutions respond now will determine whether AI enhances learning or simply reinforces existing inequalities, inefficiencies and, frankly, bad practices. This is not only an institutional or sector question but a matter of national policy: government has committed to supporting AI skills at scale, and the UK has pledged an early ambition that a “fifth of the workforce will be supported with the AI skills they need to thrive in their jobs.” Strategic deployment of AI is therefore a pressing HE leadership question.

    Whole institution AI leadership and governance

    Universities will benefit from articulating a clear AI-first vision that aligns with their educational, research and civic missions. Leadership plays a central role in ensuring AI adoption supports educational quality, innovation and equity rather than focusing purely on operational efficiency or competitiveness. Cultivating a culture where AI is viewed as a collaborative partner helps staff become innovators shaping AI integration rather than passive users (as the jargon frames it, “makers” not “takers”). Strategic plans and performance indicators should reflect commitments to ethical, responsible, and impactful AI deployment, signalling to staff and students that innovation and integrity go hand in hand.

    Ethical and transparent leadership in AI-first institutions is vital. Decision-making, whether informed by student analytics like Kortext StREAM, enrolment forecasts, budgeting, or workforce planning, should model responsible AI use. The right governance structures need to be created. Far be it from us to suggest more committees, but there needs to be oversight, through ethics and academic quality boards, of AI deployment across the education function.

    Clear frameworks for managing data privacy, intellectual property, and algorithmic bias are essential, particularly when working with third-party providers. Maintaining dialogue with accreditation and quality assurance bodies including PSRBs and OfS ensures innovation aligns with regulatory expectations, avoiding clashes between ambition and oversight. This needs to happen at individual institution level, but also at sector and regulator level.

    Capability and infrastructure development

    Staff capability underpins any AI-first strategy. It needs to be understood as a whole-institution matter, not one confined to education-facing staff. Defining a framework of AI competencies will help to clarify the skills needed to use AI responsibly and effectively, and there are already institutional frameworks, including from Jisc, QAA, and Skills England, that do this. Embedding these competencies into recruitment, induction, appraisal, promotion and workload frameworks can ensure that innovation is rewarded, not sidelined.

    Demonstrating AI literacy and ethical awareness could become a requirement for course leadership, or senior appointments. Adjusting workload models to account for experimentation, retraining, and curriculum redesign gives staff the space to explore AI responsibly. Continuous professional development – including AI learning pathways, ethics training, and peer learning communities – reinforces a culture of innovation while protecting academic quality.

    Investment in AI-enabled infrastructure underpins an AI-first institution. We recognise the severe financial challenges faced by many institutions, and this means that investments must be well targeted and implemented effectively. Secure data environments, analytics platforms, and licensed AI tools accessible to staff and students are essential to provide the foundation for innovation. Ethical procurement practices when partnering with edtech providers promote transparency, accessibility, and academic independence. Universities should also consider the benefits and risks of developing their own large language models alongside relying on external platforms, weighing factors such as cost, privacy, and institutional control. See this partnership between Kortext, Said Business School, Microsoft and Instructure for an example of an innovative new education partnership.

    Culture and change management

    Implementing AI responsibly requires trust. Leaders need to communicate openly about AI’s opportunities and limitations, critically addressing staff anxieties about displacement or loss of autonomy. Leadership development programmes for PVCs, deans, heads of school, and professional service directors can help manage AI-driven transformation effectively.

    One of the most important things to get right is to ensure that cross-functional collaboration between IT, academic development, HR, and academic quality units supports coherent progress toward an AI-first culture. Adopting iterative change management – using pilot programmes, consultation processes, and rapid feedback loops – allows institutions to refine AI strategies continuously, balancing innovation with oversight.

    AI interventions benefit from rigorous quantitative and qualitative evaluation. Indicators such as efficiency, student outcomes, creativity, engagement, and inclusion can offer a balanced picture of impact. Regular review cycles ensure responsiveness to emerging AI capabilities and evolving educational priorities. Publishing internal (and external) reports on AI impacts on education will be essential to promote transparency, sharing lessons learned and guiding future development. It almost goes without saying that institutions should share practice (what has worked and what hasn’t) not only within their organisations, but also across the sector and with accrediting bodies and regulators.

    An AI-first university places human judgment, ethics, and pedagogy at the centre of all technological innovation. AI should augment rather than replace the intellectual and creative capacities of educators and students. Every intervention must be assessed against these principles, ensuring technology serves learning rather than becoming the master of human agency or ethical standards.

    Being an AI-first institution is certainly not about chasing the latest tools or superficially focusing on staff and student “AI literacy.” It is about embedding AI thoughtfully in every part of the university. Leaders need to articulate a vision, model ethical behaviour, and build the capacity of staff and students to become the next generation of AI leaders. Staff and students need time, support and trust to experiment responsibly. Infrastructure and external partnerships must be strategic and principled. There must also be continuous evaluation to ensure that innovation aligns with strategy and values.

    When implemented carefully, AI can become a collaborative partner in enhancing learning, facilitating creativity and reinforcing the academic mission rather than undermining it.

    This article is published in association with Kortext. Join Janice and Rachel for Kortext LIVE on 11 February in London, on the theme of “Leading the next chapter of digital innovation” to continue the conversation on AI and data. Keynote speakers include Mark Bramwell, CDIO at Said Business School. Find out more and secure your spot here.

    Source link

  • In learning, AI must become a co-creator, not a shortcut

    In learning, AI must become a co-creator, not a shortcut

    AI in all its multitudinous forms is here, it is here to stay, and its impacts are accelerating.

    At a basic level, we see shifts in personal office practices with the tentative, steady adoption of large language models. We see AI being used alongside MS Teams or its equivalents to quickly produce summary transcripts of meetings or to generate starting places for documents which are then reworked.

    As university educators and researchers, we also see debates regarding the ethics of AI adoption and a splintering of our collective and individual ability to discern fact from fiction. We are at the start of a long and unpredictable trajectory of impacts.

    But as we shape the skills, knowledge and abilities that will see our students thrive in an increasingly disrupted future world of work, where that trajectory takes us remains a subject of debate. What is consistently clear across the various predictions is that the adoption of AI and increasing automation will deliver seismic changes to the world (of work).

    Machine meets human

    Some 86 per cent of employers surveyed for the World Economic Forum’s 2025 Future of Jobs Survey saw AI and information processing technologies as the dominant technology driver for workplace change through to 2030, affecting workplaces across all sectors, not just those welcoming students from STEM disciplines. The same survey indicates that the greatest rise in workplace demand through to 2030 is for the ability to work with AI and big data.

    Noting the dominance of AI in the WEF survey findings, we are reminded of the 1999 interview between Jeremy Paxman and David Bowie, recorded just as the internet was entering the mainstream. Paxman questions whether the internet is anything more than a “different delivery system,” while Bowie asserts that it is an alien life form:

    I don’t think we’ve even seen the tip of the iceberg – the things it will do, both good and bad are unimaginable right now. I actually think we’re on the cusp of something exhilarating and terrifying…

    Looking back at what has happened to society in the quarter of a century since that interview, Bowie is unnervingly accurate in his foresight.

    It seems that right now we are navigating the similarly uncharted territories of an epoch-defining transition, as the world starts to play in earnest with the next-gen version of Bowie’s “alien life form.” Higher education is not immune – it is grappling with the challenges across its core activities.

    However, what is of particular interest beyond the specific AI skills are the in-demand skills that occupy the places immediately following the top three technology-related competencies in the WEF ranking (working with AI and big data, networks and cybersecurity, and technological literacy). Fourth is creative thinking, followed by resilience, flexibility and agility; curiosity and lifelong learning; and leadership and social influence. These are high-value cognitive competencies, inherently human in nature – an equalising “soft” counterbalance to the “hard” technological literacies of the top three.

    Reflecting on the duality between technological literacy and social, emotional and cognitive skills in this overall picture, it is clear that AI is not a replacement for the work of thought, deduction, critical reasoning and curiosity. Instead, it is a powerful augmentation to the already formidable arsenal of technological capability at our fingertips.

    From efficiency to co-creation

    With education and the student experience in mind, we see two AI “swim lanes” forming out of the early stages of ubiquity ushered in by the popularisation of ChatGPT and other LLMs. These swim lanes should also acknowledge the broader mix of new and emergent technologies at play in tandem with AI – for instance AR/VR and data visualisation.

    The first swim lane speaks to the need to optimise the complex wiring behind the institutional operations of higher education which provide our students with a world class experience. With efficiency, effectiveness and scale in mind, adoption of AI to underpin the crucial in-person experience with wider algorithmic personalisation becomes a highly desirable direction of travel. For instance, we can easily envisage a world in which AI is used to aid student navigation of module choice, tailoring the availability of elective courses and complementary extra-curricular and developmental activities.
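    To make this concrete, the sketch below shows one naive way such personalisation could begin: ranking a small elective catalogue against a student profile by simple word overlap, in plain Python. Everything here – the module codes, descriptions and profile – is invented for illustration, and a real system would rest on richer data, proper recommender models and appropriate governance.

    ```python
    # Toy illustration of algorithmic personalisation of module choice.
    # Module codes, descriptions and the student profile are all hypothetical.

    def tokens(text: str) -> set[str]:
        """Lowercase a description and split it into a set of words."""
        return set(text.lower().replace(",", " ").split())

    # Hypothetical elective catalogue: module code -> short description.
    electives = {
        "DES201": "design thinking, prototyping and creative problem solving",
        "DAT305": "applied machine learning and data visualisation for analysts",
        "BUS210": "leadership, social influence and organisational change",
    }

    # Hypothetical student profile drawn from module history and stated interests.
    student_profile = "data visualisation, machine learning and curiosity about AI"

    def score(description: str, profile: str) -> float:
        """Jaccard overlap between a module description and the student profile."""
        a, b = tokens(description), tokens(profile)
        return len(a & b) / len(a | b) if a | b else 0.0

    # Rank electives for this student, highest overlap first.
    ranked = sorted(electives, key=lambda code: score(electives[code], student_profile), reverse=True)

    for code in ranked:
        print(f"{code}: {score(electives[code], student_profile):.2f}")
    ```

    The same pattern – a transparent score over explicit signals – is a reasonable first step before reaching for opaque models, because students and advisers can see why a module was suggested.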

    The second swim lane is one of invention and co-creation, arguably pushing AI and the wider ecosystem of technological innovation to be the best it can be – far beyond the deployment of convenience or efficiency. At its best, AI can become a partner in creativity: an inspirer and a critical collaborator offering new perspectives. We are seeing promising points of innovation and departure in the early work at Loughborough as the range of technological capabilities within our DigiLabs continues to be adopted at pace.

    However, to swim confidently in this lane we must dispel myths and fears with rigour and a critical navigation of AI as a co-creator. Scaffolding and skills development for staff and students are essential in order that we all might partner effectively with our new playmate.

    Thinking together

    Two areas of skills development offer a useful starting place towards consistency, innovation and collaboration in AI partnership. The first is the recognition and development of prompt engineering as a fundamental digital skill and a shared, structured practice. The second is the development of a consistent and structured means to understand, interrogate and critically evaluate what the AI has generated in response to our prompting.

    With frameworks for these two essentials of effective AI partnership in place, we can move beyond cut-and-paste AI-as-shortcut, and beyond the simple fact-checking of generated material. These two skills move us towards conversing and exchanging perspectives with AI, making content better together. Embedding them helps us realise the true value of AI: a human skillset for strategically co-creating with it, rather than shortcutting with it.
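    As a concrete illustration of these two skills working together, the sketch below pairs a structured prompt with an explicit critique pass, using the OpenAI Python client as one possible backend. The model name, the rubric and the prompts are assumptions made for illustration; the same generate-then-interrogate pattern applies with any large language model.

    ```python
    # A minimal sketch of "prompt, then interrogate" as a shared, structured practice.
    # Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
    # environment; the model name and rubric below are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # assumed model name; substitute whatever is available

    def ask(prompt: str) -> str:
        """Send a single prompt and return the model's text response."""
        response = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Skill 1: a structured prompt that states role, task, audience and constraints,
    # rather than a one-line request.
    draft_prompt = (
        "Role: you are helping a second-year engineering student.\n"
        "Task: draft a 150-word summary of why lifecycle assessment matters.\n"
        "Audience: classmates with no sustainability background.\n"
        "Constraints: plain English, no unexplained jargon, one concrete example."
    )
    draft = ask(draft_prompt)

    # Skill 2: a structured critique of the generated material against an explicit
    # rubric, so evaluation becomes a shared practice rather than ad hoc fact-checking.
    critique_prompt = (
        "Critically evaluate the text below against this rubric:\n"
        "1. Which claims are unsupported or likely inaccurate?\n"
        "2. What perspectives or counter-arguments are missing?\n"
        "3. What should the author verify in an independent source?\n\n"
        f"Text:\n{draft}"
    )
    critique = ask(critique_prompt)

    print(draft, "\n---\n", critique)
    ```

    The point is not the particular prompts but the loop: a deliberate, shared structure for asking, followed by a deliberate, shared structure for interrogating what comes back, which is where the co-creation described above begins.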

    As our use of AI evolves, we should continually remind ourselves that understanding is not gained in the endpoint, but in travelling to that place (no student learns that much in the moment of a final assessment). AI becomes a meaningful companion on that journey, not a replacement for the experience of travelling. To shortcut the pleasure and frustration of our own creative and critical journeys by virtue of AI laziness is to deny ourselves the experience of our own essence – the struggle and the unknowing of what it means to question, to be alive and to be human.

    Source link

  • Dialogic assessments are the missing piece in contemporary assessment debates

    Dialogic assessments are the missing piece in contemporary assessment debates

    When I ask apprentices to reflect on their learning in professional discussions, I often hear a similar story:

    It wasn’t just about what I knew – it was how I connected it all. That’s when it clicked.

    That’s the value of dialogic assessment. It surfaces hidden knowledge, creates space for reflection, and validates professional judgement in ways that traditional essays often cannot.

    Dialogic assessment shifts the emphasis from static products – the essay, the exam – to dynamic, real-time engagement. These assessments include structured discussions, viva-style conversations, or portfolio presentations. What unites them is their reliance on interaction, reflection, and responsiveness in the moment.

    Unlike “oral exams” of old, these conversations require learners to explain reasoning, apply knowledge, and reflect on lived experience. They capture the complex but authentic process of thinking – not just the polished outcome.

    In Australia, “interactive orals” have been adopted at scale to promote integrity and authentic learning, with positive feedback from staff and students. Several UK universities have piloted viva-style alternatives to traditional coursework with similar results. What apprenticeships have long taken for granted is now being recognised more widely: dialogue is a powerful form of assessment.

    Lessons from apprenticeships

    In apprenticeships and work-based learning, dialogic assessment is not an add-on – it’s essential. Apprentices regularly take part in professional discussions (PDs) and portfolio presentations as part of both formative and end-point assessment.

    What makes them so powerful? They are inclusive, as they allow different strengths to emerge. Written tasks may favour those fluent in academic conventions, while discussions reveal applied judgement and reflective thinking. They are authentic, in that they mirror real workplace activities such as interviews, stakeholder reviews, and project pitches. And they can be transformative – apprentices often describe PDs as moments when fragmented knowledge comes together through dialogue.

    One apprentice told me:

    It wasn’t until I talked it through that I realised I knew more than I thought – I just couldn’t get it down on paper.

    For international students, dialogic assessment can also level the playing field by valuing applied reasoning over written fluency, reducing the barriers posed by rigid academic writing norms.

    My doctoral research has shown that PDs not only assess knowledge but also co-create it. They push learners to prepare more deeply, reflect more critically, and engage more authentically. Tutors report richer opportunities for feedback in the process itself, while employers highlight their relevance to workplace practice.

    And AI fits into this picture too. When ChatGPT and similar tools emerged in late 2022, many feared the end of traditional written assessment. Universities scrambled for answers – detection software, bans, or a return to the three-hour exam. The risk has been a slide towards high-surveillance, low-trust assessment cultures.

    But dialogic assessment offers another path. Its strength is precisely that it asks students to demonstrate what AI cannot:

    • authentic reflection, as learners connect insights to their own lived experience.
    • real-time reasoning – learners respond to questions, defend ideas, and adapt on the spot.
    • professional identity, where the kind of reflective judgement expected in real workplaces is practised.

    Assessment futures

    Scaling dialogic assessment isn’t without hurdles. Large cohorts and workload pressures can make universities hesitant. Online viva formats also raise equity issues for students without stable internet or quiet environments.

    But these challenges can be mitigated: clear rubrics, tutor training, and reliable digital platforms make it possible to mainstream dialogic formats without compromising rigour or inclusivity. Apprenticeships show it can be done at scale – thousands of students sit PDs every year.

    Crucially, dialogic assessment also aligns neatly with regulatory frameworks. The Office for Students requires that assessments be valid, reliable, and representative of authentic learning. The QAA Quality Code emphasises inclusivity and support for learning. Dialogic formats tick all these boxes.

    The AI panic has created a rare opportunity. Universities can either double down on outdated methods – or embrace formats that are more authentic, equitable, and future-oriented.

    This doesn’t mean abandoning essays or projects altogether. But it could mean ensuring every programme includes at least one dialogic assessment – whether a viva, professional discussion, or reflective dialogue.

    Apprenticeships have demonstrated that dialogic assessments are effective. They are rigorous, scalable, and trusted. Now is the time for the wider higher education sector to recognise their value – not as a niche alternative, but as a core element of assessment in the AI era.

    Source link