  • Just 329 students with an EHCP got to a high tariff provider last year

    Everyone who can benefit from higher education deserves to do so. That’s pretty much what people remember the Robbins report as saying – and it is a comforting story that higher education likes to tell itself.

    But it doesn’t really hold true in the experiences of an increasingly diverse pool of potential applicants.

    The state of the art of supporting and regulating fair access to (and participation in) higher education in England has moved far beyond the (rather unsophisticated) idea of national targets and metrics. Like it or loathe it, the risk-based approach taken by the Office for Students is commendably grounded both in the experience of individual students and the academic literature.

    However, a weakness of this approach is the temptation to argue that any access gaps represent a failure of higher education providers, rather than taking a whole-system (educational and, indeed, socio-economic) perspective. When we do glance at wider problems with, say, school attainment, it may not always be universities that are best placed (or adequately supported) to address them.

    And let us not be coy here – there are gaps:

    The chart shows progression rates to HE, either to all providers or to “high tariff providers” (of which more later), for each year since 2009-10. The size of the dots represents the number of students in that population, and the colours represent the groups of characteristics: you get everything from measures of economic disadvantage, to ethnicity, to disability and – new for this year – care experience. We are looking at the students who might usually be expected to enter HE that academic year (so the cohort that turned 18 the previous year – those who took a year out before university, or who progress after resits, will not show up as progressing to HE).

    SEN and EHCP

    There are thousands of potential stories in this data – for this article I’m going to focus on special educational needs (SEN) as a factor influencing progression.

    As you can see from the chart, 21.1 per cent of students with any special educational need progressed to higher education by the age of 19 in 2023-24. This is the highest on record, but before you break open the champagne we should add that the progression rate for their peers without SEN was more than 50 per cent. And for progression to high tariff providers the gap is even starker: 14.9 per cent without SEN, 3.8 per cent with.

    Though a traditional image of a student with SEN may be of someone who is less academically able, many very academically inclined students have SEN and can progress to any destination you can think of if they can access the right support. Support is not exactly easy to come by, and it is very much a lottery whether support is available to a particular child or not. Progression to any higher education setting by 19 was 25.4 per cent for those with SEN who had more generalised support, and just 9.4 per cent for those who managed to get an education, health, and care plan (EHCP).
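
    For readers who want the arithmetic spelled out, here is a minimal Python sketch of the gaps above, using only the figures quoted in this article (the rate for students without SEN is given only as “more than 50 per cent”, so 50.0 below is a floor, not an exact value):

    ```python
    # Progression to any HE provider by age 19 (2023-24 cohort), as quoted above.
    progression_any_provider = {
        "no SEN": 50.0,       # "more than 50 per cent" - a lower bound
        "any SEN": 21.1,
        "SEN support": 25.4,  # more generalised support, no EHCP
        "EHCP": 9.4,
    }

    baseline = progression_any_provider["no SEN"]
    for group, rate in progression_any_provider.items():
        gap = baseline - rate  # gap in percentage points vs peers without SEN
        print(f"{group}: {rate:.1f}% to any provider (gap of {gap:.1f} pp)")
    ```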

    Again, the experience of pupils with an EHCP may make it more likely that they apply later on (and thus not feature in their cohort data) – those who do progress often need to top up their level 2 or 3 qualifications before being able to progress to the next level of study, all of which takes time.

    But just 1.5 per cent of students with an EHCP, 327 students, progressed to a high tariff provider. To me, that’s a systemic failing.

    Regional dimensions

    More so than any other characteristic, where you live (and, more germanely, where you go to school) has a huge impact on the educational experience of students with SEN. In Kensington and Chelsea, 45.5 per cent of students with SEN are in HE by the age of 19. In Thurrock, the figure is more like 10 per cent.

    The variation is similar for all students – 71 per cent get to university in Redbridge, 26 per cent in Knowsley.

    But this core variation (which covers everything from socio-economic status to school quality to aspirations) is overlaid by the varying proportions of students with SEN in each area, and the varying levels (and quality) of the support that can be provided.

    Some 23.3 per cent of all students in Middlesbrough have a SEN marker. In Havering the figure is 8.85 per cent (there are some outliers with low numbers of students in total).

    What is being done?

    As Alex Grady of nasen wrote on the UCAS blog earlier this year, many misconceptions still persist around SEN indicating some form of “learning difficulty” that makes higher education irrelevant or impossible. Students with SEN very often flourish at university, but the assumption that they will not attend persists – so thinking around support through and beyond the transition between compulsory education and higher education often happens late or in a piecemeal fashion.

    It is comparatively rare for a university to visit a non-mainstream school, or vice versa. There are many reasons (not least financial) why this does not happen, but there is a clear benefit to introducing students from all settings to a range of post-compulsory routes early and often. Sometimes special schools and other alternative provision settings partner with larger local schools to make this happen.

    Student records do not transition neatly between the compulsory sector and higher education, a situation not helped by the presumption that an EHCP extends to age 25 if you don’t go to university, but ends if you do (this, beautifully, is considered a “positive outcome”). A student may be used to assuming staff understand the best way to support them (as this is what happened at school) and feel uncomfortable or ill-equipped to effectively argue for similar support in HE.

    Universities do address this, both in highlighting the support that they offer students and in signposting what is available via the Disabled Students’ Allowance (many students with SEN do not identify themselves as “disabled”, and the variations in terminology are a recognised issue). But schools also have a role to play in preparing students for an application and choice experience that is pretty bewildering for all students.

    Additional data

    The DfE Widening Participation release is the only place where you get a definition of a “high tariff” provider – in 2023-24 this term referred to higher education providers with a mean tariff of 125.8 or above (last year this was 129.4).
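
    As a minimal sketch of how that definition works (assuming, as the phrasing suggests, that 129.4 was the 2022-23 threshold; the provider tariffs below are invented for illustration):

    ```python
    # "High tariff" per the DfE Widening Participation release: a provider whose
    # mean entry tariff meets or exceeds that year's threshold.
    THRESHOLDS = {"2023-24": 125.8, "2022-23": 129.4}  # 2022-23 is my reading of "last year"

    def is_high_tariff(mean_tariff: float, year: str = "2023-24") -> bool:
        return mean_tariff >= THRESHOLDS[year]

    print(is_high_tariff(127.0))             # True in 2023-24...
    print(is_high_tariff(127.0, "2022-23"))  # ...but False a year earlier
    ```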

  • Do we still value original thought?

    I have written the piece that you are now reading. But in the world of AI, what exactly does it mean to say that I’ve written it? 

    As someone who has either written or edited millions of words in my life, this question seems very important. 

    There are plenty of AI aids available to help me in my task. In fact, some are insinuating themselves into our everyday work without our explicit consent. For example, Microsoft inserted a ‘Copilot’ into Word, the programme I’m using. But I have disabled it. 

    I could also insert prompts into a service such as ChatGPT and ask it to write the piece itself. Or I could ask the chatbot direct questions and paste in the answers. Everybody who first encounters these services is amazed by what they can do. The ability to synthesise facts, arguments and ideas and express them in a desired style is truly extraordinary. So it’s possible that using chatbots would make my article more readable, accurate or interesting.

    But in all these cases, I would be using, or perhaps paraphrasing, text that had been generated by a computer. And in my opinion, this would mean that I could no longer say that I had written it. And if that were the case, what would be the point of ‘writing’ the article and putting my name on it?

    Artificial intelligence is a real asset.

    There is no doubt that we benefit from AI, whether it is in faster access to information and services, safer transport, easier navigation, diagnostics and so on. 

    Rather than a revolution, the ever-increasing automation of human tasks seems a natural extension of the expansion of computing power that has been under way since the Second World War. Computers crunch data, find patterns and generate results that simulate those patterns. In general, this saves time and effort and enhances our lives.

    So at what point does the use of AI become worrying? To me, the answer is in the generation of content that purports to be created by specific humans but is in fact not. 

    The world of education is grappling with this issue. AI gathers information, orders and analyses it, and is able to answer questions about it, whether in papers or other ways. In other words, all the tasks that a student is supposed to perform! 

    At the simplest level, students can ask a computer to do the work and submit it as their own. Schools and universities have means to detect this, but there are also ways to avoid detection. 

    The human touch

    From my limited knowledge, text produced with the help of AI can seem sterile, distanced from both the ‘writer’ and the topic. In a word, dehumanised. And this is not surprising, because it is written by a robot. How is a teacher to grade a paper that seems to have been produced in this way?

    There is no point in moralising about this. The technologies cannot be un-invented. In fact, tech companies are investing hundreds of billions of dollars in vast amounts of additional computing power that will make robots ever more present in our lives. 

    So schools and universities will have to adjust. Some of the university websites that I’ve looked at are struggling to produce straightforward, coherent guidance for students. 

    The aim must be, on the one hand, to enable students to use all the available technologies to do their research, whether the goal is to write a first-year paper or a PhD thesis, and on the other hand to use their own brains to absorb and order their research, and to express their own analysis of it. They need to be able to think for themselves. 

    Methods to prove that they can do this might be to have hand-written exams, or to test them in viva voce interviews. Clearly, these would work for many students and many subjects, but not for all. On the assumption that all students are going to use AI for some of their tasks, the onus is on educational establishments to find new ways to make sure that students can absorb information and express their analysis on their own.

    Can bots break a news story?

    If schools and universities can’t do that, there will be no point in going to university at all. Obtaining a degree will have no meaning and people will emerge from education without having learned how to use their brains.

    Another controversial area is my own former profession, journalism. Computers have subsumed many of the crafts that used to be involved in creating a newspaper. They can make the layouts, customise outputs, match images to content, and so on. 

    But only a human can spot what might be a hot political story, or describe the situation on the ground in Ukraine.  

    Journalists are right to be using AI for many purposes, for example to discover stories by analysing large sets of data. Meanwhile, more menial jobs involving statistics, such as writing up companies’ financial results and reporting on sports events, could be delegated to computers. But these stories might be boring and could miss newsworthy aspects, as well as the context and the atmosphere. Plus, does anybody actually want to read a story written by a robot? 

    Just like universities, serious media organisations are busy evolving AI policies so as to maintain a competitive edge and inform and entertain their target audiences, while ensuring credibility and transparency. This is all the more important when the dissemination of lies and fake images is so easy and prevalent. 

    Can AI replace an Ai Weiwei? 

    The creative arts are also vulnerable to AI-assisted abuse. It’s so easy to steal someone’s music, films, videos, books, indeed all types of creative content. Artists are right to appeal for legal protection. But effective regulation is going to be difficult.  

    There are good reasons, however, for people to regulate themselves. Yes, AI’s potential uses are amazing, even frightening. But it gets its material from trawling every possible type of content that it can via the internet. 

    That content is, by definition, second hand. The result of AI’s trawling of the internet is like a giant bowl of mush. Dip your spoon into it, and it will still be other people’s mush. 

    If you want to do something original, use your own brain to do it. If you don’t use your own intelligence and your own capabilities, they will wither away.

    And so I have done that. This piece may not be brilliant. But I wrote it.


     

    Questions to consider:

    1. If artificial intelligence writes a story or creates a piece of art, can that be considered original?

    2. How can journalists use artificial intelligence to better serve the public?

    3. In what ways do you think artificial intelligence is more helpful or harmful to professions like journalism and the arts?


     

  • What we lose when AI replaces teachers

    A colleague of ours recently attended an AI training where the opening slide featured a list of all the ways AI can revolutionize our classrooms. Grading was listed at the top. Sure, AI can grade papers in mere seconds, but should it?

    As one of our students, Jane, stated: “It has a rubric and can quantify it. It has benchmarks. But that is not what actually goes into writing.” Our students recognize that AI cannot replace the empathy and deep understanding that recognizes the growth, effort, and development of their voice. What concerns us most about grading our students’ written work with AI is the transformation of their audience from human to robot.

    If we teach our students throughout their writing lives that what the grading robot says matters most, then we are teaching them that their audience doesn’t matter. As Wyatt, another student, put it: “If you can use AI to grade me, I can use AI to write.” NCTE, in its position statements for Generative AI, reminds us that writing is a human act, not a mechanical one. Reducing it to automated scores undermines its value and teaches students, like Wyatt and Jane, that the only time we write is for a grade. That is a future of teaching writing we hope to never see.

    We need to pause when tech companies tout AI as the grader of student writing. This isn’t a question of capability. AI can score essays. It can be calibrated to rubrics. It can, as Jane said, provide students with encouragement and feedback specific to their developing skills. And we have no doubt it has the potential to make a teacher’s grading life easier. But just because we can outsource some educational functions to technology doesn’t mean we should.

    It is bad enough that so many students already see their teacher as their only audience. Or worse, when students are writing for teachers who see their written work strictly through the lens of a rubric, their audience is limited to the rubric. Even those options are better than writing for a bot. Instead, let’s question how often our students write to a broader audience of their peers, parents, community, or a panel of judges for a writing contest. We need to reengage with writing as a process and implement AI as a guide or aide rather than a judge with the last word on an essay score.

    Our best foot forward is to put AI in its place. The use of AI in the writing process is better served in the developing stages of writing. AI is excellent as a guide for brainstorming. It can help in a variety of ways when a student is struggling and looking for five alternatives to their current ending or an idea for a metaphor. And if you or your students like AI’s grading feature, students can paste their work into a bot for feedback prior to handing it in as a final draft.

    We need to recognize that there are grave consequences if we let a bot do all the grading. As teachers, we should recognize bot grading for what it is: automated education. We can and should leave the promises of hundreds of essays graded in an hour for the standardized test providers. Our classrooms are alive with people who have stories to tell, arguments to make, and research to conduct. We see our students beyond the raw data of their work. We recognize that the poem our student has written for their sick grandparent might be a little flawed, but it matters a whole lot to the person writing it and to the person they are writing it for. We see the excitement or determination in our students’ eyes when they’ve chosen a research topic that is important to them. They want their cause to be known and understood by others, not processed and graded by a bot.

    The adoption of AI into education should be conducted with caution. Many educators are experimenting with using AI tools in thoughtful and student-centered ways. In a recent article, David Cutler describes his experience using an AI-assisted platform to provide feedback on his students’ essays. While Cutler found the tool surprisingly accurate and helpful, the true value lies in the feedback being used as part of the revision process. As this article reinforces, the role of a teacher is not just to grade, but to support and guide learning. When used intentionally (and we emphasize, as in-process feedback) AI can enhance that learning, but the final word, and the relationship behind it, must still come from a human being.

    When we hand over grading to AI, we risk handing over something much bigger – our students’ belief that their words matter and deserve an audience. Our students don’t write to impress a rubric; they write to be heard. And when we replace the reader with a robot, we risk teaching our students that their voices only matter to the machine. We need to let AI support the writing process, not define the product. Let it offer ideas, not deliver grades. When we use it at the right moments and for the right reasons, it can make us better teachers and help our students grow. But let’s never confuse efficiency with empathy. Or algorithms with understanding.

  • What really shapes the future of AI in education?

    This post originally appeared on the Christensen Institute’s blog and is reposted here with permission.

    A few weeks ago, MIT’s Media Lab put out a study on how AI affects the brain. The study ignited a firestorm of posts and comments on social media, given its provocative finding that students who relied on ChatGPT for writing tasks showed lower brain engagement on EEG scans, hinting that offloading thinking to AI can literally dull our neural activity. For anyone who has used AI, it’s not hard to see how AI systems can become learning crutches that encourage mental laziness.

    But I don’t think a simple “AI harms learning” conclusion tells the whole story. In this blog post (adapted from a recent series of posts I shared on LinkedIn), I want to add to the conversation by tackling the potential impact of AI in education from four angles. I’ll explore how AI’s unique adaptability can reshape rigid systems, how it both fights and fuels misinformation, how AI can be both good and bad depending on how it is used, and why its funding model may ultimately determine whether AI serves learners or short-circuits their growth.

    What if the most transformative aspect of AI for schools isn’t its intelligence, but its adaptability?

    Most technologies make us adjust to them. We have to learn how they work and adapt our behavior. Industrial machines, enterprise software, even a basic thermostat—they all come with instructions and patterns we need to learn and follow.

    Education highlights this dynamic in a different way. How does education’s “factory model” work when students don’t come to school as standardized raw inputs? In many ways, schools expect students to conform to the requirements of the system—show up on time, sharpen your pencil before class, sit quietly while the teacher is talking, raise your hand if you want to speak. Those social norms are expectations we place on students so that standardized education can work. But as anyone who has tried to manage a group of six-year-olds knows, a class of students is full of complicated humans who never fully conform to what the system expects. So, teachers serve as the malleable middle layer. They adapt standardized systems to make them work for real students. Without that human adaptability, the system would collapse.

    Same thing in manufacturing. Edgar Schein notes that engineers aim to design systems that run themselves. But operators know systems never work perfectly. Their job—and often their sense of professional identity—is about having the expertise to adapt and adjust when things inevitably go off-script. Human adaptability in the face of rigid systems keeps everything running.

    So, how does this relate to AI? AI breaks the mold of most machines and systems humans have designed and dealt with throughout history. It doesn’t just follow its algorithm and expect us to learn how to use it. It adapts to us, like how teachers or factory operators adapt to the realities of the world to compensate for the rigidity of standardized systems.

    You don’t need a coding background or a manual. You just speak to it. (I literally hit the voice-to-text button and talk to it like I’m explaining something to a person.) Messy, natural human language—the age-old human-to-human interface that our brains are wired to pick up on as infants—has become the interface for large language models. In other words, what makes today’s AI models amazing is their ability to use our interface, rather than asking us to learn theirs.

    For me, the early hype about “prompt engineering” never really made sense. It assumed that success with AI required becoming an AI whisperer who knew how to speak AI’s language. But in my experience, working well with AI is less about learning special ways to talk to AI and more about just being a clear communicator, just like a good teacher or a good manager.

    Now imagine this: what if AI becomes the new malleable middle layer across all kinds of systems? Not just a tool, but an adaptive bridge that makes other rigid, standardized systems work well together. If AI can make interoperability nearly frictionless—adapting to each system and context, rather than forcing people to adapt to it—that could be transformative. It’s not hard to see how this shift might ripple far beyond technology into how we organize institutions, deliver services, and design learning experiences.

    Consider three concrete examples of how this might transform schools. First, our current system heavily relies on the written word as the medium for assessing students’ learning. To be clear, writing is an important skill that students need to develop to help them navigate the world beyond school. Yet at the same time, schools’ heavy reliance on writing as the medium for demonstrating learning creates barriers for students with learning disabilities, neurodivergent learners, or English language learners—all of whom may have a deep understanding but struggle to express it through writing in English. AI could serve as that adaptive layer, allowing students to demonstrate their knowledge and receive feedback through speech, visual representations, or even their native language, while still ensuring rigorous assessment of their actual understanding.

    Second, it’s obvious that students don’t all learn at the same pace—yet we’ve forced learning to happen at a uniform timeline because individualized pacing quickly becomes completely unmanageable when teachers are on their own to cover material and provide feedback to their students. So instead, everyone spends the same number of weeks on each unit of content and then moves to the next course or grade level together, regardless of individual readiness. Here again, AI could serve as that adaptive layer for keeping track of students’ individual learning progressions and then serving up customized feedback, explanations, and practice opportunities based on students’ individual needs.

    Third, success in school isn’t just about academics—it’s about knowing how to navigate the system itself. Students need to know how to approach teachers for help, track announcements for tryouts and auditions, fill out paperwork for course selections, and advocate for themselves to get into the classes they want. These navigation skills become even more critical for college applications and financial aid. But there are huge inequities here because much of this knowledge comes from social capital—having parents or peers who already understand how the system works. AI could help level the playing field by serving as that adaptive coaching layer, guiding any student through the bureaucratic maze rather than expecting them to figure it out on their own or rely on family connections to decode the system.

    Can AI help solve the problem of misinformation?

    Most people I talk to are skeptical of the idea in this subhead—and understandably so.

    We’ve all seen the headlines: deep fakes, hallucinated facts, bots that churn out clickbait. AI, many argue, will supercharge misinformation, not solve it. Others worry that overreliance on AI could make people less critical and more passive, outsourcing their thinking instead of sharpening it.

    But what if that’s not the whole story?

    Here’s what gives me hope: AI’s ability to spot falsehoods and surface truth at scale might be one of its most powerful—and underappreciated—capabilities.

    First, consider what makes misinformation so destructive. It’s not just that people believe wrong facts. It’s that people build vastly different mental models of what’s true and real. They lose any shared basis for reasoning through disagreements. Once that happens, dialogue breaks down. Facts don’t matter because facts aren’t shared.

    Traditionally, countering misinformation has required human judgment and painstaking research, both time-consuming and limited in scale. But AI changes the equation.

    Unlike any single person, a large language model (LLM) can draw from an enormous base of facts, concepts, and contextual knowledge. LLMs know far more facts from their training data than any person can learn in a lifetime. And when paired with tools like a web browser or citation database, they can investigate claims, check sources, and explain discrepancies.

    Imagine reading a social media post and getting a sidebar summary—courtesy of AI—that flags misleading statistics, offers missing context, and links to credible sources. Not months later, not buried in the comments—instantly, as the content appears. The technology to do this already exists.
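
    As a rough sketch of what such a pipeline could look like (everything here is hypothetical: the function names, the prompts, and the stubbed ask_model call stand in for whatever LLM-plus-browsing service a platform would actually wire in):

    ```python
    # Hypothetical sketch of an AI fact-check sidebar; nothing here is a real API.
    from dataclasses import dataclass, field

    @dataclass
    class SidebarNote:
        claim: str
        assessment: str  # e.g. "misleading statistic", "missing context"
        sources: list = field(default_factory=list)

    def ask_model(prompt: str) -> str:
        """Stub for a call to some LLM with web-browsing/citation tools."""
        raise NotImplementedError("wire in a real model client here")

    def build_sidebar(post_text: str) -> list:
        # 1) extract checkable claims; 2) assess each claim against sources
        claims = ask_model(f"List the checkable factual claims in:\n{post_text}")
        return [SidebarNote(claim=c, assessment=ask_model(f"Assess against credible sources: {c}"))
                for c in claims.splitlines() if c.strip()]
    ```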

    Of course, AI is not perfect as a fact-checker. When large language models generate text, they aren’t producing precise queries of facts; they’re making probabilistic guesses at what the right response should be based on their training, and sometimes those guesses are wrong. (Just like human experts, they also generate answers by drawing on their expertise, and they sometimes get things wrong.) AI also has its own blind spots and biases based on the biases it inherits from its training data. 

    But in many ways, both hallucinations and biases in AI are easier to detect and address than the false statements and biases that come from millions of human minds across the internet. AI’s decision rules can be audited. Its output can be tested. Its propensity to hallucinate can be curtailed. That makes it a promising foundation for improving trust, at least compared to the murky, decentralized mess of misinformation we’re living in now.

    This doesn’t mean AI will eliminate misinformation. But it could dramatically increase the accessibility of accurate information, and reduce the friction it takes to verify what’s true. Of course, most platforms don’t yet include built-in AI fact-checking, and even if they did, that approach would raise important concerns. Do we trust the sources that those companies prioritize? The rules their systems follow? The incentives that guide how their tools are designed?

    But beyond questions of trust, there’s a deeper concern: when AI passively flags errors or supplies corrections, it risks turning users into passive recipients of “answers” rather than active seekers of truth. Learning requires effort. It’s not just about having the right information—it’s about asking good questions, thinking critically, and grappling with ideas. That’s why I think one of the most important things to teach young people about how to use AI is to treat it as a tool for interrogating the information and ideas they encounter, both online and from AI itself. Just like we teach students to proofread their writing or double-check their math, we should help them develop habits of mind that use AI to spark their own inquiry—to question claims, explore perspectives, and dig deeper into the truth.

    Still, this focuses on just one side of the story. As powerful as AI may be for fact-checking, it will inevitably be used to generate deepfakes and spin persuasive falsehoods.

    AI isn’t just good or bad—it’s both. The future of education depends on how we use it.

    Much of the commentary around AI takes a strong stance: either it’s an incredible force for progress or it’s a terrifying threat to humanity. These bold perspectives make for compelling headlines and persuasive arguments. But in reality, the world is messy. And most transformative innovations—AI included—cut both ways.

    History is full of examples of technologies that have advanced society in profound ways while also creating new risks and challenges. The Industrial Revolution made it possible to mass-produce goods that have dramatically improved the quality of life for billions. It has also fueled pollution and environmental degradation. The internet connects communities, opens access to knowledge, and accelerates scientific progress—but it also fuels misinformation, addiction, and division. Nuclear energy can power cities—or obliterate them.

    AI is no different. It will do amazing things. It will do terrible things. The question isn’t whether AI will be good or bad for humanity—it’s how the choices of its users and developers will determine the directions it takes. 

    Because I work in education, I’ve been especially focused on the impact of AI on learning. AI can make learning more engaging, more personalized, and more accessible. It can explain concepts in multiple ways, adapt to your level, provide feedback, generate practice exercises, or summarize key points. It’s like having a teaching assistant on demand to accelerate your learning.

    But it can also short-circuit the learning process. Why wrestle with a hard problem when AI will just give you the answer? Why wrestle with an idea when you can ask AI to write the essay for you? And even when students have every intention of learning, AI can create the illusion of learning while leaving understanding shallow.

    This double-edged dynamic isn’t limited to learning. It’s also apparent in the world of work. AI is already making it easier for individuals to take on entrepreneurial projects that would have previously required whole teams. A startup no longer needs to hire a designer to create its logo, a marketer to build its brand assets, or an editor to write its press releases. In the near future, you may not even need to know how to code to build a software product. AI can help individuals turn ideas into action with far fewer barriers. And for those who feel overwhelmed by the idea of starting something new, AI can coach them through it, step by step. We may be on the front end of a boom in entrepreneurship unlocked by AI.

    At the same time, however, AI is displacing many of the entry-level knowledge jobs that people have historically relied on to get their careers started. Tasks like drafting memos, doing basic research, or managing spreadsheets—once done by junior staff—can increasingly be handled by AI. That shift is making it harder for new graduates to break into the workforce and develop their skills on the job.

    One way to mitigate these challenges is to build AI tools that are designed to support learning, not circumvent it. For example, Khan Academy’s Khanmigo helps students think critically about the material they’re learning rather than just giving them answers. It encourages ideation, offers feedback, and prompts deeper understanding—serving as a thoughtful coach, not a shortcut. But the deeper issue AI brings into focus is that our education system often treats learning as a means to an end—a set of hoops to jump through on the way to a diploma. To truly prepare students for a world shaped by AI, we need to rethink that approach. First, we should focus less on teaching only the skills AI can already do well. And second, we should make learning more about pursuing goals students care about—goals that require curiosity, critical thinking, and perseverance. Rather than training students to follow a prescribed path, we should be helping them learn how to chart their own. That’s especially important in a world where career paths are becoming less predictable, and opportunities often require the kind of initiative and adaptability we associate with entrepreneurs.

    In short, AI is just the latest technological double-edged sword. It can support learning, or short-circuit it. Boost entrepreneurship—or displace entry-level jobs. The key isn’t to declare AI good or bad, but to recognize that it’s both, and then to be intentional about how we shape its trajectory. 

    That trajectory won’t be determined by technical capabilities alone. Who pays for AI, and what they pay it to do, will influence whether it evolves to support human learning, expertise, and connection, or to exploit our attention, take our jobs, and replace our relationships.

    What actually determines whether AI helps or harms?

    When people talk about the opportunities and risks of artificial intelligence, the conversation tends to focus on the technology’s capabilities—what it might be able to do, what it might replace, what breakthroughs lie ahead. But just focusing on what the technology does—both good and bad—doesn’t tell the whole story. The business model behind a technology influences how it evolves.

    For example, when advertisers are the paying customer, as they are for many social media platforms, products tend to evolve to maximize user engagement and time-on-platform. That’s how we ended up with doomscrolling—endless content feeds optimized to occupy our attention so companies can show us more ads, often at the expense of our well-being.

    That incentive could be particularly dangerous with AI. If you combine superhuman persuasion tools with an incentive to monopolize users’ attention, the results will be deeply manipulative. And this gets at a concern my colleague Julia Freeland Fisher has been raising: What happens if AI systems start to displace human connection? If AI becomes your go-to for friendship or emotional support, it risks crowding out the real relationships in your life.

    Whether or not AI ends up undermining human relationships depends a lot on how it’s paid for. An AI built to hold your attention and keep you coming back might try to be your best friend. But an AI built to help you solve problems in the real world will behave differently. That kind of AI might say, “Hey, we’ve been talking for a while—why not go try out some of the things we’ve discussed?” or “Sounds like it’s time to take a break and connect with someone you care about.”

    Some decisions made by the major AI companies seem encouraging. Sam Altman, OpenAI’s CEO, has said that adopting ads would be a last resort. “I’m not saying OpenAI would never consider ads, but I don’t like them in general, and I think that ads-plus-AI is sort of uniquely unsettling to me.” Instead, most AI developers like OpenAI and Anthropic have turned to user subscriptions, an incentive structure that doesn’t steer as hard toward addictiveness. OpenAI is also exploring AI-centric hardware as a business model—another experiment that seems more promising for user wellbeing.

    So far, we’ve been talking about the directions AI will take as companies develop their technologies for individual consumers, but there’s another angle worth considering: how AI gets adopted into the workplace. One of the big concerns is that AI will be used to replace people, not necessarily because it does the job better, but because it’s cheaper. That decision often comes down to incentives. Right now, businesses pay a lot in payroll taxes and benefits for every employee, but they get tax breaks when they invest in software and machines. So, from a purely financial standpoint, replacing people with technology can look like a smart move. In his book The Once and Future Worker, Oren Cass discusses this problem and suggests flipping that script—taxing capital more and labor less—so companies aren’t nudged toward cutting jobs just to save money. That change wouldn’t stop companies from using AI, but it would encourage them to deploy it in ways that complement, rather than replace, human workers.

    Currently, while AI companies operate without sustainable business models, they’re buoyed by investor funding. Investors are willing to bankroll companies with little or no revenue today because they see the potential for massive profits in the future. But that investor model creates pressure to grow rapidly and acquire as many users as possible, since scale is often a key metric of success in venture-backed tech. That drive for rapid growth can push companies to prioritize user acquisition over thoughtful product development, potentially at the expense of safety, ethics, or long-term consequences. 

    Given these realities, what can parents and educators do? First, they can be discerning customers. There are many AI tools available, and the choices they make matter. Rather than simply opting for what’s most entertaining or immediately useful, they can support companies whose business models and design choices reflect a concern for users’ well-being and societal impact.

    Second, they can be vocal. Journalists, educators, and parents all have platforms—whether formal or informal—to raise questions, share concerns, and express what they hope to see from AI companies. Public dialogue helps shape media narratives, which in turn shape both market forces and policy decisions.

    Third, they can advocate for smart, balanced regulation. As I noted above, AI shouldn’t be regulated as if it’s either all good or all bad. But reasonable guardrails can ensure that AI is developed and used in ways that serve the public good. Just as the customers and investors in a company’s value network influence its priorities, so too can policymakers play a constructive role as value network actors by creating smart policies that promote general welfare when market incentives fall short.

    In sum, a company’s value network—who its investors are, who pays for its products, and what they hire those products to do—determines what companies optimize for. And in AI, that choice might shape not just how the technology evolves, but how it impacts our lives, our relationships, and our society.

  • ‘Everything, everywhere, all at once’: How Trump has upended higher ed finance in 2025

    NATIONAL HARBOR, Md. – Liz Clark would have lost a bet on the massive Republican tax and spending bill passed and signed into law earlier this month.

    Clark, the vice president for policy and research at the National Association of College and University Business Officers, said she didn’t expect the bill to be finalized until early fall. While only off by a few months, Clark’s missed guess illustrates just one of many unexpected developments for higher education — and the world — since President Donald Trump retook office in January. 

    Speaking at NACUBO’s annual conference near Washington, D.C., on Sunday, Clark pointed to more than a decade of divided governments, intraparty policy squabbles and political gridlock as Democrats and Republicans have traded thin majorities in Congress. 

    Based on that history, it might have seemed improbable that Republicans could swiftly move a massive policy package through two houses of Congress where they held razor-thin leads. But Republicans did, and Congress got a bill to Trump’s desk by the date he demanded. 

    “This is a quintessential moment in seeing that past performance is no indication of future results,” Clark said. 

    The bill has plenty of implications for college finance departments, not to mention students and all other stakeholders in higher education. And it’s just one of many policy sea changes the sector has seen since Trump and a Republican-led Congress came to office six short months ago. 

    From new taxes to new legal liabilities, below is a look at how politics and policy are impacting college finance offices. 

    A blitz of executive orders

    So far in his roughly six months in office, Trump has already issued more executive orders than Joe Biden did during his entire four-year term. According to data from the American Presidency Project, Trump is on pace to issue more orders per year than any other president in history, except potentially Franklin Roosevelt in his first term at the height of the Great Depression. 

    And several of those orders have cut to the heart of higher ed in the U.S., including orders targeting college diversity initiatives and seeking to revamp accreditation.

    “Every president has tested the limits of executive power. This is not new,” Clark said. “What is new, at least for us, especially when it comes to issues impacting higher education, is the scope, the number of executive orders, the number of changes in law that are impacting your campuses.” She added, “We have, this year, been dealing with everything, everywhere, all at once.”

    Trump’s order on diversity, equity and inclusion programs has drawn rebukes, including through litigation, for being vague and potentially stifling to free speech and intellectual activity. 

    “DEI is not illegal,” Clark said, pointing specifically to the administration’s executive order on the topic. 

    College researchers, meanwhile, are being asked to certify their compliance with executive orders, including those related to DEI, when applying for grants. That can present a dicey situation when directives are vaguely worded. 

    In some cases, federal agencies have even asked researchers to certify compliance with all future executive orders that may be issued someday, noted Jen Gartner, deputy general counsel for University of Maryland, College Park, at a NACUBO conference panel Monday.

    “Obviously, we don’t know what we would be certifying compliance with,” Gartner said.

    Certification requirements for grants can vary by agency, but Gartner noted the one commonality is that they “now all mention that our certification is material for the False Claims Act.”

    The False Claims Act bars fraud in government contracting. Trump’s Department of Justice in May launched an initiative that threatens universities with investigations under the law over their DEI programs and policies for transgender students and athletes.    

  • The accidental facility manager: Robert Alemany

    Relationship building is the most important skill Robert Alemany says he brings to his role as director of facilities at The Buckley School, a private K-9 school in New York City.

    [Photo: Robert Alemany, director of facilities at The Buckley School, NYC. Permission granted by The Buckley School.]

    With a background in teaching and business management, he had to learn the technical side of facility management from scratch when he shifted to that line of work about five years ago. Yet it’s people management that he relies on most to succeed in his role overseeing a four-building campus of close to 400 students, he says.

    Among his big challenges this year is meeting the energy efficiency goals under New York City’s Local Law 97. One of the school’s older buildings needs upgrades if it’s to meet the city’s tough building performance standards. To learn about his career path and how he’s tackling the challenges that come with the job, Facilities Dive sat down with Alemany for a short conversation.

    The following Q&A has been edited for clarity and length. 

    FACILITIES DIVE: How did you become involved in facilities management?

    ROBERT ALEMANY: It was accidental. I started out teaching at public schools in different roles. I decided I didn’t want to be a teacher forever and got an MBA. I thought I’d go work in finance, but I kept getting these managerial roles at educational places, then got a job at a church in New York City. The church bought a parking garage, gutted it and turned it into this really contemporary facility. They decided that Monday through Saturday they would rent it out as space and just use it for church on Sundays. They used the income from the event space to pay off the mortgage. 

    I ran operations there for a while. They had somebody taking care of the building [who departed, then they] said, “Why don’t you step into this role?” I was like, “I really appreciate it, but I don’t know which way to hold a screwdriver, so I don’t think I’m your guy.” And they said, “We appreciate your management skills and the kind of person you are. You can get the skills to excel in this role,” and they gave me a shot. I started cold calling facilities managers in similar roles, getting books and doing online courses and just loved it. 

    A few years down the line, a position opened up at the school I’m at now and I thought it was a perfect fit; I had these newly acquired facility skills and I’d been a teacher. I know what they experience every day. That led me to where I am now.

    Talk about your training. 

    ALEMANY: I continue to build off what I learned and got my real estate license. I still talk to people in the field, invest in books and do online courses whenever I can. You just never stop talking to people. Even when we have vendors come in, I ask, “Why? What kind of stuff are you doing here?” 

    What’s the most important skill the job requires?

    ALEMANY: People management, which is not something that [typically] comes to mind. When people think of facilities, they think of pipes and HVAC systems and all that. But there are people that are involved in maintaining and repairing that equipment. So it’s about building relationships, especially in a city as busy as New York City. To get emergency service here, for example, relationships make a difference. We might not be on the top of the list of a company. We’re just a small fish in the pond. But by building relationships, we become a priority for others. And when you have to negotiate contracts and work through different things, people skills are at the top.

    Are there laws or regulations that cause you the most challenge? How do you manage compliance? 

    ALEMANY: Here in New York City, it’s Local Law 97, the law governing carbon emissions. One of the buildings was last renovated in 1974. We’re fine for this year, but by 2030 we’re not going to meet the threshold that they’ve set. We’re going to be fined about $35,000 or something like that. So we’re undergoing a renovation project and engaging architects and some engineers to help us reduce our carbon footprint.

    As it relates to school buildings, some of it is health department stuff because we’ve got a nurse here. So, it’s making sure that chemical cleaners are away and out of reach of kids. We also face a lot of the regulations that other buildings do, but when they come in and do their inspections, they’re a lot stricter with us because we’re a school.

    What keeps you up at night? 

    ALEMANY: Just the idea that every facilities manager has: that something weird is going to go wrong, where you forgot if you did something or forgot to talk to someone. The unexpected. But that comes with the role. There’s a level of unpredictability, but it drives you to be as well organized as you can and think through things. Even if something doesn’t happen, you think through multiple steps and say, “Okay, if it did, what would I do? How would I respond?”  

    What advice do you have for someone who’s interested in becoming a facility manager?

    ALEMANY: Be humble, ask questions and learn from those around you. There are a lot of people that have done this before, and it’s not just facility managers, but those that have worked in the trades that love to share their passion for what they do. Don’t try to re-create the wheel. Absorb knowledge and create friendships and relationships. If anybody would want to pursue the career, I would encourage them to do it, because it really is a great career.  

    Did you have a unique journey into facilities management? Are you managing through challenges others could benefit from learning about? If you’re interested in sharing your story with others in your field, Facilities Dive would appreciate hearing from you. Send an email to [email protected]

    Correction: We have updated this story to correct the number of students at the school and clarify Alemany’s remarks about the age and condition of one building and his career path.

     

  • What can be learned from Texas’ surge in uncertified teachers?

    Though it’s expected that teacher turnover will decrease over the next few years, it’s estimated that there were at least 49,000 vacant teaching positions and 400,000 underqualified educators instructing in classrooms nationwide during the 2024-25 school year, according to a project led by researchers from the University of Missouri and the University of Pittsburgh.

    Texas has one of the highest teacher underqualification rates in the country, according to the University of Missouri-University of Pittsburgh research project. 

    Between the 2019-20 and 2024-25 school years, the total number of uncertified teachers in Texas jumped from 12,900 to 42,100, the Texas Education Agency found. That means 12% of the state’s total teachers were uncertified by 2024-25 compared to 3.8% before the pandemic.

    On top of that, 34% of the nearly 49,200 newly hired teachers in the 2023-24 school year had no Texas teaching certifications, according to TEA data.
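
    A quick back-of-envelope check of those figures (a sketch, not TEA methodology; the statewide totals are only implied by the quoted percentages):

    ```python
    # Uncertified-teacher counts and shares quoted above imply these totals.
    uncertified_2019, share_2019 = 12_900, 0.038
    uncertified_2024, share_2024 = 42_100, 0.12

    total_2019 = uncertified_2019 / share_2019  # ~339,000 teachers implied
    total_2024 = uncertified_2024 / share_2024  # ~351,000 teachers implied

    uncertified_new_hires = 0.34 * 49_200       # ~16,700 new hires without certification

    print(f"implied totals: {total_2019:,.0f} (2019-20) -> {total_2024:,.0f} (2024-25)")
    print(f"uncertified new hires in 2023-24: ~{uncertified_new_hires:,.0f}")
    ```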

    Texas makes a change

    Texas’ growing reliance on uncertified teachers stems from the District of Innovation policy enacted by the state legislature in 2015. 

    Some 986 Texas school districts participate in the program, which effectively allows them to waive teacher certification requirements automatically, even though it was initially intended just for career and technical education teachers, said Jacob Kirksey, an assistant professor of education policy at Texas Tech University. Kirksey is also the associate director of the university’s Center for Innovative Research in Change, Leadership and Education.

    Since the pandemic, however, there has been a “dramatic spike” in districts using the District of Innovation program to help with hiring uncertified teachers for foundational subject areas, Kirksey said.

    According to Kirksey’s research, Texas’ use of uncertified teachers with no classroom experience led to major learning losses for students. Those taught by new uncertified educators lost 4 months in reading and 3 months in math compared to their peers taught by certified instructors.

    But a major shift is underway: HB2, a new state law enacted in June, will phase out all uncertified teachers in foundational content areas by the 2029-30 school year.  

    Now in Texas, Kirksey said, “we’ve seen the extent of the damage. I think our legislature realized, ‘OK, we created this hole, and now we need to put in the work to fix it.’”

    HB2 also incentivizes districts to hire high-quality teachers, he said. Under the new law, districts can receive $1,000 bonuses for every teacher they certify who previously lacked necessary credentials. Other larger bonuses can be earned from the state for mentoring and training teachers. 

    South Carolina embraces uncertified teachers

    On the flip side, some states are implementing laws that would allow more uncertified teachers to enter classrooms.

    In South Carolina, for instance, a law enacted in May launched a five-year pilot program that will allow public school districts to hire uncertified teachers — capped at 10% of a district’s instructional staff. 

    These teachers must have a bachelor’s or master’s degree with at least five years of relevant work experience in the subject area they are hired to instruct. Uncertified teachers must also enroll in an educator preparation program within their first three years of instruction. 
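
    Read as a rule set, the pilot’s constraints might be sketched like this (my paraphrase of the article, not statutory language; the helper name is invented):

    ```python
    # Eligibility sketch for South Carolina's pilot, per the description above:
    # a 10% cap on uncertified instructional staff, plus degree and experience rules.
    def may_hire_uncertified(current_uncertified: int, instructional_staff: int,
                             has_degree: bool, years_relevant_experience: int) -> bool:
        under_cap = (current_uncertified + 1) <= 0.10 * instructional_staff
        qualified = has_degree and years_relevant_experience >= 5
        return under_cap and qualified

    print(may_hire_uncertified(9, 100, True, 6))   # True: 10th hire just meets the cap
    print(may_hire_uncertified(10, 100, True, 6))  # False: an 11th would exceed it
    ```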

    The new law comes as South Carolina’s school districts reported over 1,000 teacher vacancies at the beginning of the 2024-25 school year — 600 fewer vacancies, or a 35% decrease, from the previous year, according to a November 2024 analysis by the Center for Educator Recruitment, Retention, and Advancement. 

    Dena Crews, president of the South Carolina Education Association, said she’s concerned that the law doesn’t require uncertified teachers to take any foundational training before they can instruct students. “The business world and the education world are not the same,” she said.

    Uncertified teachers also don’t have any incentive to stay in schools and won’t face any consequences if they quit in the middle of their contract, Crews said. For instance, if one of these teachers doesn’t know anything about classroom management and they can’t get students’ attention for days on end, they may not want to come back and teach.

  • Northwestern University Announces Major Staff Cuts Amid Federal Funding Crisis

    Northwestern University is moving forward with plans to eliminate more than 400 staff positions as it confronts significant financial challenges stemming from a $790 million federal funding freeze implemented by the Trump administration, according to multiple sources familiar with internal discussions.

    The cuts will affect staff across multiple schools within the university system, including the Weinberg College of Arts and Sciences and the McCormick School of Engineering. Administrators have begun notifying affected departments of the impending workforce reductions.

    In a university-wide communication released earlier this week, Northwestern leadership confirmed the elimination of approximately 425 positions throughout the institution. Half of these positions are currently vacant, while the remainder will result in actual job losses. The reductions are expected to decrease the university’s staff-related budget by roughly 5 percent.

    The administration characterized the decision as necessary to address what it termed a “significant budget gap” that cannot be resolved without reducing personnel expenses, which represent 56 percent of Northwestern’s total annual operating costs.
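
    The arithmetic implied by those figures, as a hedged sketch (it assumes the “staff-related budget” and the 56 percent personnel share refer to comparable cost bases, which the announcement does not confirm):

    ```python
    # Rough arithmetic behind the reported cuts.
    positions_cut = 425
    vacant = positions_cut // 2          # "half of these positions are currently vacant"
    layoffs = positions_cut - vacant
    print(f"~{vacant} vacancies eliminated, ~{layoffs} actual job losses")

    # If staff costs fall ~5% and personnel is 56% of total operating costs,
    # the implied saving is on the order of 0.05 * 0.56 of the total budget.
    print(f"implied total-budget saving: ~{0.05 * 0.56:.1%}")
    ```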

    Prior to implementing the staff reductions, university leadership directed schools and administrative units to plan the cuts carefully, with instructions to “think strategically about how to minimize the impacts to their units, our workforce, students, and the University.”

     

  • Abrupt Pause, Unpause of Grants Doesn’t End NIH Funding Woe

    The Tuesday night news quickly sowed alarm among researchers: Media outlets reported that the Trump administration had stopped the National Institutes of Health from funding any new grants. The Wall Street Journal wrote that “certain grants that are up for renewal” were also cut off, and STAT, along with other outlets, later confirmed that reporting.

    The newspaper reported that the Office of Management and Budget was blocking these billions of dollars in research funding for the rest of the fiscal year, which ends Sept. 30. After that, the dollars would return, unspent, to the Treasury. This nationwide halt to grants stemmed from an OMB footnote in a budget document, the Journal reported, adding that “the fourth quarter of the fiscal year is typically the busiest for grant-giving institutes at the NIH.”

Inside Higher Ed reviewed screenshots of an email from an NIH employee saying, “Research grant, R&D contract, or training awards cannot be issued during this pause.” The funding halt would have meant an end to new research to help find and improve cures and treatments for diseases, and would have stanched the flow of federal dollars to already financially beleaguered universities and labs nationwide.

“This is undeniably an unforced error, since this will not only harm current and future American patients, but the disruptive and chilling effect of this sudden holding back of promised funds will further jeopardize the future of the American medical research enterprise,” Association of American Universities president Barbara R. Snyder said in a statement Tuesday.

But before the night was over, the Trump administration appeared to reverse course. In an updated article citing unnamed sources, the Journal reported that “senior White House officials intervened.” (OMB is part of the executive branch.) The Journal said officials at the Health and Human Services Department, which includes NIH, fought the pause for days, but OMB relented only after the newspaper published its initial story Tuesday.

    In response to Inside Higher Ed’s written questions and interview requests about the situation Wednesday, the White House and HHS both sent the same statement from an HHS spokesperson: “The programmatic review is over. The funds are out.”

An OMB spokesperson posted on X that the office had been “waiting for more information from NIH” before releasing the funds.

    The NIH is one of the largest sources of funding for research at colleges and universities, and it touts itself as the “largest single public funder of biomedical and behavioral research in the world.” Tuesday night’s controversy wasn’t the first—and likely won’t be the last—upheaval that this crucial agency has faced under the Trump administration.

From grant cancellations to the White House proposal to slash the agency’s budget by 40 percent for the next fiscal year, institutions and researchers have seen the flow of NIH grant money stymied. On top of all this, the reportedly now-abandoned OMB move to stop grant awards highlights continuing concerns about the fate of the grant dollars that the NIH still hasn’t given out this fiscal year.

Since Trump took office, the NIH has awarded fewer grants than in previous years, multiple analyses have found. A former NIH official estimated to Science that at least $6 billion of the agency’s $48 billion budget could be sent back. In a higher estimate, Sen. Patty Murray, the top Democrat on the Senate Appropriations Committee, said in a statement that what OMB reportedly tried to do before reversing course Tuesday “would choke off approximately $15 billion in funding that would otherwise go to institutions across the country.”

    A nongovernment official familiar with the NIH appropriations process told Inside Higher Ed that, within a sample of major universities surveyed, institutions are down 20 to 48 percent in NIH award and renewal funds compared to the same time last year.

    The official, who requested anonymity to maintain relationships with people within the administration, said Wednesday that there’s been “a very, very slow spend at NIH, even prior to last night’s fire drill.” The official said they don’t think NIH has ever had to push out so much remaining money in such a short time, and there’s “a very small amount of NIH staff left to allocate those funds.”

    Heather Pierce, senior director for science policy at the Association of American Medical Colleges, told Inside Higher Ed that Tuesday’s news “caused a real concern across the research enterprise very quickly. This is a community that has seen not just threats but actual damaging changes to the typically stable federally funded research grants take place overnight, or even faster.

“By any measure, the pace of grant funding is a fraction of what it has been in any other year, and that includes grant renewals, that includes new funding opportunities,” Pierce added. “And the pace with which grant applications are reviewed and awarded is far below what we’ve seen in the past, and that includes applications that were submitted a long time ago that have already been scored and gotten very competitive scores that would be expected to be funded.”

    Joanne Padrón Carney, chief government relations officer for the American Association for the Advancement of Science, said the reported freeze “just reinforced the current mood among researchers that the future of scientific research at NIH is still in question and could change at a moment’s notice, but also that this isn’t just about NIH. This cloud of uncertainty hovers over other agencies as well, such as the National Science Foundation.”

    Carney added that “the head of the Office of Management and Budget has made public his interest in reducing spending and reducing the size of government and using what tools that he is able to use to do that.”

    Russell Vought, head of OMB, hasn’t sworn off using rescission legislation, which can be passed with a simple majority in both chambers of Congress, to take back already appropriated funds during a fiscal year. NPR also reported that he’s called Congress’s spending bills “a ceiling … not a floor.”

Murray, who represents Washington State, previously warned that the Trump administration’s use of such legislation to claw back funds already appropriated for this fiscal year—as it recently did for public broadcasting money—could scuttle consensus on the budget for next fiscal year.

    Carney attributed the slowdown in NIH grants to multiple factors, including the regular change in presidential administrations, Congress adopting a continuing resolution instead of a budget for this fiscal year and the Trump administration’s executive orders and other actions.

    “It’s like throwing sand into the machine,” Carney said. She said her association is pleased “that the funding will continue to flow, but it’s still unknown whether that flow of funds will be in drips or will be full stream, and we only have two months left until the end of the fiscal year.”

    Some Senate Republicans recently called on NIH and OMB to send more money out the door, as directed in the continuing resolution Congress passed in March.

    “We are concerned by the slow disbursement rate of [fiscal year 2025] NIH funds, as it risks undermining critical research and the thousands of American jobs it supports,” the senators wrote in a letter to OMB. “Suspension of these appropriated funds—whether formally withheld or functionally delayed—could threaten Americans’ ability to access better treatments and limit our nation’s leadership in biomedical science. It also risks inadvertently severing ongoing NIH-funded research prior to actionable results.”

    Tuesday night’s controversy came as some Republican members of Congress have joined Democrats in opposing the president’s proposal to gut the NIH’s funding for fiscal 2026. The Senate Appropriations Committee is meeting today, and it’s set to unveil how much it plans to send NIH next fiscal year.

    Carney said, “The U.S. is considered a global leader in biomedical research and medical discoveries, and we can’t afford to lose opportunities for advancing new discoveries and therapies and treatments for diseases that affect millions all over the world.

    “So when it comes to Alzheimer’s or cancer or infectious diseases, this is about hope,” she said. “It shouldn’t be about politics.”

  • VP Online Enrollment, Integrated Marketing Solutions, Carnegie

The last time we caught up with Shankar Prasad, he was telling us about his new role as chief strategy officer at Carnegie. Shankar reached out to say that he is recruiting for the key role of Carnegie’s VP of online enrollment and integrated marketing solutions. Since I’m always on the lookout for roles at the intersection of learning, technology and higher education change to share with our community, this one seemed perfect. Shankar graciously agreed to answer my questions about the role.

    Q: What is the mandate behind this role? How does it help align with and advance the company’s strategic priorities?

A: Carnegie’s Online Program Experience (OPX) business line is an important growth area. The company aims to be the premier provider of integrated marketing and enrollment solutions for online programs. The mandate of the VP of online enrollment and integrated marketing solutions is to build and own the sales plan for this OPX business, drive revenue growth, and ensure that Carnegie’s full suite of services (research, strategy, digital marketing, lead generation, creative and website development) is successfully cross‑sold to new and existing clients.

    The job description states that the VP will “lead our sales strategy and execution to achieve our revenue targets,” shape the OPX growth strategy, and establish Carnegie as the premier provider of online program solutions in higher education. To do this, the VP must create the OPX sales plan, drive sales, meet goals and targets, and deliver growth through new clients and client‑expansion opportunities across Carnegie’s entire suite of services.

This work aligns closely with Carnegie’s strategic priorities. The company positions itself as a leader in higher education marketing and enrollment strategy and emphasizes human‑centered, data‑driven solutions. By spearheading integrated marketing and enrollment solutions for online programs, the VP advances this mission—ensuring that Carnegie’s OPX offerings evolve with market trends, deliver measurable results and reinforce the organization’s leadership position. The role also requires thought leadership, cross‑team collaboration and partnerships, which support Carnegie’s focus on innovation and authentic human connections.

    Q: Where does the role sit within the company’s structure? How will the person in this role engage with other units and leaders across the company?

A: The VP of online enrollment and integrated marketing solutions is Carnegie’s leader of integrated sales for OPX. The position sits within the company’s growth and revenue organization and is accountable for the sales plan, revenue forecasting and team performance. The description notes that the VP “owns the development of all sales pursuits related to OPX” and partners closely with the SVP of marketing and the chief growth officer to develop messaging, positioning and proposals. This indicates that the role reports to, or collaborates closely with, senior leadership on growth strategy and marketing alignment.

    The role is highly cross‑functional. It requires partnering with marketing and business development to support inbound and new business pursuits and providing training and support to sales representatives in those divisions. The VP must collaborate with leaders of all business units to share feedback and optimize the OPX solution for clients.

    Day to day, the person will work with colleagues in sales, account management, production, senior strategists, client success, executive sales and enrollment strategy. They will also work with growth team members to craft proposals and coordinate with the marketing leader on business development materials and events. Additionally, the VP manages OPX revenue forecasting and ensures visibility across all accountable parties. This matrixed engagement means the VP acts as a connector between sales, marketing, product and leadership, ensuring that OPX solutions are delivered seamlessly and that market feedback informs strategic decisions.

    Q: What would success look like in one year? Three years? Beyond?

    A: In the first 12 months, success would involve laying the groundwork for a high-performing OPX sales organization. The VP should build and execute a sales plan, recruit or train a team, and cultivate strong relationships with marketing, business development and other unit leaders. Key milestones would include securing new OPX clients and expanding revenue from existing accounts, delivering on initial sales goals, instituting accurate revenue forecasting and establishing Carnegie as a respected thought leader at conferences and webinars.

    Three years: By year three, the VP should have turned OPX into a mature, scalable business line. The sales plan would be continuously optimized based on market feedback and the team would be driving sustained revenue growth across Carnegie’s services. Market penetration should be evident through a diversified client base, with high renewal and upsell rates. The VP should have built a strong network of external relationships and should be contributing to product evolution by monitoring industry trends and competitor activity. Measurable outcomes might include year‑over‑year revenue growth outpacing the market, higher average contract values and expanded partnerships or acquisitions that enhance the OPX offering.

    Beyond (five-plus years): Over a longer horizon, success would mean that the OPX division is a significant growth engine for Carnegie and a well‑recognized market leader. The VP will have built a resilient, data‑driven sales organization capable of adapting to changes in the higher education landscape. They may spearhead new offerings or strategic acquisitions and could play a central role in broader company leadership. The division’s revenue contribution might warrant further expansion into related services or international markets, ensuring Carnegie remains at the forefront of online program marketing and enrollment strategy.

    Q: What kinds of future roles would someone who took this position be prepared for?

A: The VP of online enrollment and integrated marketing solutions oversees sales strategy, team leadership, revenue forecasting and cross‑functional collaboration. Given the required 10-plus years of experience in higher education enrollment and marketing for online programs, the role prepares its holder for broader executive positions. Potential future roles could include:

    • Chief growth officer or chief revenue officer, because the VP manages revenue planning, sales execution and cross‑unit coordination.
    • General manager or president of a business unit, given the experience in developing a business line, building teams and driving profitability.
• Chief marketing officer or chief commercial officer, since the position demands collaboration with marketing leadership and deep knowledge of enrollment strategy.
    • Consulting or strategic advisory roles in higher education marketing and enrollment strategy, leveraging expertise in market trends, client relationships and integrated solutions.
    • Entrepreneurial leadership roles within the higher ed technology and services space, capitalizing on the growth mindset, executive presence and strategic thinking emphasized in the qualifications.

    By leading a high‑growth, cross‑disciplinary sales organization, the VP will develop a skill set that translates to senior leadership roles not only within Carnegie but across the broader higher education services sector.
