  • The Real Cost of IT Inaction in Higher Ed


    Technology expectations in higher education have never been higher. Students expect seamless digital experiences, faculty rely on stable, integrated systems to teach and conduct research, and institutional leaders need real-time data to make informed decisions.

    Yet many colleges and universities remain stuck, held back by aging infrastructure, limited budgets, or the belief that maintaining the status quo is safer than change.

    From where I sit, that belief is one of the most expensive misconceptions in higher ed today.

    IT inaction isn’t neutral. Standing still doesn’t preserve resources; it quietly drains them. Over time, those costs compound in ways that are harder to see, harder to control, and far more disruptive than proactive modernization.

    The hidden costs of doing nothing

    When institutions delay IT investment, the consequences rarely show up as a single line item. Instead, they surface as inefficiencies spread across budgets, teams, and timelines.

    Legacy systems are a prime example. Redundant platforms often require duplicated effort, separate maintenance contracts, and manual reconciliation between systems that should be integrated.

    Hardware that’s past its lifecycle can lead to unexpected outages and emergency spending that exceeds planned budgets. Older systems also demand specialized support, which is increasingly difficult and expensive to find as vendors phase out end-of-life technology.

    What’s most costly, though, is time.

    IT teams spend countless hours keeping outdated systems afloat by troubleshooting avoidable issues, applying workarounds, and responding to preventable failures. That’s time not being spent on strategic initiatives that improve efficiency, student experience, or institutional resilience.

    I often describe it this way: Maintaining legacy systems is like pouring money into a leaky boat just to stay afloat, not to move forward.

    Security vulnerabilities and reputational risk

    When it comes to cybersecurity, the cost of inaction is especially serious.

    Legacy systems that lack consistent monitoring pose a heightened security risk. Outdated software, fragmented technology environments, and limited visibility create prime opportunities for cyberattacks — particularly for institutions that handle sensitive student, faculty, and financial data.

    Compliance becomes more difficult in these conditions. Meeting FERPA, HIPAA, and other regulatory requirements is far more complex when systems aren’t integrated or consistently managed. Non-compliance doesn’t just carry financial penalties. It can threaten accreditation and erode institutional trust.

    The fallout of a breach extends well beyond remediation costs. Reputational damage can deter prospective students, strain donor relationships, and take years to repair.

    Simply put, institutions don’t want to make headlines because of a cybersecurity lapse they could have prevented.


    Missed opportunities for strategic growth

    IT inaction doesn’t just introduce risk. It actively limits growth.

    Students move seamlessly across digital platforms in every part of their lives. When institutional systems don’t integrate, the student experience becomes fragmented, support slows down, faculty shoulder unnecessary administrative burdens, and leaders lose the data visibility needed to intervene early or plan strategically.

    I’ve seen institutions stuck on legacy SIS infrastructure that prevents modern integrations altogether. The result is manual reporting, delayed insights, and staff hours spent pulling data instead of using it.

    Outdated environments also restrict access to emerging technologies like AI, automation, and advanced analytics. These are tools that could drive efficiency, personalize engagement, and support enrollment and retention strategies. Without a scalable IT foundation, even well-intentioned growth initiatives increase cost and complexity instead of reducing them.

    IT staff burnout and talent drain

    The impact of chronic IT underinvestment is deeply human.

    Internal IT teams in under-resourced environments operate almost entirely in reactive mode. They’re constantly firefighting by responding to outages, security alerts, and system failures, all while knowing the underlying risks remain unresolved.

    That’s exhausting, and over time, it erodes morale.

    Talented IT professionals want to innovate. They want to build, improve, and contribute strategically. When their work is limited to keeping aging systems alive, frustration builds, and burnout follows. Eventually, institutions lose people they can’t easily replace.

    Recruitment becomes harder as well. Prospective hires can quickly identify an organization with no clear IT roadmap. They understand what that environment demands, and many choose to look elsewhere.

    This is where managed IT support can fundamentally change the equation.

    By shifting routine monitoring, maintenance, and after-hours support to a trusted partner, institutions reduce daily stressors on internal teams. Proactive management prevents crises before they escalate. Internal staff regain the capacity to focus on strategy, innovation, and meaningful institutional impact.

    Inaction is a choice (an expensive one)

    One of the biggest misconceptions I hear from higher ed leaders is that modernizing IT is too expensive, too complex, or too disruptive.

    The reality is that institutions are already paying for IT. They’re just paying in less visible and far less controlled ways: through staff turnover, downtime and security exposure, and through leadership time spent managing exceptions instead of advancing strategy.

    Modern IT investment isn’t about chasing the latest technology. It’s about stabilizing operations, reducing risk, and making costs predictable. It’s a decision about institutional capacity, long-term resilience, and the people who make both possible.

    If I had 60 seconds with a higher ed president or CFO, I’d say this: The decision isn’t whether you’re spending on IT. That spend is already happening. The real question is whether you want it to be controlled and strategic, or hidden and reactive.

    Moving forward with confidence

    Higher education is navigating unprecedented change. The institutions that succeed won’t be the ones that avoid investment. They’ll be the ones that build strong, flexible foundations capable of supporting their mission for the long term.

    If your institution is feeling the strain of outdated systems or reactive IT, now is the time to act. Collegis partners with colleges and universities to stabilize operations, reduce risk, and build IT environments designed for what’s next through our Managed IT Services for higher education.



  • Now the struggle is no longer real, are students becoming stupid?


    I used to take copious notes.

    In meetings, encounters, on Zoom calls, even when travelling – including, in a former life, when behind the wheel – I have pages of the stuff in boxes up in the attic. In fact, not having a uniball pen and an A5-sized ringbound pad to hand would often cause me significant anxiety.

    Over time, I’d rationalised why. Writing notes, they said, requires selective attention. You can’t transcribe everything in real time, so I was forced to decide what matters, paraphrase it, and organise it.

    That process increases semantic processing, which strengthens memory traces even if the notes are never revisited. So the benefit came from the cognitive work of filtering, compressing, and structuring information, not from the artefact produced.

    I’d doodle too – something something undiagnosed ADHD something something. But externalising fragments of information, I’d tell myself, whether as words, symbols, arrows, or shapes, reduced my cognitive load.

    It freed my capacity to process relationships, implications, and meaning while the discussion continued. Even when my notes were messy or incomplete, the act of offloading stabilised my attention and comprehension in the moment.

    But then the other day, when someone in the team explained how they used their electronic device to take notes – but often never looked at them again – it dawned on me that I don’t, any more.

    I tried. I own any number of e-pens and tablets and gadgets that allow me to. But I never clicked with any of them, and many now join the ringbound scrawls that I somehow can’t let go of in the box in the corner.

    From time to time, I’ll audio record the encounters I’m in and use a large language model (LLM) to summarise actions, or recall detail. Sometimes, I’ll flit between noting things I need to do on a task manager app, or on a Google doc, or even on a real life post-it note attached to the monitor.

    But I no longer take notes. I’m not that person anymore. Am I becoming stupid?

    Guinea pigs in Budapest

    There’s a cracking story from last autumn involving a group of students and researchers at Corvinus University in Budapest. In the early days of generative AI, the concerns were mainly about the way in which the tools could be used to produce things – and in a culture where continuous assessment relies on an asynchronously graded digital asset as a symbol of a student’s learning, the instant and obvious problem was whether “they produced it”.

    But in Hungary, academics had noticed that polling and focus groups had started to surface a deeper reliance on AI – not just to write up the report or complete the essay, but all the other bits too – the research, the reading, the synthesis, the exploration that the write-up was designed, on one proxy level or another, to demonstrate.

    They wondered whether what was starting to look like reliance was affecting students’ motivation, their genuine understanding of the material, and the extent to which it was substituting for the process of knowledge acquisition. To interrogate the impact on learning outcomes, they created an experiment. In an operations research module, students were randomly placed into two groups – one permitted to use AI tools during both teaching episodes and examinations, and the other not.

    Anticipating objections, and to make it fair, they’d even ensured that a compensation mechanism would kick in: students in the lower-performing group would receive grading adjustments until average performance across the two groups was equalised. But despite the academics’ best efforts to explain the design and create a level-ish playing field for all participants, students – many of whom had an eye on the relationship between exam results and access to scholarships – were furious. One told the news portal Telex:

    I really don’t think it’s fair, it’s quite absurd that some people can use AI in the exam and others can’t, and the results are on the same scale. This way, they don’t measure knowledge, but who is in which group – and I think this is fucking unfair.

    And even though the experiment had been approved by every relevant bit of the university’s governance – the ethics board, the head of department, the programme director, and the Student Council – they were able to get their concerns first into the media, then the Office of the Commissioner for Fundamental Rights, and eventually to the minister for Culture and Innovation.

    Their instinct was that student reaction was more revealing than the data – AI tools have already become so embedded in how students work that removing them felt not like a fair test, but like a punishment. The experiment was duly halted – much to the frustration of associate professor Balázs Sziklai:

    …it would be important for the decision-making and legislative bodies to take an encouraging approach to research into the role of AI in education. Student experiments are essential for us to understand exactly what effects are taking place.

    Not all was lost. Even though the control group also ended up being permitted to use AI – removing the basis for a clean comparison – according to Sziklai:

    …it can be stated with complete certainty that the students did not master any part of the curriculum.

    In his view, the knowledge that they were working with a safety net had killed all motivation, self-confidence, and curiosity. That’s partly because students hadn’t just coasted through preparation – they had stopped paying any attention to the answers they were giving in the exam itself:

    They uncritically copied the AI’s answers, even if they were clearly stupid. If the language model suggested two separate solutions, then they would definitely copy both, saying one would definitely be good.

    In the first twenty minutes of the exam – set up as easy true-false questions, answered offline, with no devices – the average score was 53 per cent. In the second part, where students tackled tasks similar to those practised in class but with AI tools permitted, the average jumped to 75 per cent.

    Sziklai and his team are continuing the research – running focus groups to understand how students experienced the experiment and what they think about AI’s role in their education. And Corvinus issued the sort of statement that universities issue – committed to examining challenges and opportunities, supporting research that provides equal opportunities, shaping examination conditions in a modern and ethical way, and so on.

    But was the conclusion that Sziklai drew the slam dunk that he thought it was? Were his students becoming stupid?

    The right kind of hard

    Over in the US, researchers working with kids in schools had been asking similar questions. In one study, high school maths students were randomly split into groups – one given access to ChatGPT during practice sessions, one given a version of ChatGPT that had been prompted to act as a tutor and refuse to give direct answers, and a control group with no AI access at all.

    The pattern in the first group would have been familiar to Sziklai. Students used the tool as a crutch – performance improved in the short term, but when it was taken away, they performed worse than those who’d never had access at all. They had worse long-term retention, and worse independent problem-solving. The tool had helped them get through the work without them having to do the work.

    But in the second group – the one given the tutor version, pushed to recall and problem-solve rather than handed answers – something else happened. That group saw nearly double the short-term gains, without the long-term drop-off. The researchers had only tested students after a single practice session with a fairly simple tutoring prompt – raising the question of what sustained exposure to a better-designed tool might achieve.
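
    For illustration only: the study’s actual tutoring prompt isn’t public, and nothing below is drawn from it. But the general technique, a system prompt that tells the model to withhold answers and coach instead, can be sketched in a few lines of Python, assuming the OpenAI SDK; the prompt wording and model name here are hypothetical.

        # A minimal sketch of a "tutor mode" chat wrapper, assuming the OpenAI
        # Python SDK (pip install openai). The prompt wording and model name are
        # illustrative guesses, not the study's actual configuration.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        TUTOR_PROMPT = (
            "You are a maths tutor. Never state the final answer. "
            "Ask one guiding question at a time, prompt the student to recall "
            "the relevant rule, and only confirm or challenge their reasoning."
        )

        def tutor_reply(student_message: str) -> str:
            """Return a coaching response rather than a direct solution."""
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # any chat model would do for the sketch
                messages=[
                    {"role": "system", "content": TUTOR_PROMPT},
                    {"role": "user", "content": student_message},
                ],
            )
            return response.choices[0].message.content

        print(tutor_reply("Solve 3x + 5 = 20 for me."))

    The intervention in the study was of roughly this shape: same underlying model, different instructions, very different learning outcomes.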

    A report published by US education nonprofit Bellwether in June 2025 uses the study – and a growing body of others like it – to argue that the question “is AI good or bad for learning?” is the wrong one. The right question, they suggest, is this: when does ease enable deeper learning, and when is ease a shortcut with a hidden cost?

    Their answer draws on decades of cognitive science around what researchers variously call “productive struggle,” “desirable difficulty,” or the “zone of proximal development.” The core idea is that effort only enhances learning when it sits in the right zone – hard enough to require genuine cognitive work, but not so hard that the student disengages.

    Get the calibration right and you trigger a virtuous cycle – memory encoding, sustained attention, intrinsic motivation, and the metacognitive skills that allow students to monitor their own understanding. Get it wrong – in either direction – and the learning doesn’t happen.

    The framework casts the Corvinus results in a different light. The problem wasn’t just that students had access to AI – it was that the conditions under which they were using it gave them no reason to struggle at all. Why wrestle with a problem when the tool will hand you an answer and the grading system will catch you if you’re wrong? The safety net didn’t just reduce difficulty – it removed the relationship between effort and outcome entirely.

    In other words, what Sziklai observed may have been less about AI destroying the capacity to learn than about it destroying the motivation to bother – which is a different, if related, problem. And the authors argue that this distinction runs through everything – not a technology problem, but a design problem.

    The same tool that turns one student into a passive copier can turn another into a more curious, more persistent thinker – depending on how it’s built, how it’s introduced, and what the student is being asked to do with it. And if students are responding rationally to a badly designed assessment, then the question isn’t whether AI makes people stupid. It’s whether it renders their teachers stupid.

    Better products, worse people

    When students offload not just the task, but the thinking about the task – the planning, the monitoring, the self-correction, the internal process of figuring out what you know, what you don’t, and what to do about the gap – that’s what everyone’s starting to call “metacognitive laziness”.

    In one study, undergraduates who used AI to research a topic experienced lower cognitive load than those who used a traditional search engine. For some, that sounds like a good thing – but the quality of their arguments was worse. The researchers suspected that because the process felt easier, students simply hadn’t engaged in the deeper processing that the harder route had forced upon them. The effort wasn’t a bug in the old method. It was the mechanism.

    Another study on how university students interacted with an AI tool found that almost half of conversations were “direct” – students were looking for answers with minimal engagement, no wrestling, no iteration, no back-and-forth, just “give me the thing”.

    When second-language learners given ChatGPT support for a writing task were assessed, the ChatGPT group produced better essays – but showed no significant difference in actual knowledge gain or transfer. The tool had improved the product without improving the person.

    When students in a brainstorming study were given AI support, they rated the task as requiring less effort. They also rated it as significantly less enjoyable and less valuable – even when independent raters judged the outputs to be better. In another study, stories written with AI-generated ideas were rated as more creative, better written, and more enjoyable to read – but were also more similar to each other. Individual quality went up – but collective novelty went down.

    The work got easier, and the product got better. But something – the satisfaction, the meaning, the sense of ownership – got worse. If the process is where the learning lives, and if the process is also where the meaning lives, then optimising for output is the problem. It might be hollowing out the thing that made the activity worthwhile in the first place.

    Of course, much of what education systems measure, and much of what employers reward, is output. The essay, the exam score, the report, the “deliverable”. The Bellwether authors acknowledge the tension – they argue that educational goals need to shift toward meaning-making, critical discernment, and the ability to sustain effort amid complexity, and that “as AI takes on more of the routine, the passable bar for what humans contribute will rise.”

    But they stop short of the harder claim. If the tool is always going to be there – if it’s as permanent and ambient as the search engine already is – then the expectation that students will carry large volumes of subject knowledge around in their heads starts to look less like a standard and more like an anachronism.

    The 53 per cent on the Corvinus paper test is a scandal if you think people should be able to answer those questions unaided. It’s an inevitability if you think they’ll never have to. The skill that matters in that world isn’t knowing the answer – it’s being able to tell when the tool is giving you something stupid.

    But is higher education really ready to rebuild, and work, around the admission that knowing things might matter less than knowing what to do when you don’t?

    Cognitive debt

    Over at MIT, researchers had been asking a version of the same question – but with electrodes. They recruited 54 participants, split them into three groups, and asked them to write essays. One group used an LLM, one used a search engine, and one used nothing at all. Each completed three sessions under the same condition – and then, in a fourth session, some were switched. LLM users were told to write unaided, and unaided writers were given the tool.

    The results were astonishing. Brain-only writers showed the strongest and most distributed “neural connectivity” – the broadest, most active networks. Search engine users showed moderate engagement. But LLM users showed the weakest of all. Cognitive activity didn’t just correlate with tool use; it scaled down in direct proportion to it. The more the tool did, the less the brain did.

    But it was the switchover that should give us pause. When LLM users were asked to write without the tool, they didn’t just find it harder. Their brains showed reduced connectivity in the regions associated with sustained attention and memory – not struggling to do the work, but under-engaged, as if the neural architecture for doing it had somehow downgraded. Four months of LLM-assisted writing hadn’t just let them avoid the cognitive effort – it appeared to have changed what their brains were ready to do.

    The researchers call it “cognitive debt” – a cost that accumulates invisibly and comes due later. And there was one more finding that connects back to the brainstorming studies. LLM users reported the lowest sense of ownership over their essays, and when asked to recall what they’d written, struggled to accurately quote their own work. They hadn’t just outsourced the effort – they’d outsourced the experience of having done it. The full study is available on arXiv.

    Working with wizards

    Ethan Mollick, the University of Pennsylvania professor, has been tracking the shift with increasing unease. In a September 2025 essay, he argued that the relationship between humans and AI is moving from what he called “co-intelligence” – where you collaborate with the tool, check its work, guide it – to something more like working with a wizard. You make a vague request, something impressive comes back, but you have no idea how it was made, and limited ability to verify whether it’s right.

    His question for educators was blunt:

    How do you train someone to verify work in fields they haven’t mastered, when the AI itself prevents them from developing mastery?

    It’s an almost perfect paradox. The skill you most need – judgment about whether the output is any good – is the skill that requires the domain knowledge that the tool has just made it unnecessary to acquire. Mollick’s answer, such as it is, was that we need to become “connoisseurs of output rather than process” – developing instincts, through extensive use, for when the tool succeeds and when it fails. It may be true, but it isn’t yet a curriculum.

    Meanwhile, the bleaker readings of what’s left when you strip out everything that AI can do are piling up. Tyler Cowen’s provocative suggestion – that the university will persist mainly as “a dating service, a way of leaving the house, and a chance to party and go see some football games” – is usually quoted as a punchline.

    But it deserves more serious attention than it gets. If the knowledge-transmission function is dead and the credentialling function is weakening, then what universities actually provide is structure, socialisation, proximity to peers, and a reason to leave the house for three years during a critical developmental window. That might sound like a downgrade. It might also be an honest description of something genuinely important – the relational and developmental architecture that no tool, however capable, can replicate. The problem is that “we’re a really expensive way of helping young people grow up” is a difficult line to put in a prospectus.

    A piece in Frontiers in Education tried to frame it more ambitiously – arguing that the enduring value of higher education lies in “epistemic judgment, belonging, and wonder.” That’s a lovely sentence. It’s also aspirational rather than operational. Nobody has a validated pedagogy for “wonder”, and there is no module description for belonging. They are the words universities reach for when they sense that the old justifications are collapsing, but haven’t yet built the new ones.

    Howard Gardner – the multiple intelligences theorist – went further at a Harvard forum last autumn, suggesting that by 2050, most cognitive aspects of mind will be done so well by machines that “whether we do them as humans will be optional.” What survives, in his view, is the respectful mind and the ethical mind – how we treat other people, and how we handle difficult questions as citizens and professionals.

    His model is radical – a few years of basics, then teacher-coaches that guide students toward activities that challenge their thinking and expose them to ideas. It’s compelling, but it’s also extremely difficult to fund, politically almost impossible to sell, and structurally incompatible with everything from student loans to league tables to quality assurance frameworks.

    What should they become?

    What almost nobody is connecting is the productive struggle research – which is increasingly robust – to the purpose question. The cognitive science tells us, with growing confidence, that effort in the right zone builds the architecture for memory, attention, motivation, and metacognition. But architecture for what? If it’s architecture for carrying knowledge around in your head, and carrying knowledge around in your head is becoming a commodity, then we’re building capacity for something whose value is declining. The literature knows this is a problem – but doesn’t solve it.

    The conservative position – articulated by people like Robert Pondiscio at the American Enterprise Institute – is to hold the line. “Developing judgment is the entire point of education,” he argues, and AI “takes judgment out of the loop.” Where previous technologies automated low-level skills, AI automates higher-order thinking – “the very mental operations that define a well-educated person.”

    You can’t just move up Bloom’s hierarchy when the tool is already sitting at the top of it. His solution is basically to resist. Don’t let the tool replace the struggle – education is transformation through effort, full stop. It’s coherent and comforting, but it’s also a defence of a model that is already losing – because the tool is here, and it isn’t going away, and telling students not to use it has roughly the same success rate as telling them not to use their phones.

    Others acknowledge the tension without resolving it. Jason Gulya, writing in the Chronicle of Higher Education’s AI forum, argues that “we’ll need to chip away at the transactional model of education and put learning – with all of the productive struggle and inefficiency it often involves – at the centre.” Which is fine, as far as it goes. But “put learning at the centre” is an easy sentence to write in an education policy document. It has not, historically, been enough.

    So I just ask again – what does productive struggle look like when the purpose of higher education is no longer knowledge transmission? Not struggle in the service of memorising content that a machine can retrieve instantly, and not struggle as a proxy for discipline or grit or moral seriousness. But struggle as the deliberate, designed, scaffolded process by which people learn to notice what they don’t know, to interrogate what they’re told, to hold complexity without reaching for a shortcut, and to recognise when a confident, fluent answer is wrong – especially when it’s being delivered by something that never hesitates and never says “I’m not sure.”

    If we started there – if that were the organising question, rather than “how do we stop them using ChatGPT on the essay” – we might get somewhere. The maths tutor study already showed it’s possible. The tool that refused to give the answer and forced the student to think produced nearly double the learning gains without the long-term drop-off. The productive struggle didn’t have to disappear – but it did have to be deliberately reintroduced – by the tool itself, which feels like a strange inversion. The thing that makes you lazy can also be the thing that refuses to let you be lazy, if someone decides to build it that way.

    But that requires knowing what you’re building it for. It requires a much deeper focus on teaching – and an acceptance that doing it will involve teachers building or adapting the tools they fear.

    We’re doing our bit to try to find out. In all that we’ve read, few seem to be asking students what they think they learned from a given AI interaction and taking the answers seriously as data about a potentially new form of learning, rather than as evidence of success or failure against pre-AI benchmarks. In the run-up to this year’s Secret Life of Students, we’re looking at that – through a survey and focus groups – and if you’re willing to put the survey out to your students, or can nominate course reps to get involved, we’d love to hear from you.

    More broadly, it all means answering a question that higher education has been avoiding since long before AI arrived – what is this actually supposed to do to people? Not what they should know, but what they should become.

    And whether, on balance, it’s OK to throw that box of old notepads out now. It’ll make me a better person.


  • The pipeline for women education leaders is broken. They need real systems of support and sponsorship


    by Julia Rafal-Baer, The Hechinger Report
    February 3, 2026

    In matters both big and small, women in education leadership are treated, spoken to and viewed differently from their male colleagues. It impacts everything from their assignments and salaries to their promotions.

    The career moves available to aspiring women leaders often set them up to lead in the toughest conditions, in schools and districts with the highest stakes and the least margin for error. When states and districts fail to confront the reality of this glass cliff, they constrain the advancement of some of their most capable current and would-be leaders.

    New survey data from the nonprofit I founded, Women Leading Ed, illuminates the experiences and perspectives of women who confront the bias that creates and reinforces both the glass cliff and the glass ceiling. And research on women in education leadership points to the same conclusion: The gender gap will persist unless states and districts make systemic changes to how leaders are recruited, trained, supported and advanced through the career ladder. 


    But the glass cliff doesn’t have to be the end of the road for women in education leadership. If more leaders — women and, critically, men — take even a few steps forward, we can build a bridge to a future in which every leader can reach their full potential.  

    Here are four ways: 

    Sponsorship and Coaching. Women in education leadership need real systems of support, with a shift from mentoring to sponsorship. This calls for both women and men to take an active role in advancing up-and-coming leaders, at all stages, who can benefit from on-the-job coaching.  

    Sponsorship and coaching relationships can be game changers, according to data from our 2025 Women Leading Ed insight survey. What’s more, they provide excellent opportunities for men to become allies in advancing gender equality.

    Dr. Kyla Johnson-Trammell, the superintendent emeritus of schools in Oakland, recently recalled having a male coach when she started out who served as both coach and sponsor, helping her connect to other superintendents.

    “This man coached me for two years every Friday,” Johnson-Trammell recounted. “He helped me and pushed me to be the leader I wanted to be as a Black woman. … His sponsorship helped open up doors to accessing people, it helped me to connect to other superintendents.” 

    Promotion and Hiring. If we want different results, we have to change the systems of evaluation, promotion and hiring. That means recruiting beyond the usual networks, building hiring committees with varied viewpoints and training decision-makers to use structured processes and consistent criteria.  

    One example: Research published in the Harvard Business Review found that when a finalist pool included two women candidates, a woman was 79 times more likely to be hired than when the pool included just one.

    More broadly, the existing education leadership pipeline continues to disadvantage women, data from the U.S. Department of Education shows. The 2023-24 Women Leading Ed survey results demonstrated that women are predominantly funneled toward elementary school leadership and academic pathways that keep their trajectory below the top job in the district or state.  

    Men, however, are elevated to high school principalships or district positions that include fiscal or operational roles, precisely the kind of experiences that are prioritized during superintendent search processes.  

    Our 2023-24 survey results underscored this divergence. Of respondents who had been principals, fewer than 20 percent had served in a high school. Overall, just over one in 20 respondents had held a finance or operations role.  

    One respondent, a senior leader in a large urban school district, captured the bias of the skewed leadership pipeline succinctly: “I was told I’m too petite to be anything but an elementary principal,” she wrote.  

    Supports and Benefits. District and state leaders can transform who advances and leads their systems by providing systems of support for women in leadership and fostering fairer hiring and promotion decisions. 

    Family and well-being supports that sustain all leaders are essential to advance more women leaders. These include parental leave, child care, elder care time and scheduling flexibility.  

    Rising to a top district leadership position comes with costs for women that men typically do not absorb. Fully 95 percent of women superintendents believe that they must make professional sacrifices that their male colleagues do not make. 

    Some of our survey respondents reported working long hours while neglecting family, under pressure to meet unrealistic expectations at the office. Others pointed out the additional responsibilities that women often carry in their personal lives, including the care of children or parents, attending school events and family members’ doctor appointments.


    Added pressure at work and greater responsibilities at home lead to burnout: Roughly six out of 10 respondents to our 2023-24 survey said they have thought about leaving their current position due to the stress and strain; three-quarters said they think about leaving daily, weekly or monthly.  

    Providing high-quality benefits can be a key lever for addressing these underlying gender inequalities. So can offering flexible work schedules, hybrid work arrangements and remote work options that provide elasticity in where and when work gets done.  

    Public Goals. Finally, systems, not just individuals, must be accountable. Setting public goals for increasing the number of highly qualified women serving on boards and in senior management is a start. Real accountability means tracking outcomes.  

    This should also include ensuring equal pay for equal work. About half the superintendents we surveyed in 2023-24 said they had conversations or negotiations about their salary in which they felt their gender influenced the outcome. 

    Solutions: pay-equity audits, increased transparency around compensation and the inclusion of salary ranges in job postings. These can be powerful steps toward achieving pay equality.  

    Nearly 900 bipartisan men and women leaders have signed an open letter calling for the adoption of these strategies.  

    This is a movement that is both growing and vital, as research makes clear that women continue to face a different set of rules than men in leadership. Too often, states and districts respond to the glass ceiling and glass cliff with window dressing rather than the actual reform needed to change the status quo.  

    Julia Rafal-Baer is the founder and CEO of Women Leading Ed, a national network for women in education leadership. 

    Contact the opinion editor at [email protected]. 

    This story about women education leaders was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

    This article first appeared on The Hechinger Report (https://hechingerreport.org) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).



  • What grade inflation panics miss about the real value of higher education


    Cloaks swish. Cameras flash. It’s graduation day, the culmination of years of effort. It celebrates learning journeys whose outcomes have nurtured the realisation of talents as varied as our students themselves.

    It is a triumphant moment. It is also the moment in which the sector reveals the outcome of its own Magic Sorting Hat, whose sorcery is to collapse all this richness into a singular measure. As students move across the stage to grasp the sweaty palm of the VC or a visiting dignitary, they are anointed.

    You are a First. You are a Third. You are a 2:1.

    There is something absurd about this, that such diverse, hard-won successes can be reduced to so little. That absurdity invites a bit of playfulness. So, indulge me in a couple of thought experiments. They are fun, but I hope they reveal something more serious about the way we think about standards, and how often that crowds out a conversation about value.

    Thought experiment one: What if classifications are more noise than signal?

    Let us begin with something obvious. Like any set of grades, classifications exist to signal a hierarchy. They are supposed to say something trustworthy about the distribution of talent – where a First signals the pinnacle of academic mastery. What “mastery” is – and how relevant that signal is beyond the academy – is, I think, far more ambiguous than we tend to admit.

    “Mastery” isn’t the upper tier of talent. Our quality frameworks do not, by principle, norm reference, and for good reasons that are well-worn in assessment debates: shaving off a top slice of talent would exclude cohorts of students who might, in a less competitive year, have made the cut. So, then, we criterion reference; we classify against the extent to which programme outcomes have been met to a high standard. On that logic, we ought to be delighted when more and more students meet those standards. Yet when they do, we shift uneasily and brace for the assaultive chorus of “dumbing down.”

    The truth of the First feels even less solid when set against the range of disciplinary and transdisciplinary capabilities we try to pack into that single measure, and the range of contexts that consume it at face value. Those audiences use it to rank and sort for their own purposes – to make initial cuts of cohorts of prospective employees so that shortlists stay manageable, for instance – with troubling assumptive generalisation. The classification is paradoxically a very thin measure, and one that is overloaded with meaning.

    It is worth asking how we ended up trusting so much to a device designed for a quite different era. The honours classification system has nineteenth-century roots, but the four-band structure that still dominates UK higher education really bedded in over the last century. The version we live with now is an artefact of an industrial-era university system; built in a world that imagined talent as a fixed trait and universities as institutions that sorted a small elite into neat categories for professional roles. It made sense for a smaller, more homogeneous system, but sits awkwardly against the complex and interdisciplinary world students now graduate into.

    Today it remains a system that works a bit like a child’s play-dough machine. Feed in anything you like – bright colours, different shapes, unique textures – and the mechanism will always force them into the same homogeneous brown sausage. In the same way, the classification system takes something rich and individual and compresses it into something narrow and uniform. That compression has consequences.

    The first consequence is that the system compresses in all sorts of social advantages that have little to do with academic mastery. Access to cultural capital, confidence shaped by schooling, freedom from financial precarity, familiarity with the tacit games of assessment. These things make it easier for some students to convert their social position into academic performance. Despite the sector’s valiant reach for equity, the boundary between a 2:1 and a 2:2 can still reflect background as much as brilliance, yet the classification treats this blend of advantage as evidence of individual superiority.

    The second consequence is that the system squeezes out gains that really matter, but that are not formally sanctioned within our quality frameworks. There is value in what students learn in that space a university punctuates, well beyond curriculum learning outcomes. They navigate difficult group dynamics. They lead societies, manage budgets and broker solutions under pressure. They balance study with work or caring responsibilities and develop resilience, judgement, confidence, and perspicacity in ways that marking criteria cannot capture. For many students, these experiences are the heart of their learning gains. Yet once the classification is issued, that can disappear.

    It is easy to be blithe about these kinds of gains, to treat them as nice but incidental and not the serious business of rigorous academic pursuit. Yet we know this extra-curricular experience can have a significant impact on student success and graduate futures, and it is relevant to those who consume the classification. For many employers, the distinctive value that graduates offer over non-graduates is rarely discipline specific, and a substantial proportion of graduates progress into careers only tangentially aligned to their subjects. We still sell the Broader Benefits of Higher Education™, but our endpoint signalling system is blind to all of this.

    The moral panic about grade inflation then catches us in a trap. It draws us into a game of proving the hierarchy is intact and dependable, sapping the energy to attend to whether we are actually evidencing the value of what has been learned.

    Thought experiment two: What if we gave everyone a First?

    Critics love to accuse universities of handing out Firsts to everyone. So, what if we did? Some commentators would probably implode in an apoplectic frenzy, and that would be fun to watch. But the demand for a signal would not disappear. Employers and postgraduate providers would still want some way to differentiate outcomes. They would resent losing a simple shorthand, even though they have spent years complaining about its veracity. Deprived of the simplicity of the hierarchy, we would all be forced into a more mature conversation about what students can do.

    We could meet that conversation with confidence. We could embrace and celebrate the complexity of learning gain. We could shift to focus on surfacing capability rather than distilling it. Doing so would mean thinking carefully about how to make complexity navigable for external audiences, without relying on a single ranking. If learning gains were visible and tied directly to achievement, rather than filtered through an abstract grading function, the signal becomes more varied, more human, and more honest.

    Such an approach would illuminate the nuance and complexity of talent. It would connect achievement to the equally complex needs of a modern world far better than a classification ever could. It would also change how students relate to their studies. It would free them from the gravitational pull of a grade boundary and the reductive brutality that compresses all their value to a normative measure. They could invest their attention in expansive and divergent growth, in developing their own distinctive combinations of talents. It would position us, as educators, more clearly in the enabling-facilitator space and less in the adversarial-arbiter space. That would bring us closer to the kind of relationship with learners most of us thought we were signing up for. And it would just be …nicer.

    Without classifications the proxy is gone, and universities then hold a responsibility to ensure that students can show their learning gains directly, in ways that are clear, meaningful, and relevant.

    A future beyond classifications

    The sector is capable of imagination on this question – and in the mid-2000s it really did. The Burgess Review was our last serious attempt to rethink classifications. It was also the moment at which our courage and our imagination fell out of alignment.

    The Burgess conclusion was blunt. The classification system was not fit for purpose. The proposed alternative was the Higher Education Achievement Report (HEAR), designed to give a much fuller account of a student’s learning. HEAR was meant to capture not only modules and marks, but the gains in skills, knowledge, competence and confidence that arise from a wider range of catalysts: taught courses, voluntary work, caring responsibilities, leadership in clubs and societies, placements, projects and other contributions across university life. It would show the texture of what students had done and the value they could offer, rather than a single number on a certificate.

    Across Europe, colleagues were (and are) pursuing similar ambitions. Across Bologna-aligned countries, universities have been developing transcript systems that are richer, more contextual and more personalised. They have experimented with digital supplements, unified competence frameworks, micro-credentials and detailed records of project work. The mission is less about ranking learners and more about describing learning. At times, their models make our narrow transcript look a little embarrassing.

    HEAR sat in the same family of ideas, but the bridge it offered was never fully crossed. The sector stepped back; HEAR survived as an improved transcript, but the ambition behind it did not. And fundamentally, the classification remained at the centre as the core value-signal that overshadowed everything else.

    Since then, the sector has spent roughly two decades tightening algorithms, strengthening externality and refining calibration. Important work, but all aimed at stabilising the classification system rather than asking what it is for – or if something else could do the job better.

    In parallel, we have been playing a kind of defensive tennis, batting back an onslaught of accusations of grade inflation from newspapers and commentators that bleed into popular culture and a particular flavour of politics. Those anxieties now echo in the regulatory system, most recently in the Office for Students’ focus on variation in the way institutions calculate degrees. Each time we rush to prove that the machinery is sound – to defend the system rather than question it – we bolster something fundamentally flawed.

    Rather than obsessing over how finely we can calibrate a hierarchy, a more productive question is what kind of signal a mass, diverse system really needs, and what kinds of value we want to evidence. Two growing pressures make that question harder to duck.

    One is the changing conversation about the so-called graduate premium. For years, policymakers and prospectuses have leaned on an article of faith: do a degree, secure a better job.

    Putting aside the problematics of “better,” and the variations across the sector, this has roughly held true. A degree has long been a free pass through the first gates of a wide range of professions. But the earnings gap between graduates and non-graduates has narrowed, and employers are more openly questioning whether the lack of a university degree should preclude candidates from their roles. In this context, we need to get better at demonstrating graduate value, not just presuming it.

    The other pressure is technological. In a near future where AI tools are routine in almost every form of knowledge work, outputs on their own will tell us less about who can do what. The central question will not be whether students have avoided AI, but whether they can use it in the service of their own judgement, originality and values. When almost anyone can generate tidy text or polished slides with the same tools, the difference that graduates make lies in qualities that are harder to see in a single grade.

    If the old proxy is wobbling from both sides, we need a different way of showing value in practice. That work has at least three parts: how we assess, what students leave with, and how we help them make sense of it.

    How we assess

    Authentic assessment offers one answer: assessment that exercises capability in contexts and performances that translate beyond the academy. But the sector rarely unlocks its full potential. Too often, the medium changes while the logic remains the same. An essay becomes a presentation, a report becomes a podcast, but the grade still does the heavy lifting. Underneath, the dominant logic tends to be one of correspondence. Students are rewarded for replicating a sanctioned knowledge system, rather than for evidencing the distinctive value they can create.

    The problem is not that colleagues have failed to read the definitions. Most versions of authentic assessment already talk about real-world tasks, audiences and stakes. The difficulty is that, when we try to put those ideas into practice, we often pull our punches. Tasks may begin with live problems, external partners or community briefs, but as they move through programme boards and benchmarking they get domesticated into safer, tidier versions that are easier to mark against familiar criteria. We worry about consistency, comparability, grade distributions. Anxieties about loosening our grip on standards quietly win out over the opportunity to evidence value.

    When we resist that domestication, authentic tasks can generate artefacts that stand as evidence of what students can actually do. We don’t need the proxy of a grade to evidence value; it stands for itself. Crucially, the value they surface is always contextual. It is less about ticking off a fixed list of behaviours against a normative framework, and more about how students make their knowledge, talents and capacities useful in defined and variable settings. The interesting work happens at the interface between learner and context, not in the delivery of a perfectly standardised product. Grades don’t make sense here. Even rubrics don’t.

    What students leave with

    If we chose to take evidencing learning gains seriously, we could design a system in which students leave with a collection of artefacts that capture their talents in authentic and varied ways, and that show how those talents play out in different contexts. These artefacts can show depth, judgement and collaboration, as well as growth over time. What is lost is the “rigour” and sanction of an expert judgement to confirm those capacities. But perhaps here, too, we could be more creative.

    One way I can imagine this is through an institutional micro-credential architecture that articulates competences, rather than locking them inside individual modules. Students would draw on whatever learning they have done, in the curriculum, around it and beyond the university, to make a claim against a specific micro-credential built around a small number of competency statements. The assessment then focuses on whether the evidence they offer really demonstrates those competencies.

    Used well, that kind of system could pull together disciplinary work, placements and roles beyond the curriculum into a coherent profile. For those of us who have dabbled in the degree apprenticeship space, it’s like the ultimate end-point assessment, with each student forging a completely individualised profile that draws in disciplinary capabilities alongside adjunct and transdisciplinary assets.
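
    To make the claims-based idea concrete, here is a purely hypothetical sketch of such a record as a small data model, written in Python. The class names, fields and competency statements are invented for illustration and drawn from no real framework.

        # A hypothetical sketch of a claims-based micro-credential record.
        # Every name and field below is invented for illustration.
        from dataclasses import dataclass, field

        @dataclass
        class MicroCredential:
            """A credential defined by a small set of competency statements."""
            title: str
            competencies: list[str]

        @dataclass
        class EvidenceClaim:
            """A student's claim against one credential, backed by artefacts
            drawn from curricular, co-curricular or external learning."""
            credential: MicroCredential
            artefacts: dict[str, str] = field(default_factory=dict)  # competency -> evidence

            def unmet(self) -> list[str]:
                # Assessment then asks: is each statement actually evidenced?
                return [c for c in self.credential.competencies
                        if c not in self.artefacts]

        negotiation = MicroCredential(
            title="Negotiation and brokering",
            competencies=["frames a shared problem", "manages conflicting interests"],
        )
        claim = EvidenceClaim(negotiation,
                              {"frames a shared problem": "society budget case study"})
        print(claim.unmet())  # -> ['manages conflicting interests']

    The point of the sketch is only that the unit of assessment becomes the evidenced claim, not the module mark.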

    For that to be more than an internal hobby, it needs to rest on a shared language. The development of national skills classification frameworks in the UK might be providing that for us. It is intended to give us a common, granular vocabulary that spans sectors and occupations, and that universities could use as a reference point when they describe what their graduates can do.

    The trouble is that I doubt this kind of skills-map-as-transcript can ever really flourish if it must sit in the shadow of a single classification. That was part of HEAR’s problem. It survived as a supplement while the degree class kept doing the signalling. If we are serious about value, we may eventually need to let go of the single upper-case proxy altogether. Every student would leave not with a solitary number, but with a skills profile that is recognisably linked to their discipline and shaped by everything else they have learned and contributed in the years they spent with us.

    How students make sense of it

    Without support to make sense of their evidence, richness risks becoming noise of a different kind. This is one reason classifications remain attractive. They collapse complexity into simplicity. They offer a single judgement, even if that judgement obscures more than it reveals.

    Students need help to unify their evidence into a coherent narrative. It is tempting to see that as the business of careers and employability services alone, but that would be a mistake. This is a whole-institution task, embedded in curriculum, co-curriculum and the wider student experience.

    From conversations within courses to structured opportunities for reflection and synthesis, students need the means to articulate their value in ways that match their aspirations. They need to design imagined future versions of their stories, develop assets to make them real, test them, succeed and fail, and find direction in serendipity. This project of self, and arriving at that story – a grounded account of who they are now, what they can do and where they might go next – is arguably the apex output of a higher education. It is the point at which years of dispersed learning start to cohere into a sense of direction. And it feels like a very modern version of the old ideal of universities as a place to find oneself.

Perhaps the sector is now better placed, culturally and technologically, to build that kind of recognition model rather than another supplement. Or at the very least, perhaps the combined pressure of AI and a more sceptical conversation about the graduate premium offers enough of a burning platform to make another serious attempt unavoidable.

    A reborn signal

I am being playful. I do not expect anyone to actually give every student a First. Classifications have long endured, and they will not disappear any time soon. Any institution that chose to step away from them would be engaging in a genuine act of brinkmanship. But when confronted with accusations of grade inflation, universities defend their practices with care and detail. What they defend far less often is their students, whose talents and achievements are flattened by the very system we insist on maintaining. We treat accusations of inflation as threats to standards, rather than prompts to talk about value.

    The purpose of these thought experiments is to renew curiosity about what a better signal might look like. One that does justice to the richness of learners’ journeys and speaks more honestly about the value higher education adds. One that helps employers, communities and students themselves to see capability in a world where tools like AI are part of the furniture, and where value is found in how learning connects with real contexts.

    At heart, this is about what and whom we choose to value, and how we show it. Perhaps it is time to return to the thread Burgess began and to pick it up properly this time, with the courage that moment represented and the bravery our students deserve.

    Join Mark and Team Wonkhe at The Secret Life of Students on Tuesday 17 March at the Shaw Theatre in London to keep the conversation going about what it means to learn as a human in the age of AI. 

    Source link

  • We must help the next generation get from classrooms to careers with real guidance, not guesswork

    We must help the next generation get from classrooms to careers with real guidance, not guesswork

    by Jason Joseph, The Hechinger Report
    December 2, 2025

    Too many high school graduates are unsure how their education connects to their future. Even the most driven face a maze of options, with little guidance on how classroom experiences connect to real-world careers. 

    It’s no wonder that fewer than 30 percent of high school students feel “very prepared” to make life-after-graduation decisions, according to a recent study. 

    This isn’t just an education gap; it’s an economic fault line. During this period of significant economic transition, when the labor market is demanding specialized skills and adaptability, students must be prepared for what comes next. 

And yet they are not, in part because our job market is increasingly opaque to those without established networks. Many jobs are filled through networking and referrals. But few young people have access to such resources, and the result is a generation attempting to launch careers through guesswork instead of guidance. This lack of access is hindering not only the replenishment of America’s workforce but also American competitiveness on the world stage.

    Consider this: Some 45 percent of employers struggle to fill entry-level roles — often because applicants lack the skills they need, a 2023 McKinsey survey found. Yet nearly half of recent college graduates end up underemployed, Higher Ed Dive reports, providing clear evidence of a disconnect between degrees earned and jobs available. 

At the same time, post-pandemic disengagement among young people, companies’ growing interest in skills-based hiring and increasing automation have altered the employment landscape forever.

So let’s be clear: we need a top-to-bottom shift from reactive hiring to the pragmatic creation of more intentional pathways. Bipartisan voices are calling for better alignment between K-12 education and workforce needs. Improving this alignment, in turn, offers critical opportunities to invest in career navigation and employer engagement systems.

    Some states are already demonstrating what’s possible. In South Carolina, SC STEM Signing Day honors students from every county who choose career paths in STEM, regardless of whether they’re attending a four-year college, a two-year program or starting a skilled apprenticeship.  

    This initiative reflects a broader truth: Higher education is one of many valuable pathways, but not the only one.  

    Initiatives such as SC Future Makers have facilitated tens of thousands of virtual conversations between students and professionals, helping young people understand real-world connections between classroom skills and career outcomes.  

    This model, which pairs digital scale with local relevance, offers a replicable playbook. And it’s working elsewhere. Tallo, a career development platform, powers dozens of virtual employer events and digital campaigns each year, from regional showcases to national hiring days. In partnership with AVID and SME, Tallo has helped young people secure job interviews, land internships and earn recognized credentials. 

    States like Indiana and Tennessee are also finding new ways to connect degrees to jobs. Through programs like Next Level Jobs and Tennessee Pathways, these states incentivize employer engagement in high school career navigation and align funding to skills-based training.  

    All these models emphasize scalable, bipartisan approaches, and they are not only much needed and possible — they’re already in motion. 

The consequences of career misalignment extend beyond personal frustration — they ripple across the economy. Youth disconnection costs American taxpayers billions of dollars in government expenditures and lost tax revenue.

Closing this gap is thus both a moral imperative and an economic strategy. Technology is already playing a growing role in helping students make more informed decisions about their futures.

Of course, real obstacles remain: resource constraints, outdated mindsets and legacy policies often slow progress. Yet successful states, communities and technology platforms are proving that it’s possible to build flexible, sustainable models when schools, employers and local leaders align around shared goals. Moving from promising pockets to national progress will take coordinated investment, public-private alignment and bold leadership.

    The stakes could not be higher. We need career pathways to succeed. 

    This is a generation ready to act if we give them the tools. That means better data, stronger networks and clearer paths forward.  

    Let’s replace chance with strategy and replace confusion with opportunity. 

    With smarter systems and stronger collaboration, we can help more young people build meaningful careers and meet the needs of a changing economy. 

    Jason Joseph is corporate chief of staff at Stride Inc., a leading education company that has served more than two million students nationwide. 

    Contact the opinion editor at [email protected]. 

    This story about career education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter. 

This article first appeared on The Hechinger Report (https://hechingerreport.org) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).

    Source link

  • Ways to optimize college for real world experience

    Ways to optimize college for real world experience

Top Ways To Optimize College Education For The Real World Work Environment

    There’s a tremendous amount of work—and sustained effort—that goes into guiding a high school student through graduation and into a great college or university. But once they arrive on campus at their dream school, students quickly learn that a whole new set of exciting (and often challenging) expectations awaits them.

    One of the most important things we do as advisors is help families optimize their efforts—not just in high school, but throughout the college years as well. Preparing for a successful college experience and a rewarding career takes more than financial planning. It requires strategy, self-awareness, and an understanding of what truly matters over the next four years.

    Because here’s the reality: getting into college is a big achievement, but it doesn’t mean much if a student becomes part of the roughly 32% of college freshmen who never complete their bachelor’s degree. And even among those who do graduate, many enter the workforce without the skills, direction, or experiences that make them competitive job candidates.

    With this in mind, this month’s newsletter highlights several key steps students can take to make their college years meaningful preparation for life after graduation. Students who use these strategies early and intentionally can avoid the frustration far too many new graduates face—earning a diploma but struggling to find a rewarding job.

    After reviewing this month’s newsletter, if you have questions about helping your student prepare for college—and everything that comes after—please reach out. We’re here to support both the academic and the financial pieces of the journey, and our guidance can strengthen your family’s planning for the exciting years ahead.


    1) Begin With the End in Mind

    Some students start college with a clear career path. Many do not. Both situations are perfectly normal—but students without a firm plan should use the early college years to explore interests, build strong academic habits, and open doors for future opportunities.

    A smart first step is front-loading required courses. Knocking out general education classes early gives students more flexibility later—exactly when internships, major coursework, and professional opportunities start to emerge. It also helps them adjust to the academic rigor of college without the added pressure of advanced major-specific classes.

    Students who enter college knowing their intended career path can benefit from the same approach. General education courses are unavoidable, but careful planning—often with the help of an advisor—can reveal classes that count toward both major and core requirements. This streamlines the path to graduation and keeps future options wide open.


    2) Work With Good Academic Advisors

    A good academic advisor is worth their weight in gold. Many colleges assign advisors simply by last name or department availability. While these advisors can help students understand which classes meet which requirements (and that’s important!), they aren’t always the best resource for career-specific guidance.

    Most campuses also have specialty advising offices for competitive career tracks like medicine, law, engineering, or business. These advisors understand the nuances of graduate school applications, interviews, and prerequisite planning.

    Outside of campus, professionals in a student’s field of interest can offer invaluable real-world insight. A strong advisor—whether found inside or outside the university—helps students understand not just what to study, but why it matters for their long-term goals.

    The bottom line: students should actively seek accurate, timely, and career-aligned advice—not just settle for the first advisor they’re assigned.


    3) Don’t Ignore the Value of a Minor

    Majors get most of the attention, but minors can be incredibly useful. They require fewer courses, yet they still add depth and versatility to a student’s academic profile.

    A minor can:

    • highlight a secondary area of interest

    • demonstrate broader skills

    • add practical abilities (like a second language or computer programming)

    • naturally emerge from completing certain prerequisites

    For example, many pre-med students accidentally complete a chemistry minor simply by taking the courses required for medical school applications.

    Minors also look great on résumés. They show commitment, intellectual curiosity, and a willingness to explore beyond the basics.


    4) Diversify Your Options

    We always encourage students to work hard toward their goals—but to stay open-minded, too. Success rarely follows a straight line. Career paths evolve, interests shift, and opportunities arise in unexpected places.

    Students who diversify their plans—by exploring different fields, staying curious, and being open to new experiences—often discover opportunities they never knew existed. Flexibility, paired with ambition, is a powerful combination.

    Encourage your student to aim high, stay engaged, and keep their eyes open. College is a time of tremendous discovery, and the students who embrace that mindset often enjoy the most rewarding outcomes.


    Until next month,

    Source link

  • The NO FAKES Act is a real threat to free expression

    The NO FAKES Act is a real threat to free expression

    Imagine a fourth-grade classroom in which the teacher uses AI to generate a video of Ronald Reagan explaining his Cold War strategy. It’s history in living color, and the students lean in, captivated. Now imagine that same teacher facing thousands of dollars in damages under the proposed NO FAKES Act because the video looks too real.

    That’s not sci-fi. It’s a risk baked into this bill. The NO FAKES Act, introduced this year in both the House and Senate, would create a new federal “digital replication right” letting people control the use of AI-generated versions of their voice or likeness. That means people can block others from sharing realistic, digitally created images of them. The right can extend for up to 70 years after the person’s death and is transferred to heirs. It also lets people sue those who share unauthorized “digital replicas,” as well as the companies that make such works possible.

    A “digital replica” is defined as a newly created, highly realistic representation “readily identifiable” as a person’s voice or likeness. That includes fully virtual recreations and real images or recordings that are materially altered. 

    The bill bans unauthorized public use or distribution of “digital replicas.” But almost all of the covered “replicas” are fully protected by the First Amendment, meaning Congress cannot legislate their suppression.

The bill does list exceptions for “bona fide” news, documentaries, historical works, biographical works, commentary, scholarship, satire, or parody. But there’s a catch. News is exempt only if the replica is the subject of, or materially relevant to, the story. At best, this means any story relating to, say, political deepfakes must be reviewed by an attorney to decide whether the story is “bona fide” news and whether the deepfake is sufficiently relevant to include in the story itself. At worst, it means politicians and other public figures will start suing journalists and others who talk about newsworthy replicas of them whenever they don’t like what was said.

Even worse, the documentary, historical, and biographical exceptions vanish if the work creates a false impression that it’s “an authentic [work] in which the person actually participated.” That swallows the exceptions and makes realistic recreations, like the fourth-grade example above, legally radioactive.

    The reach goes well beyond classrooms, too. Academics using recreated voices for research, documentarians patching gaps in archival footage, artists experimenting with digital media, or writers reenacting leaked authentic conversations could all face litigation. The exceptions are so narrowly drawn that they offer no real protection. And the risk doesn’t end with creators. Merely sharing a disputed clip can also invite a lawsuit.

The bill also targets AI technology itself. Section 2(c)(2)(B) imposes liability on anyone who distributes a tool “primarily designed” to make digital replicas. That vague standard could easily ensnare open-source developers and small startups whose generative AI models sometimes output a voice or face that resembles a real person.

    Then there’s the “notice-and-takedown” regime, modeled after the Digital Millennium Copyright Act. The bill requires online platforms to promptly remove or disable access to any alleged unauthorized “digital replica” once they receive a complaint, or risk losing legal immunity and facing penalties. In other words, platforms that don’t yank flagged content fast enough can be on the hook, which means they’ll likely delete first and ask questions never. That’s a digital heckler’s veto whereby one complaint can erase lawful speech.

    On paper, the NO FAKES Act just looks like a safeguard against misleading and nonconsensual deepfakes. In practice, it would give politicians, celebrities, and other public figures new leverage over how they’re portrayed in today’s media, and grant their families enduring control over how they can be portrayed in history.

    And let’s not forget that existing law already applies to digital replicas. Most states already recognize a right of publicity to police commercial uses of a person’s name, image, or likeness. Traditionally, that protection has been limited to overtly commercial contexts, such as advertising or merchandising. The NO FAKES Act breaks that guardrail, turning a narrow protection into a broad property right that threatens the First Amendment.

AI-generated expression, like all expression, can also be punished when it crosses into unprotected categories such as fraud or defamation. Beyond those limits, government restrictions on creative tools risk strangling the diversity of ideas that free speech makes possible.

    Creativity cannot thrive under a constant need for permission. New mediums shouldn’t mean new muzzles. 

    Source link

  • Bringing Real Transparency to College Pricing

    Bringing Real Transparency to College Pricing

    In today’s unpredictable higher education marketplace, TuitionFit, created by Mark Salisbury, offers something that colleges and universities have refused to provide—clear and honest information about what students actually pay. By gathering and anonymizing financial aid offers that students submit voluntarily, TuitionFit makes visible the hidden world of tuition discounting, where sticker prices are inflated but rarely reflect reality.

The statistics show just how broken and confusing the system has become. For the 2024–25 academic year, private nonprofit colleges awarded institutional grants that equaled 56.3 percent of the published sticker price for first-time, full-time undergraduates and 51.4 percent for all undergraduates. In other words, more than half of published tuition is an illusion. Despite average published tuition of $11,610 at public four-year in-state colleges and $43,350 at private nonprofit institutions, the real net tuition and fees that students pay are far lower. At public four-year schools, inflation-adjusted net tuition has fallen from $4,340 in 2012–13 to $2,480 in 2024–25, while net tuition at private nonprofits has gradually declined from $19,330 in 2006–07 to $16,510 in 2024–25. Families who see terrifying sticker prices often don’t realize that the average all-in, post-aid cost of a four-year degree is closer to $30,000.

These numbers also reveal deep inequities. At very selective private institutions in 2019–20, low-income students paid about $13,410 after aid, while wealthier peers often paid nearly $39,250. Such disparities are rarely explained by the colleges themselves, which prefer to mask their discounting practices with vague averages and opaque award letters.

    This is why TuitionFit is so important. Instead of navigating by distorted averages or marketing spin, students and families can see what peers with similar academic and financial profiles are actually paying. That knowledge provides leverage in negotiating aid offers and choosing institutions that will not leave them with crushing debt. In an era when sticker prices continue to climb while net prices quietly decline, TuitionFit brings clarity at the individual level.

    The Higher Education Inquirer commends Salisbury and TuitionFit for providing a measure of transparency in a system that thrives on opacity. While it cannot by itself resolve the structural inequities of American higher education finance, it arms students and families with something they desperately need: the truth.

    Source link

  • To make real progress on widening participation in higher education, we need a new mission

    To make real progress on widening participation in higher education, we need a new mission

    The promise of higher education as a pathway to opportunity has never been more important, or more precarious.

    While overall university participation has reached record levels, this headline figure masks a troubling reality: where you’re born in England increasingly determines whether you’ll ever set foot on a university campus. And even once students do get their foot in the door, they might not have the support system in place – financially as well as academically – to succeed and thrive.

    It is in this context that the UPP Foundation has today published the concluding paper in its widening participation inquiry. Mission Critical: six recommendations for the widening participation agenda is our attempt to fill in the gaps that the government left in its opportunity mission around widening participation, and to provide targets and mechanisms by which it can achieve success in this area.

    Doing “getting in” right

    For years, the biggest single aim of widening participation work has been “getting in” – ensuring that young people from disadvantaged backgrounds are supported to attend university, most often by undertaking a bachelor’s degree as a residential student. The aim of growing participation has come under political scrutiny in recent years and is no longer an accepted mission across the political spectrum.

But as our inquiry’s earlier papers highlight, there remain significant gaps in participation. Although more young people are going to university than ever before, there are stark disparities in the rates at which young people from different parts of the country attend university. If we believe, as I do, that talent is not simply concentrated in London and the South East, then it follows that if opportunity is to be spread more evenly, participation in higher education needs to grow.

That’s why our first recommendation is a “triple lock” widening participation target: a gap of no more than ten percentage points between the highest and lowest regional HE participation rates; a 50 per cent floor for progression to HE at 18-19 across all regions; and a target for 70 per cent of the whole English population to have studied at level 4 or above by the age of 25, as advocated by Universities UK. Meeting these targets will ensure that “getting in” really is for everyone.

    Onwards and upwards

    But this is not enough in isolation. The people we spoke to in Doncaster and Nottingham made it clear that “getting on” and “getting out” are equally important parts of the widening participation struggle – with the cost of learning a major barrier to full participation in university life.

    With that in mind, we’re calling for the restoration of maintenance loans to 2021 real-terms levels by the end of the decade, as well as additional maintenance grants for those eligible for free school meals in the last six years.

We also want universities that are currently spending millions of pounds on bursaries and hardship funds to put that money towards outreach in the most challenging cold spots, and to ensure that the wider student experiences that undergrads cherish are available to all. That’s why it makes sense for a proportion of the proceeds from the proposed international student fee levy, if introduced, to be ring-fenced to support an expanded access and participation plan regime, prioritising disadvantaged students from cold spot backgrounds.

    Revitalisation

    Finally, widening participation needs to address the short-term mindset that grips young people both before and during their time at university.

Young people are more mindful of their finances than ever before, with many opting out of university in favour of a job in places where graduate careers are scarce, and those who do choose to attend keeping one eye on their present and future earnings even before they’ve graduated.

    If we are to revitalise the widening participation agenda, we have to bring employability to the fore, both by reconfiguring the Office for Students’ B3 metric on positive student outcomes and by bringing employers into the design and outputs of university study. There are already fantastic examples of this working in practice across the sector, such as at London South Bank’s energy advice centre and Bristol University’s career- and community-oriented dental school. It’s time for the sector to pick up these ideas and run with them.

    The young person in Doncaster with the same grades and aspirations as their counterpart in Surrey faces not just different odds of getting to university, but different expectations about what’s possible. When we fail to address these disparities, we’re not just perpetuating inequality, we’re actively weakening the economic foundations that the whole country depends on.

    What our new report offers is a chance to refocus the widening participation agenda around a series of ambitious but achievable targets. Getting in, getting on and getting out are all crucial parts of the higher education cycle, especially for those who otherwise wouldn’t attend. If the government want to take their widening participation priorities seriously, all three aspects need to take their place in the sun.

    Source link