Tag: Graduate

  • Cost Is Graduate Enrollment “Gatekeeper”


    Many graduate programs face funding cuts, enrollment declines and uncertain futures, but a new report describes cost of attendance as the “ultimate gatekeeper” to enrollment.

    Between Aug. 20 and Sept. 8, 2025, the enrollment management consulting firm EAB surveyed 8,106 current and prospective graduate and adult learners about their motivations, financial concerns, program search methods and program preferences.

    The findings, published Thursday in EAB’s 2025 Adult Learner Survey, show that cost ranked as the most important factor in enrollment decisions, surpassing program accreditation, which was last year’s top factor.

    The majority of prospective students (60 percent) said they would eliminate a program from consideration if they perceived it to be “too expensive.” Although data from the National Center for Education Statistics shows that the average annual cost of graduate school is more than $20,000, EAB’s survey found that 39 percent of learners believe anything more than $10,000 is too expensive; 62 percent said they wouldn’t be willing to pay more than $20,000 a year for graduate school.

    “The hopes and expectations of today’s adult learners are colliding with a financial aid system in a period of significant transition,” Val Fox, a senior director and principal in EAB’s adult learner recruitment division, said in a news release. “Federal aid sources are shrinking, and students with low credit scores may not qualify for private loans. This mismatch will make it even harder to sustain enrollment at a time when institutions need domestic adult learners more than ever.”

    Learners’ heightened concerns about cost come as graduate programs also grapple with new federal policies—including caps on graduate student loans, cuts to research funding and visa restrictions for international students—that are making it even harder for institutions to balance their budgets and attract new students.

    At the same time, however, graduate students and adult learners increasingly rely on outside funding. Scholarships were the most commonly cited funding source (52 percent), followed by financial aid, loans or grants, though both categories fell several percentage points compared to last year. Meanwhile, the report found that 25 percent of respondents cited personal or household income as one of their top five funding sources this year, compared to more than 40 percent last year.

    “Success for U.S. graduate schools in 2026 will depend heavily on their ability to adapt recruiting strategies to accommodate policy shifts and evolving student priorities,” Fox said. “Schools need to communicate costs clearly, especially on digital channels, and align their value propositions to individual student interests through hyperpersonalized marketing.”


  • States Should Step Up on Graduate School Aid (opinion)


    Two decades ago, Uncle Sam offered a helping hand for college graduates who desired careers that required advanced degrees by establishing a loan program known as Grad PLUS. That hand has now been withdrawn. Also known as Direct PLUS loans, this program allowed students to borrow beyond the $20,500 limit available through direct unsubsidized loans to cover their full cost of attendance. With the One Big Beautiful Bill Act signed into law last summer, Grad PLUS loans will no longer be an option for prospective graduate students after July 2026.

    The question of whether colleges and universities raise their tuition prices as the availability of federal aid increases has been a hotly debated topic for more than four decades, with contradictory findings. One recent study found that institutions increased their tuition prices after the creation of Grad PLUS, and determined that the funds did not increase access (or completion) for graduate education in general or for underrepresented groups in particular. These findings echo previous studies that also support a positive relationship between government aid and college prices. In contrast, other studies and analyses at the undergraduate level, as well as for graduate business, medical and law programs, have found little evidence of nonprofit institutions increasing tuition in relation to government subsidies. (The for-profit sector is another story.)

    In any case, the elimination of Grad PLUS is a new reality that incoming graduate students will have to face. Now, students in master’s and doctoral programs will only be able to borrow up to $20,500 annually (up to $100,000 total). Students in professional degree programs, like law and medicine, will have a higher cap of $50,000 annually (up to $200,000 total). Additionally, the maximum amount students can borrow from the federal government for their undergraduate and graduate studies combined is $257,500. Students whose costs exceed these limits will have to turn to private loans to cover the remainder; such loans are less accessible for low-income students (who tend to have thinner credit histories) and often come with higher interest rates.
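
    To make the arithmetic behind these caps concrete, here is a minimal sketch, in Python, of the annual gap a borrower would need to cover from other sources. The cap amounts come from the figures above; the $90,000 cost of attendance is a hypothetical example, not data for any real program.

    ```python
    # Annual OBBBA borrowing caps described above; the example cost figure is
    # a hypothetical assumption, not data for any real program.
    ANNUAL_CAPS = {
        "graduate": 20_500,      # master's and doctoral programs
        "professional": 50_000,  # law, medicine and similar programs
    }

    def annual_private_gap(cost_of_attendance: float, program_type: str) -> float:
        """Amount a borrower would need beyond federal loans in a single year."""
        return max(0.0, cost_of_attendance - ANNUAL_CAPS[program_type])

    # A professional program with a $90,000 annual cost of attendance leaves a
    # $40,000 gap to be met by savings, institutional aid or private loans.
    print(annual_private_gap(90_000, "professional"))  # 40000.0
    ```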

    The specific impact of these new limits on students is not yet known, but if we look at data for borrowers from previous years, we see potential impacts. In 2019–20, approximately 38 percent of all graduate borrowers borrowed beyond these caps, according to an analysis by Jobs for the Future. When disaggregated by degree type, 41 percent of graduate borrowers pursuing master’s degrees, 37 percent pursuing Ph.D. degrees and 25 percent pursuing professional degrees borrowed beyond the loan caps set by OBBBA.

    A recent analysis published by American University’s Postsecondary Education & Economics Research Center shows potential impacts not just by graduate degree type but also by specific field of study. For professional degrees (with the higher loan cap), more than half of borrowers for chiropractic, medicine, osteopathy and dentistry programs borrowed more than $200,000 for their degrees in recent years. Among the master’s programs reviewed, half or more of borrowers in programs including audiology/speech pathology, public health, nursing and school and mental health counseling, to name a few, borrowed beyond the new limits.

    Based on these analyses, it is clear that many prospective graduate students will be impacted by the new loan caps, at least in the short term. The rationale for these loan caps is that graduate programs will lower their costs to make graduate education more affordable, although it is doubtful that colleges will decrease the costs of graduate programs within just a year. It should be noted that many students do not borrow at all to obtain their degrees. In 2019–20, approximately 40 percent of full-time domestic students enrolled in master’s degrees did not borrow.

    For programs that attract students from high-income backgrounds (usually at selective, elite institutions), what incentive is there to decrease costs if enough students can pay out of pocket? For instance, between 2014 and 2019, medical school matriculants from high-income backgrounds (incomes over $200,000) increased substantially. The number of students attending law schools from wealthy backgrounds has also increased in the past couple of decades, particularly at selective, elite institutions. Graduate education, at least at elite schools, has become less accessible for many low-income students.

    Without financial support, options for low-income students will become even more limited. These students will largely be relegated to less selective public universities, and the more elite private schools will become even less economically diverse than they already are. Financial aid offices will become the de facto second admissions office. Using Massachusetts as an example, our analysis found that the annual cost of attendance exceeds the annual loan limit of $50,000 at every accredited law and medical school in the state. The gap between the cost of attendance and the limit ranges from about $5,600 at the lone public law school (the University of Massachusetts Dartmouth) and $33,000 at the only public medical school (University of Massachusetts Chan) to as high as $71,000 for Harvard Law School and $64,000 for Harvard Medical School.

    Law School (J.D.) | Institution Type | 2025 Estimated Cost of Attendance | Annual COA Above/Below Cap
    Boston College | Private, nonprofit | $99,991 | $49,991
    Boston University | Private, nonprofit | $92,914 | $42,914
    Harvard University | Private, nonprofit | $121,250 | $71,250
    New England Law | Private, nonprofit | $113,279 | $63,279
    Northeastern University | Private, nonprofit | $88,926 | $38,926
    Suffolk University | Private, nonprofit | $96,190 | $46,190
    Western New England University | Private, nonprofit | $74,176 | $24,176
    University of Massachusetts Dartmouth | Public | $55,648 (in-state) | $5,648
    Amounts calculated based on current advertised rates for first-time (entering), full-time students enrolled in daytime, nine-month, on-campus programs.

    Medical School (M.D.) | Institution Type | 2025 Cost of Attendance | Annual COA Above/Below Cap
    Boston University | Private, nonprofit | $100,927 | $50,927
    Harvard University | Private, nonprofit | $113,746 | $63,746
    Tufts University | Private, nonprofit | $99,884 | $49,884
    University of Massachusetts Chan | Public | $83,247 (in-state) | $33,247
    Amounts calculated based on advertised rates for first-time (entering), full-time students enrolled in daytime, 10-month, on-campus programs.

    This simple analysis, of course, does not take into account any institutional grants or scholarships students may be awarded, but the availability of those funds varies with institutional budgets.

    What happens when a deserving medical school applicant gains admission and a financial aid offer, only to realize that they still have a balance of $40,000 after institutional and federal aid is applied? To turn to private lenders, students will likely need either good credit and a substantial income or a cosigner, which may not be an option for many students from underresourced backgrounds. Almost 93 percent of private student loans issued last year had a cosigner. Almost 51 percent of individuals from low- or moderate-income households have limited or poor-to-fair credit. Even if such students are fortunate enough to be offered loans, the interest rates will likely be much higher.

    With Washington Out, States May Have to Intervene

    With the recent federal cuts to Medicaid likely to lead to decreases in state funding for postsecondary education, states may be hesitant to award funds to support students pursuing graduate education—but there are frameworks to help states determine which graduate programs deserve state funding and which type of funding to provide students. Third Way recently produced a framework that categorizes programs by personal return on investment and social value. One possible solution would be to offer accessible loans and state subsidies based on how a state places certain programs in this model.

    For programs that lead to high ROI and social value—for example, dentistry—states that are facing a shortage of dentists could offer accessible (and lower than market rate) loans in exchange for working in certain geographic areas in that state. Providing low-interest loans instead of grants would make sense for this category because dentists are more likely to have high enough earnings (postresidency) that they can repay their loans. Certain localities have set up zero-interest loans for students pursuing specific industries, such as a San Diego County program for aspiring behavioral health professionals (a type of pay-it-forward program).

    Some states, such as Pennsylvania, do have loan repayment programs for certain health occupations in exchange for working in specific areas of their states. Offering this solution without providing accessible loans will only benefit students who come from wealthier families, as they are more likely to have good enough credit or relationships with creditworthy cosigners to access private loans in the first place.

    For programs that are high in social value but low in personal ROI, such as teaching or social work, a state that determines this is an area of need can offer grants to lower the cost of attending these programs and minimize the amount of loans students have to take out, in exchange for service in these fields for a specific period of time. Offering accessible, low-interest loans to students pursuing these careers could still be an option, but it should be secondary or supplemental to grants.

    In line with recommendations from a jointly authored report from the American Enterprise Institute, EducationCounsel and the Century Foundation, states can offer grants to graduate students who demonstrate financial need, in addition to targeted grant aid for certain programs. Already, certain states, such as Maryland, New Mexico, Virginia and Washington, offer grant aid to graduate students in specific fields or based on financial need. Massachusetts also offers a tuition waiver to incentivize students to enroll in graduate programs at its public universities.

    Unfortunately, I was unable to find a single repository of state aid specifically for graduate students from various states. The closest I could find was a report released by the National Association of State Student Grant and Aid Programs for the 2023–24 academic year with data on state-funded expenditures for both undergraduate and graduate student aid. The report shows that only a handful of states allocated more than a million dollars to need-based graduate aid (Arizona, Colorado, Maryland, Minnesota, New Jersey, Texas and Virginia), but does not specify for which programs, nor does it detail how aid is awarded and to which institutions.

    The Education Finance Council also maintains a list of nonprofit loan providers in different states that offer lower-interest or more accessible loans, many of which are state-administered, such as the Massachusetts Educational Financing Authority. States that already administer conditional loans, scholarships, grants or loan forgiveness programs at the undergraduate level should consider expanding these programs to high-demand industries that require postbaccalaureate credentials if they have not already.

    What Can Institutions Do?

    Institutions are the closest to students, and they can play a role as well. Beyond offering need-based grants/scholarships to lower the cost of attendance, institutions can also guide students in the lending process, such as by publishing preferred lenders on financial aid websites. These lenders should have a good reputation with borrowers and offer low interest rates. Examples of institutions that advertise preferred lenders include Baylor University, the University of Iowa and the University of Central Missouri.

    Institutions with more financial resources can either directly partner with lenders to offer lower fixed interest rates through risk sharing or provide loans themselves. Harvard Law School makes loans available to graduate students through a partnership with the Harvard Federal Credit Union. Some private loan providers looking to get into the graduate lending space are now in conversations with institutions about developing new risk-sharing models.

    Many occupations that typically require graduate degrees, such as teaching, nursing and medicine, will face steep shortages in the coming years. States should align aid programs with current and future workforce shortages, determine which graduate programs will exceed federal loan caps and by how much, offer targeted grants for high-social-value but low-earning fields where costs exceed caps, and provide below-market or zero-interest (and accessible) loans for high-social-value, high-earning fields.

    Institutions must act urgently by partnering with accessible, ethical lenders; increasing need-based aid for students who need it most; and protecting students from predatory options. At the very least, institutions can advertise the upcoming student loan changes on their websites. With OBBBA loan caps, Washington is stepping back. Will states and institutions be able to step forward and lead the way in preserving access and promoting economic mobility? Only 2026 will tell.

    Josh Farris is a research and policy specialist and Derrick Young Jr. is cofounder and executive director at Leadership Brainery, a nonprofit organization focused on improving access to graduate education for students from limited-access backgrounds.


  • Penn Graduate Students (GET-UP) Authorize Strike as Contract Talks Falter


    Graduate student workers at Penn have overwhelmingly authorized a strike — a decisive move in their fight for fair pay, stronger benefits, and comprehensive protections. The vote reflects not only deep frustration with stalled negotiations but also the growing momentum of graduate-worker organizing nationwide.

    A year of bargaining — and growing frustration

    Since winning union recognition in May 2024, GET‑UP has spent over a year negotiating with Penn administrators on their first collective-bargaining agreement. Despite 35 bargaining sessions and tentative agreements on several non-economic issues, key demands — especially around compensation, benefits, and protections for international students — remain unmet.

    Many observers see the strike authorization as long overdue. “After repeated delays and insulting offers, this was the only way to signal our seriousness,” said a member of the bargaining committee. Support for the strike among graduate workers is overwhelmingly strong, reflecting a shared determination to secure livable wages and protections commensurate with the vital labor they provide.

    Strike authorization: a powerful tool

    From Nov. 18–20, GET‑UP conducted a secret-ballot vote open to roughly 3,400 eligible graduate employees. About two-thirds voted, and 92% of votes cast authorized a strike, giving the union discretion to halt academic work at a moment’s notice.

    Striking graduate workers, many of whom serve as teaching or research assistants, would withhold all academic labor — including teaching, grading, and research — until a contract with acceptable terms is reached. Penn has drafted “continuity plans” for instruction in the event of a strike, which union organizers have criticized as strikebreaking.

    Demands: beyond a stipend increase

    GET‑UP’s contract demands include:

    • A living wage for graduate workers

    • Expanded benefits: health, vision, dental, dependent coverage

    • Childcare support and retirement contributions

    • Protections for international and immigrant students

    • Strong protections against discrimination and harassment, along with inclusive-pronoun and gender-neutral restroom provisions

    While Penn has agreed to some non-economic protections, many critical provisions remain unresolved. The stakes are high: graduate workers form the backbone of research and teaching at the university, yet many struggle to survive on modest stipends.

    Context: a national wave of UAW wins

    Penn’s graduate workers are part of a broader wave of successful organizing by the United Auto Workers (UAW) and allied graduate unions. Recent years have seen UAW-affiliated graduate-worker locals achieve significant victories at institutions including Cornell, Columbia, Harvard, Northwestern, and across the University of California (UC) system.

    At UC, a massive systemwide strike in 2022–2023 involving tens of thousands of Graduate Student Researchers (GSRs) and Academic Student Employees (ASEs) secured three-year contracts with major gains:

    • Wage increases of 55–80% over prior levels, establishing a livable baseline salary.

    • Expanded health and dependent coverage, childcare subsidies, paid family leave, and fee remission.

    • Stronger protections against harassment, improved disability accommodations, and support for international student workers.

    • Consolidation of bargaining units across ASEs and GSRs, strengthening long-term collective power.

    These gains demonstrate that even large, resource-rich institutions can be compelled to recognize graduate labor as essential, and to provide fair compensation and protections. They also show that coordinated, determined action — including strike authorization — can yield significant, lasting change.

    What’s next

    With strike authorization in hand, GET‑UP holds a powerful bargaining tool. While a strike remains a last resort, the overwhelming support among members signals that the union is prepared to act decisively to secure a fair contract. The UC precedent, along with wins at other UAW graduate-worker locals, suggests that Penn could follow the same path, translating student-worker momentum into meaningful, tangible improvements.

    The outcome could have major implications not just for Penn, but for graduate-worker organizing across the country — reinforcing that organized graduate labor is increasingly a central force in higher education.




  • Why ideas of graduate success need to catch up with portfolio careers


    For many graduates in the creative industries, the question “what do you do?” has never had a simple answer.

    A graduate might be holding down part-time work in a gallery, freelancing in digital design, tutoring on the side, stage managing in the summer, and selling their own work online. It’s a patchwork, a blend, a portfolio.

    And yet when we measure their success through Graduate Outcomes, the official data collection exercise on graduate employment, they’re told to tick a single box. The reality of hybridity is flattened into the illusion of underemployment.

    This is not a trivial issue. Policymakers rely on Graduate Outcomes (and reports based on the collection, like this year’s What do graduates do? out today) to make judgements about which subjects, courses and institutions are “succeeding” in employability terms. Yet in the creative arts, where portfolio working is both the norm and, in many ways, a strength, these categories misrepresent lived reality. The result is a story told back to government, employers and students in which creative graduates appear more precarious, less stable, and less successful than they often are.

    Portfolio careers are current and they’re the future

    The creative economy has been pointing towards this future for years. In What Do Graduates Do?, the creative arts overview that Elli Whitefoot and I authored, we found repeated evidence of graduates combining multiple sources of income (employment, freelancing, self-employment), often in ways that nurtured both security and creativity. The forthcoming 2025 overview by Burtin and Halfin reinforces the same point: hybridity is a structural feature, not a marginal quirk.

    This hybridity is not inherently negative. Portfolio work can provide resilience, satisfaction and autonomy. As Sharland and Slesser argued in 2024, the future workforce needs creative thinkers who can move across boundaries. Portfolio careers develop precisely those capabilities. At the Advance HE Symposium earlier this year, I led a workshop on future-proofing creative graduates through AI, entrepreneurship and digital skills, all of which thrive in a portfolio setting.

    Policy writers and senior leaders need to wake up quickly: creative graduates are early adopters of what more of the labour market is beginning to look like. Academic staff, for example, increasingly combine research grants, teaching roles, consultancy and side projects. Tech and green industries are also normalising project-based work, short-term contracts and hybrid roles. In other words, the creative industries are not an outlier; they are a preview.

    Why measurement matters

    If the data system is misaligned with reality, the consequences are serious. Universities risk being penalised in performance frameworks like TEF or in media rankings if their graduates’ outcomes are deemed “poor.” Students risk being discouraged from pursuing creative courses because outcomes data suggests they are less employable. Policymakers risk designing interventions based on a caricature rather than the real graduate experience.

    As Conroy and Firth highlight, employability education must learn from the present, and the present is messy, hybrid, and global. Yet our data systems remain stuck in a single-job paradigm.

    The wider sector context is equally pressing. Graduate vacancies have collapsed from around 180,000 in 2023 to just 55,000 this year, according to Reed. Almost seven in ten undergraduates are now working during term-time just to keep going, according to the latest Student Academic Experience Survey. And international graduates face higher unemployment rates, around 11 per cent, compared with 3 per cent for UK PGT graduates. The labour market picture is not just challenging; it is distorted when portfolio working is coded as failure.

    Without intervention, this issue will persist. Not because creative graduates are difficult to track, but because our measurement tools are still based on outdated assumptions. It is therefore encouraging that HESA is taking steps to improve the Graduate Outcomes survey questionnaire through its cognitive testing exercise. I am currently working with HESA and Jisc to explore how we can better capture hybrid and portfolio careers. These efforts will help bridge the gap in understanding, but far more nuanced data is needed if we are to fully represent the complex and evolving realities of creative graduates.

    So what should change?

    Data collection needs to become more granular, capturing the combination of employment, self-employment, freelancing and further study rather than forcing graduates into a false hierarchy. Recognising hybridity would make Graduate Outcomes a more accurate reflection of real graduate lives.

    One complicating factor is that students who do not complete a creative programme, for example, those who transfer courses or graduate from non-creative disciplines but sustain a creative portfolio, are even less likely to record or recognise that work within Graduate Outcomes. Because it isn’t linked to their area of study, they rarely see it as a legitimate graduate destination, and valuable evidence of creative contribution goes uncounted.

    We also need to value more than salary. The “graduate premium” may be shrinking in monetary terms, but its non-monetary returns (civic participation, wellbeing and resilience) are expanding. Research from Firth and Gratrick in BERA Bites identifies clear gaps in how universities support learners to develop and articulate these broader forms of employability.

    Evidence must also become richer and longer-term. The work of Prospects Luminate, AGCAS CITG and the Policy and Evidence Centre on skills mismatches shows that snapshot surveys are no longer sufficient. Graduates’ careers unfold over years, not months, and portfolio working often evolves into sustainable, fulfilling trajectories.

    Beyond the UK there are instructive examples of how others have rethought the link between learning and employability. None offers a perfect model for capturing the complexity of graduate working lives, but together they point the way. The Netherlands’ Validation of Prior Learning system recognises skills gained outside formal education, Canada’s ELMLP platform connects education and earnings data to map real career pathways, and Denmark’s register-based labour statistics explicitly track people holding more than one job. If the UK continues to rely on outdated, single-job measures, it risks being left behind.

    Beyond the creative industries

    This is not an argument limited to art schools or design faculties. The wider labour market is moving in the same direction. Skills-based hiring is on the rise, with employers in AI and green sectors already downplaying traditional degree requirements in favour of demonstrable competencies. Academic precarity is, in effect, a form of portfolio career. The idea of a single linear graduate role is increasingly a historical fiction.

    In this context, the creative industries offer higher education a lesson. They have been navigating portfolio realities for decades. Rather than treating this as a problem to be solved, policymakers could treat it as a model to be understood.

    The full beauty of graduate success

    When we collapse a graduate’s career into a single tick-box, we erase the full beauty of what they are building. We turn resilience into precarity, adaptability into instability, creativity into failure.

    If higher education is serious about employability, we need to update our measures to reflect reality. That means capturing hybridity, valuing breadth as well as salary, and designing policy that starts with the lived experiences of graduates rather than the convenience of categories.

    Portfolio careers are not the exception. They are the shape of things to come. And higher education, if it is to remain relevant, must learn how to see them clearly.


  • Can there ever be a definitive graduate premium?


    The idea of a graduate premium is a central plank of the way the Westminster government justifies the level of tuition fees, the existence of maintenance loans, and the design of an increasingly punishing repayment system based on earnings.

    In essence we tell applicants that they will earn more on average, so they will pay more for the privilege of study.

    One policy question that urgently needs attention is whether the graduate premium in an expanding and diverse system is equal to the task of supporting increasingly onerous repayments – and how much (or how little) of this debt needs to be waived because of low graduate salaries in certain industries.

    We should not fall into the trap of equating low salaries with the “worth” of undergraduate study: however poorly we pay them, we need the army of graduates that run the public sector, and even the industrial strategy admits that without the (infamously low-paid) creative industries we may as well pack up the idea of civilisation and go home.

    But we do need to think about whether the system as a whole stacks up in periods like we have been living through – low wage growth overall and high interest rates. And at this point the graduate repayment (annual earnings) threshold isn’t far off the annualised minimum wage.

    The minimum

    The national minimum wage, since 1999, has set hourly lower limits on pay at various age points.

    Compliance is high among employers (though not complete: the ONS estimates that around 447,000 jobs, or 1.5 per cent of all jobs held by those aged 16 or over, were paid below the relevant minimum wage). It has raised earnings among the very lowest paid in society.

    It has probably been the single most transformative means of addressing poverty in recent times: in most years since its introduction the minimum has risen faster than inflation – in real terms the value of the higher rate has increased by 77 per cent since it was introduced.

    [Interactive chart]

    Over a period where wages more generally have largely stagnated in real terms this is a remarkable uplift – and it is to the credit of governments of all stripes that this policy of direct and tangible improvements to low pay has continued through multiple economic downturns.

    But is it possible that a large increase in the earnings of the lowest decile will have an impact on the way we understand the earnings benefits that a degree could bring?

    Certainly if we plot the minimum wage against income percentiles (these are gross figures, at 2016 prices) it is notable how close its value has crept to the tenth percentile of income, suggesting that earnings at the lower end of the spectrum are now bunching at a higher real-terms level.

    [Interactive chart]

    The question has to be what, if any, impact this has on the graduate earnings premium and thus repayments.

    Low earners and graduates

    Currently around 10 per cent of those in employment are paid an hourly wage equivalent to the national minimum wage. If this rate of pay were attached to a full-time role (eight hours a day for each of the 253 annual working days in England) it would make for annual earnings of around £24,700.
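
    As a check on that figure, here is a minimal sketch of the annualisation, assuming the April 2025 National Living Wage of £12.21 an hour for workers aged 21 and over (the article does not state the hourly rate it used, so that rate is an assumption).

    ```python
    # Sketch of the annualisation above: hourly rate x 8 hours x 253 English
    # working days. The £12.21 rate is an assumed figure (April 2025 National
    # Living Wage, age 21+); the article does not quote the rate it used.
    HOURLY_RATE = 12.21
    HOURS_PER_DAY = 8
    WORKING_DAYS_ENGLAND = 253

    annual_earnings = HOURLY_RATE * HOURS_PER_DAY * WORKING_DAYS_ENGLAND
    print(round(annual_earnings))  # 24713, i.e. "around £24,700"
    ```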

    However, workers on a low hourly wage are more likely to be on part-time hours, while we also know that the likelihood of holding a full-time job increases in line with the highest qualification held.

    [Interactive chart]

    The jobs involved are more likely to be elementary roles. In the main, jobs like this are held by those with lower-level qualifications, or no qualifications at all.

    [Interactive chart]

    Conversely, jobs done by graduates are far more likely to be full time, and are more likely to be managerial, professional, and associate professional roles – what the Office for Students calls “graduate jobs” – than jobs held by those with other levels of qualification. Around 60 per cent of graduates are in these roles, compared with around 27 per cent of those with level 3 qualifications (two A levels, so enough to have the option to attend some kind of higher education).

    Strikingly, the number (not the proportion) of graduates in “non-graduate” jobs is broadly similar to the number of those qualified to level 3 with “non-graduate” jobs.

    LEO and the minimum wage

    Instinctively, you’d expect a graduate to be earning comfortably above what is set at a national minimum for reasons of avoiding worker poverty. For this reason, it is fair to assume that gross earnings below the minimum wage relate to part-time work. The canonical failing of LEO is that it doesn’t differentiate between part-time and full-time work, but from the Census (so, 2020–21 issues apply to a certain extent) we know that graduates are less likely to be in part-time work (and more likely to be working at all) than all other groups.

    However, there are industry-based differences, and it is reasonable to assume that subject-based differences between earnings are derived from these. To give one obvious example, part-time work is a huge deal in creative and performing arts – so a lower than expected graduate salary in subjects like these would suggest that graduates are participating (at low/no pay) in the industry they have trained for and supporting this with part-time work.

    With this caveat in mind, I have plotted LEO earnings against income percentiles for the whole working population and the value of the national minimum wage, all indexed to 2016 prices. The available LEO data extends from 2016 through to 2022, and in the latter year salaries across the economy experienced a real-terms downturn – something which (as we see from the chart above) has been cancelled out over the past few years.

    [Interactive chart]

    The two filters allow you to choose a subject area of interest, and to look at graduate gross earnings 1, 3, 5, and 10 years after graduation for each tax year.

    The median gross earnings of graduates are slightly above the median gross earnings of all earners (all ages, all levels of qualification) after ten years – though there is substantial industry-driven variation by subject. After one year (so comparing the gross earnings of 21- to 22-year-olds with national averages) graduate earnings are around the lower quartile – and the intervening years see the difference between the two gradually bridged.

    Recall here that graduates are included within the percentile values – we are not looking here at a premium over non-graduates but a premium when compared to all earners. At the end of the day graduates are probably more concerned with the buying power of their own earnings than whether they are doing better than non-graduates.

    And, given how close the minimum wage is to the repayment threshold, looking at the premium over the minimum wage (in cash terms) is probably a more reasonable thing to do than I would have thought back at the birth of LEO.

    We know prior attainment is one indicator of future salary (mostly as an indicator of deprivation more generally), so here is a visualisation that plots LEO by prior attainment against the annualised minimum wage.

    [Interactive chart]

    How earnings are annualised in LEO

    The temptation with LEO is to read the figures as salaries, and to be fair the presentation of the data does everything it possibly can to encourage that reading. But inside the sausage machine, things are very different.

    The medians and quartiles familiar to us are based on individual graduate tax records for pay as you earn (PAYE, usually used by people in employment) and self assessment (SA, usually used by freelancers and the self-employed).

    With PAYE, earnings for a given tax year are divided by the number of days of employment recorded, to give an average daily wage. This is then multiplied by the number of working days in a tax year (which would appear to be different across the UK due to differing numbers of bank holidays: so 253 in England, 252 in Scotland, and 251 in Northern Ireland) to give annualised earnings.

    Because SA doesn’t offer dates of employment, LEO just uses the raw earnings. Annualised PAYE and raw SA income are then added together to give a final figure for each graduate; these figures are then used to produce the median and quartile data that is published.
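
    Put as code, the calculation described above looks something like the following minimal sketch. The arithmetic follows the text; the function and field names are illustrative assumptions rather than anything LEO itself uses.

    ```python
    # Minimal sketch of the LEO annualisation described above: PAYE earnings
    # are scaled up to a full tax year of working days, while self assessment
    # (SA) income is used as reported. Names here are illustrative assumptions.
    WORKING_DAYS = {"England": 253, "Scotland": 252, "Northern Ireland": 251}

    def annualised_earnings(paye_earnings: float, paye_days_employed: int,
                            sa_earnings: float, nation: str = "England") -> float:
        """Annualised PAYE plus raw SA income for a single graduate."""
        if paye_days_employed > 0:
            daily_wage = paye_earnings / paye_days_employed
            paye_annualised = daily_wage * WORKING_DAYS[nation]
        else:
            paye_annualised = 0.0
        return paye_annualised + sa_earnings

    # Example: £10,000 earned over 126 days of PAYE employment plus £2,000 of
    # SA income is treated as roughly £22,100 of annualised earnings.
    print(round(annualised_earnings(10_000, 126, 2_000)))  # 22079
    ```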

    Another way

    I chanced upon some Labour Force Survey data which neatly cuts across this issue by using gross hourly pay (and, as luck would have it, broken down by NUTS3 regions over a number of years) as a measure of earnings. Big thanks to the ONS team for answering my questions on this one, and for offering me information on the numbers in each group and an extra year of data.

    Now, the LFS isn’t half as good as administrative data – it is a large, representative survey of UK residents that has been dogged by low response rates in recent years – but it was, at the time, classed as official statistics and is thus worth taking reasonably seriously. We do get two big benefits: the first is that with hourly earnings we can compare like with like, rather than needing to compensate for differing patterns of work; the second is that we get some regional data.

    A note of caution on that latter one – I’d be looking at the UK-wide figures more closely, as the NUTS3 regions (roughly equivalent to a top-level local authority) may have quite low numbers of workers in each group (see the tooltips).

    [Interactive chart]

    What jumps out at me here is a clear and substantial wage premium for being a graduate, both nationally and in pretty much any area of the country. This largely holds against any qualification group of comparators, against average hourly earnings for everyone, and (very much) against the national minimum wage for the year in question. If you include loan repayments (take nine per cent off the hourly gross) there are a handful of areas of the UK where graduates are paid less than those with level three qualifications – and these largely map to other measures of deprivation.

    You would expect a result like that given what we know about the impact of place on income and the tendency among graduates to move to maximise opportunities and earnings. But even so, national premiums do hold up and appear to be broadly stable or growing since 2018. You can see the impact of the pandemic here – where graduate earnings overall remained stronger during 2020 and 2021.

    I should note here again that if you compare graduates with all earners, you are including the graduates themselves on both sides of the equation.

    Reasons to be GLMS

    Now you are probably ahead of me here, but the government used to do a graduate focused look at labour force survey data – imaginatively enough, called “graduate labour market statistics” (GLMS). I say “used to” because the 2024 iteration (released in summer 2025) is to be the last one ever. There’s an open consultation (follow the link) if you have thoughts on that – but you need to hurry, as responses are requested by the start of next month.

    The ostensible reason for discontinuing GLMS is the problems faced by LFS – the falling number of responses leading to issues with sample variability. Since 2024 it has been badged as “official statistics in development” (meaning that testing of quality, volatility, and an ability to meet user needs is underway), while improvements have been made that affect data throughout 2023 and 2024. From 2025 these improvements are fully in effect, and from 2026 a new “transformed labour force survey” (TLFS) will be the means by which ONS generates its whole suite of employment data.

    GLMS has clearly had some recent issues (although to be clear, these issues have not had a meaningful impact on the published national level data) but the data above suggests that it does have the potential (with appropriate caveats) to provide a more nuanced look at qualification level and regional data. Certainly, comparing the graduate population with those who hold at least the two A levels or equivalent that could get them into higher education feels like a simple and meaningful comparison we could learn from.

    A transformed LEO?

    If we are interested in graduate earnings premiums, the most useful thing that could be included in future LEO releases is hourly earnings. This would neatly address the part-time work issue, and focus directly on earning power rather than working patterns (which may vary for a number of reasons).

    Of course, earnings are only one part of the benefit of being a graduate – and for some (I’m looking at my creative peers here) the ability to make enough money to live on by doing the thing they love is probably going to be a bigger incentive than the ability to earn more than their neighbour. That’s not to say the salary data isn’t important for them to see, but telling me that I won’t earn much as a musician is not going to stop me from wanting to study music.

    That said, it does appear that (over the last few years at least) median graduate earnings have remained stable (or grown slightly) in real terms when compared to a given percentile of income tax payers. This isn’t an entirely fair comparison – LEO data includes non-taxpayers and this particular HMRC data does not – but as a benchmarking tool it is interesting. By default I’m showing all but the top 10 percentiles of taxpayer income, alongside LEO by subject, and the minimum wage (all at 2016 prices).

    [Interactive chart]

    We know in LEO that a number of factors influence earnings: provider and subject (yes), but also prior disadvantage (of which prior attainment is one visible metric), sex, industry of employment (an economist will earn more in a bank than in a university), and region of employment. And if you control for all of these factors you are not going to get big enough groups to make statistically valid observations.

    All of which is a rather maths-heavy way of saying that past performance does not tell us a great deal about the future career prospects and earnings of a single applicant chosen at random. Looking at very broad, national, figures suggests to me that a boost in earning power (which grows throughout your career) is available for three years of study – but I would caveat that by saying if your sole interest in higher study is to increase your earning power then there are other metrics available that could help you maximise this particular benefit.


  • International Graduate Student Enrollment Drops



    Federal actions to limit immigration have affected many international students’ decision to enroll at U.S. colleges and universities this fall, with several institutions reporting dramatic declines in international student enrollment.

    New Department of Homeland Security data from the Student and Exchange Visitor Information System (SEVIS) for October shows an overall 1 percent decline in international students in the U.S. SEVIS data includes all students on F-1 and M-1 visas, including those enrolled in primary and secondary school, language training, flight school, and other vocational programs.

    According to DHS data, bachelor’s degree enrollment among international students is down 1 percent from October 2024 to October 2025; master’s degree enrollment is down 2 percent, as well. Associate degree programs have 7 percent more international students in October 2025 than the year prior, and international doctoral students are up 2 percent.

    Campus-level data paints a more dramatic picture; an Inside Higher Ed analysis of self-reported graduate international student enrollment numbers from nine colleges and universities finds an average year-over-year decline of 29 percent.

    Some groups, including NAFSA, the association for international educators, have published predictions of how declines in international student enrollment would affect colleges’ overall enrollment and financial health. NAFSA expected a 15 percent decline across the sector and greater drops for master’s degree programs.

    “Master’s [programs] have been very hit. And in addition to master’s being hit, programs like computer sciences and STEM in particular have been mostly affected,” NAFSA CEO Fanta Aw said in a Sept. 19 interview with Inside Higher Ed.

    At the University of Wisconsin at Madison, for example, master’s degree enrollment dropped 22 percent from fall 2024. Ph.D. program enrollment declined only 1 percent compared to the year prior, according to university data.

    While more selective or elite institutions have mostly weathered enrollment declines among undergraduate international students—reporting little or no change to their enrollment numbers this fall—Aw says graduate student enrollment is down everywhere.

    The University of Pennsylvania’s Wharton School of Business, for example, reported that international students made up 26 percent of its incoming master’s in business administration class, down five percentage points from the year prior, as reported by Poets and Quants (Poets and Quants is also owned by Times Higher Education, Inside Higher Ed’s parent company). At Duke’s Fuqua School of Business, 47 percent of the incoming class in 2024 hailed from other nations, but that figure dropped to 38 percent this fall.

    Because master’s programs are shorter than undergraduate ones, averaging two years, Aw anticipates that universities will see even more dramatic declines from 2024 levels in fall 2026.

    “The current environment is still too uncertain for [graduate] students to even consider potentially applying,” Aw said. “You cannot have enrollment if they’re not even applying.”

    Of colleges in the data set, Northwest Missouri State University reported the greatest year-over-year decline in graduate student enrollment, falling from 557 international students in fall 2024 to 125 in fall 2025. In April, Northwest Missouri State reported that 43 of its international students had their SEVIS statuses revoked; 38 of them were on optional practical training.

    At that time, Northwest Missouri State encouraged students who lost their SEVIS status to depart the U.S. immediately “to avoid accruing unlawful presence,” according to a memo from President Lance Tatum published by Fox 4 Kansas City. The university declined to comment for this piece.

    Nationwide, international students make up 22 percent of all full-time graduate students, according to Integrated Postsecondary Education Data System data. International students often pay higher tuition rates compared to their domestic peers, and some colleges rely on international students to boost graduate program enrollment.

    The dramatic changes in enrollment numbers are having budgetary impacts on some colleges.

    At Georgetown University, foreign graduate student enrollment dropped 20 percent, a decline that was expected but proved steeper than anticipated, according to a memo from interim university president Robert M. Groves. In April, Georgetown cut $100 million from its budget due to the loss of federal research dollars and international student revenue, and Groves said more cuts may be needed in December.

    DePaul University in Chicago saw a 63 percent year-over-year decline in new graduate students from other nations—a sharp drop that administrators, similarly, did not anticipate in this year’s budget.

    As more colleges solidify their fall enrollment numbers, the sectorwide decline in foreign students has become clearer.

    Inside Higher Ed’s initial data found colleges reported, on average, a 13 percent decrease in international student enrollment. The median year-over-year change was a 9 percent drop.

    Small colleges saw significant changes. Bethany Lutheran College in Minnesota, with a total head count of 900 students, reported 50 percent growth in international students. At the other end, the University of Hartford in Connecticut lost half of its international students, expecting only 50 this fall instead of 100.

    Community colleges are also feeling the loss of international students. Bellevue College in Washington State, a leading destination for international students in the two-year sector, reported a 56 percent year-over-year decline in enrollment.

    Southeast Missouri State reported a 63 percent decline in international students, with 494 individuals unable to secure visas, according to a university statement.


  • How generative AI could re-shape professional services and graduate careers


    Join HEPI and the University of Southampton for a webinar on Monday 10 November 2025 from 11am to 12pm to mark the launch of a new collection of essays, AI and the Future of Universities. Sign up now to hear our speakers explore the collection’s key themes and the urgent questions surrounding AI’s impact on higher education.

    This blog was kindly authored by Richard Brown, Associate Fellow at the University of London’s School of Advanced Study.

    Universities are on the front line of a new technological revolution. Generative AI (genAI) use (mainly of large language model-based chatbots like ChatGPT and Claude) is almost universal among students. Plagiarism and accuracy are continuing challenges, and universities are considering how learning and assessment can respond positively to the daunting but uneven capabilities of these new technologies.

    How genAI is transforming professional services

    The world of work that students face after graduation is also being transformed. While it is unclear how much of the current slowdown in graduate recruitment can be attributed to current AI use, or to uncertainty about its long-term impacts, it is likely that graduate careers will see great change as the technology develops. Surveys by McKinsey indicate that adoption of AI spread fastest between 2023 and 2024 in media, communications, business, legal and professional services – the sectors with the highest proportions of graduates in their workforce (around 80 per cent in London and 60 per cent in the rest of the UK).

    ‘Human-centric’, a new report from the University of London, looks at how AI is being adopted by professional service firms, and at what this might mean for the future shape and delivery of higher education.

    The report identifies how AI is being adopted both through grassroots initiatives and corporate action. In some firms, genAI is still the preserve of ‘secret cyborgs’ –  individual workers using chatbots under the radar. In others, task forces of younger workers have been deployed to find new uses for the tech to tackle chronic workflow problems or develop new services. Lawyers and accountants are codifying expertise into proprietary knowledge bases. These are private chatbots that minimise the risks of falsehood that still plague open systems, and offer potential to extend cheap professional-grade advice to many more people.

    Graduate careers re-thought

    What does this mean for graduate employment and skills? Many of the routine tasks frequently allocated to graduates can be automated through AI. This could be a double-edged sword. On the one hand, genAI may open up more varied and engaging ways for graduates to develop their skills, including the applied client-facing and problem-solving capabilities that underpin professional practice.

    On the other hand, employers may question whether they need to employ as many graduates. Some of our interviewees talked of the potential for the ‘triangle’ structure of mass graduate recruitment being replaced by a ‘diamond-shaped’ refocus on mid-career hires. The obvious problem with this approach – of where mid-career hires will come from if there is no graduate recruitment – means that graduate recruitment is unlikely to dry up in the short term, but graduate careers may look very different as the knowledge economy is transformed.

    The agile university in an age of career turbulence

    This will have an impact on universities as well as employers. AI literacy, and the ability to use AI responsibly and authentically, are likely to become baseline expectations – suggesting that this should be core to university teaching and learning. Intriguingly, this is less about traditional computing skills and more about setting AI in context: research shows that software engineers were less in demand in early 2025 than AI ethicists and compliance specialists.

    Broader ‘soft’ skills (what a previous University of London / Demos report called GRASP skills – general, relational, analytic, social and personal) will remain in demand, particularly as critical judgement, empathy and the ability to work as a team remain human-centric specialities. Employers also said that, while deep domain knowledge was still needed to assess and interrogate AI outputs, they were also looking for employees with a broader understanding of issues such as cybersecurity, climate regulation and ESG (Environmental, Social, and Governance), who could work across diverse disciplines and perspectives to create new knowledge and applications.

    The shape of higher education may also need to change. Given the speed of advances in AI, it is likely that most propositions about which skills will be needed in the future may quickly become outdated (including this one). This will call for a more responsive and agile system, which can experiment with new course content and innovative teaching methods while sustaining the rigour that underpins the value of degrees and other qualifications.

    As the Lifelong Learning Entitlement is implemented, the relationship between students and universities may also need to become more long-term, rather than an intense three-year affair. Exposure to the world of work will be important too, but this needs to be open to all, not just to those with contacts and social capital.

    Longer term – beyond workplace skills?

    In the longer term, all bets are off, or at least pretty risky. Public concerns (over everything from privacy, to corporate control, to disinformation, to environmental impact) and regulatory pressures may slow the adoption of AI. Or AI may so radically transform our world that workplace skills are no longer such a central concern. Previous predictions of technology unlocking a more leisured world have not been realised, but maybe this time it will be different. If so, universities will not just be preparing students for the workplace, but also helping students to prepare for, shape and flourish in a radically transformed world.


  • What might lower response rates mean for Graduate Outcomes data?


    The key goal of any administered national survey is for it to be representative.

    That is, the objective is to gather data from a section of the population of interest in a country (a sample), which then enables the production of statistics that accurately reflect the picture among that population. If this is not the case, the statistic from the sample is said to be inaccurate or biased.

    A consistent pattern that has emerged both nationally and internationally in recent decades has been the declining levels of participation in surveys. In the UK, this trend has become particularly evident since the Covid-19 pandemic, leading to concerns regarding the accuracy of statistics reported from a sample.

    Much of the focus in the media has been on the falling response rates to the Labour Force Survey and the consequences of this on the ability to publish key economic statistics (hence their temporary suspension). Furthermore, as the recent Office for Statistics Regulation report on the UK statistical system has illustrated, many of our national surveys are experiencing similar issues in relation to response rates.

    Relative to other collections, the Graduate Outcomes survey continues to achieve a high response rate. Among the UK-domiciled population, the response rate was 47 per cent for the 2022-23 cohort (once partial responses are excluded). However, this is six percentage points lower than what we saw in 2018-19.

    We recognise the importance to our users of being able to produce statistics at sub-group level, and thus the need for high response rates. For example, the data may be used to support equality of opportunity monitoring and regulatory work, and to understand course outcomes to inform student choice.

    So HESA has been exploring ways to improve response rates, such as strategies to boost online engagement, and offering guidance on how the sector can support us in meeting this aim – for example, by outlining best practice for maintaining graduates’ contact details.

    We also need, on behalf of everyone who uses Graduate Outcomes data, to think about the potential impact of an ongoing pattern of declining response rates on the accuracy of key survey statistics.

    Setting the context

    To understand why we might see inaccurate estimates in Graduate Outcomes, it’s helpful to take a broader view of survey collection processes.

    It will often be the case that a small proportion of the population will be selected to take part in a survey. For instance, in the Labour Force Survey, the inclusion of residents north of the Caledonian Canal in the sample to be surveyed is based on a telephone directory. This means, of course, that those not in the directory will not form part of the sample. If these individuals have very different labour market outcomes to those that do sit in the directory, their exclusion could mean that estimates from the sample do not accurately reflect the wider population. They would therefore be inaccurate or biased. However, this cause of bias cannot arise in Graduate Outcomes, which is sent to nearly all those who qualify in a particular year.

    Where the Labour Force Survey and Graduate Outcomes are similar is that submitting answers to the questionnaire is optional. So, if the activities in the labour market of those who do choose to take part are distinct from those who do not respond, there is again a risk of the final survey estimates not accurately representing the situation within the wider population.

    Simply increasing response rates will not necessarily reduce the extent of inaccuracy or bias that emerges. For instance, a survey could achieve a response rate of 80 per cent, but if it does not capture any unemployed individuals (even when it is well known that there are unemployed people in the population), the labour market statistics will be less representative than a sample based on a 40 per cent response rate that captures those in and out of work. Indeed, the academic literature also highlights that there is no clear association between response rates and bias.

    It was the potential for bias to arise from non-response that prompted us to commission the Institute for Social and Economic Research back in 2021 to examine whether weighting needed to be applied. Their approach was as follows. First, they recognised that, for any given cohort, the final sample composition could have been different had the survey been run again (holding all else fixed). The sole cause of this would be a change in the group of graduates who choose not to respond: because Graduate Outcomes invites almost all qualifiers to participate, the variation cannot arise from a different random sample being drawn at the outset, as it might in other survey collections.

    The consequence is that repeating the collection process for any given cohort could generate different statistics. Prior to weighting, the researchers therefore created intervals – including at provider level – for the key survey estimate (the proportion in highly skilled employment and/or further study) that were highly likely to contain the true (but unknown) value in the wider population. They then evaluated whether the weighted estimates sat within these intervals; where they did, they concluded there was no evidence of bias. This was what they found in the majority of cases, leading them to state that there was no evidence of substantial non-response bias in Graduate Outcomes.
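
    To make that logic concrete, here is a minimal sketch on synthetic data, using a simple percentile bootstrap over respondents for a single provider. The researchers’ actual interval construction and weighting scheme are not described above, so every number and method choice below is an illustrative assumption rather than the method they used.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic respondents for one provider: 1 = in highly skilled employment
    # and/or further study, 0 = not. Purely illustrative values.
    respondents = rng.binomial(1, 0.72, size=800)

    # Resample respondents to mimic "a different group of graduates choosing
    # to respond", and build a 95% percentile interval for the key proportion.
    boot = [rng.choice(respondents, size=respondents.size, replace=True).mean()
            for _ in range(5000)]
    lower, upper = np.percentile(boot, [2.5, 97.5])

    weighted_estimate = 0.705  # a hypothetical post-weighting estimate

    print(f"unweighted estimate: {respondents.mean():.3f}")
    print(f"95% interval: ({lower:.3f}, {upper:.3f}); "
          f"weighted estimate inside? {lower <= weighted_estimate <= upper}")
    ```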

    What would be the impact of lower response rates on statistics from Graduate Outcomes?

    We are not the only organisation administering a survey to have examined this question. For instance, the Scottish Crime and Justice Survey (SCJS) has historically had a target response rate of 68 per cent (in Graduate Outcomes, our target has been a response rate of 60 per cent among UK-domiciled individuals). In SCJS, this goal was never achieved, prompting research to explore what would happen if lower response rates were accepted.

    SCJS relies on face-to-face interviews, with a certain fraction of the non-responding sample being reissued to different interviewers in the latter stages of the collection process to boost response rates. For their analysis, they looked at how estimates would change had they not reissued the survey (which tended to increase response rates by around 8-9 percentage points). They found that choosing not to reissue the survey would not make any material difference to key survey statistics.

    Graduate Outcomes data is collected across four waves from December to November, with each collection period covering approximately 90 days. During this time, individuals have the option to respond either online or by telephone. Using the 2022-23 collection, we generated samples that would lead to response rates of 45 per cent, 40 per cent and 35 per cent among the UK-domiciled population by assuming the survey period was shorter than 90 days. In a similar spirit to the SCJS analysis, therefore, we looked at what would have happened to our estimates had we cut short the later stages of the collection process.

    From this point, our methodology was similar to that deployed by the Institute for Social and Economic Research. For the full sample we achieved (i.e. based on a response rate of 47 per cent), we began by generating intervals at provider level for the proportion in highly skilled employment and/or further study. We then examined whether the statistic observed at response rates of 45 per cent, 40 per cent and 35 per cent sat within this interval. If it did, our conclusion was that there was no material difference in the estimates.
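
    The sketch below shows the general shape of that exercise on synthetic data: each (invented) graduate is assigned a response day within the roughly 90-day window, the window is truncated until a target response rate is reached, and each provider’s truncated estimate is checked against an interval built from the full sample. The column names, the bootstrap interval and all figures are assumptions for illustration only, not the actual Graduate Outcomes data or processing.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)

    # Synthetic cohort: which provider each graduate attended, the day of the
    # (roughly 90-day) field period on which they responded (NaN = never
    # responded), and the key binary outcome. All values are invented.
    n = 20_000
    df = pd.DataFrame({
        "provider": rng.integers(0, 50, size=n),
        "response_day": np.where(rng.random(n) < 0.47,
                                 rng.integers(1, 91, size=n), np.nan),
        "highly_skilled": rng.binomial(1, 0.70, size=n),
    })
    full = df[df["response_day"].notna()]

    def provider_interval(outcomes, n_boot=2000):
        """95% percentile-bootstrap interval for a provider's proportion."""
        boots = [rng.choice(outcomes, size=outcomes.size, replace=True).mean()
                 for _ in range(n_boot)]
        return np.percentile(boots, [2.5, 97.5])

    def truncate_to_rate(df, target_rate):
        """Shorten the field period until the overall response rate drops to the target."""
        for day in range(90, 0, -1):
            sub = df[df["response_day"] <= day]
            if len(sub) / len(df) <= target_rate:
                return sub
        return df[df["response_day"] <= 1]

    intervals = {p: provider_interval(g["highly_skilled"].to_numpy())
                 for p, g in full.groupby("provider")}

    for rate in (0.45, 0.40, 0.35):
        sub = truncate_to_rate(df, rate)
        outside = 0
        for p, g in sub.groupby("provider"):
            lo, hi = intervals[p]
            if not lo <= g["highly_skilled"].mean() <= hi:
                outside += 1
        print(f"target rate {rate:.0%}: {outside} of {len(intervals)} providers "
              "fall outside the full-sample interval")
    ```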

    Among the 271 providers in our dataset, we found that, at a 45 per cent response rate, only one provider had an estimate that fell outside the intervals created based on the full sample. This figure rose to 10 (encompassing 4 per cent of providers) at a 40 per cent response rate and 25 (representing 9 per cent of providers) at a 35 per cent response rate, though there was no particular pattern to the types of providers that emerged (aside from them generally being large establishments).

    What does this mean for Graduate Outcomes users?

    Those who work with Graduate Outcomes data need to understand the potential impact of a continuing trend of lower response rates. While users can be assured that the survey team at HESA are still working hard to achieve high response rates, the key take-away message from our study is that a lower response rate to the Graduate Outcomes survey is unlikely to lead to a material change in the estimates of the proportion in highly skilled employment and/or further study for the bulk of providers.

    The full insight and associated charts can be viewed on the HESA website:
    What impact might lower response rates have had on the latest Graduate Outcomes statistics?

    Read HESA’s latest research releases. If you would like to be kept updated on future publications, please sign up to our mailing list.

    Source link

  • How rare are colleges that enroll and graduate high shares of Pell Grant students?

    How rare are colleges that enroll and graduate high shares of Pell Grant students?

    When it comes to colleges where at least 55% of Pell Grant recipients graduate, there are not a whole lot throughout the U.S. In fact, nearly half of states — many of them Southern with some of the highest poverty rates in the country — don’t have any at all.

    That’s what Becca Spindel Bassett, a higher education professor at the University of Arkansas, discovered in a recent analysis in which she sought to identify and map institutions of higher education that she describes as “equity engines.”

    These are colleges where at least 34% of the students receive Pell Grants and at least 55% of those Pell Grant recipients earn a bachelor’s degree within six years.
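
    In code, that classification amounts to a simple two-threshold filter on institution-level data. The sketch below is purely illustrative: the column names (pell_share, pell_grad_rate_6yr) and the figures are invented placeholders, not Bassett’s actual dataset or variables.

    ```python
    import pandas as pd

    # Hypothetical institution-level data; names, columns and values are
    # invented for illustration only.
    institutions = pd.DataFrame({
        "name": ["College A", "College B", "College C"],
        "pell_share": [0.41, 0.28, 0.36],          # share of students receiving Pell Grants
        "pell_grad_rate_6yr": [0.61, 0.70, 0.49],  # six-year completion among Pell recipients
    })

    # "Equity engine" rule as described above: at least 34% Pell enrollment
    # AND at least a 55% six-year graduation rate for Pell recipients.
    equity_engines = institutions[
        (institutions["pell_share"] >= 0.34)
        & (institutions["pell_grad_rate_6yr"] >= 0.55)
    ]
    print(equity_engines["name"].tolist())  # -> ['College A']
    ```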

    Out of the 1,584 public and private nonprofit four-year institutions that Bassett studied nationwide, she found only 91 — or less than 6% — that qualified for her “equity engine” distinction.

    And they’re all clustered in 26 states, resulting in what Bassett calls a “spatial injustice” for low-income students who live in one of the states without any equity engines or in areas with limited access to such institutions.

    The almost eight dozen existing equity engines represent a diverse range of institutional types, including regional public universities, small Christian colleges and historically Black institutions. 

    As for whether states can invest more in colleges that are close to being equity engines — a key recommendation of Bassett’s study — it all depends.

    “It’s worth noting that over half of Equity Engines are private colleges and universities, so their relationship to the state and dependency on state funding varies,” Bassett said in an email to Higher Ed Dive.

    But improving Pell graduation rates isn’t only a question of funding models, she said. 

    Leaders at aspiring equity engines can learn best practices and approaches from these colleges and should be prepared to enact “organizational learning and change,” Bassett said. However, much is unknown about what enables colleges to become equity engines, including whether it depends on their programs and services or their policy and funding environments. 

    While Bassett’s study doesn’t answer those questions, a forthcoming book will describe how two of the colleges she identified as equity engines were able to achieve their results, she said. 

    Michael Itzkowitz, founder and president of the HEA Group, a higher ed-focused research firm and consultancy, said in an email that identifying colleges with strong graduation rates is a “good first step” because students who earn a degree “typically earn more than those who do not.” 

    However, Itzkowitz, who under former President Barack Obama served as the director of The College Scorecard — an online federal tool with various data on higher education institutions — added that it’s also critical to consider whether graduates are actually better off economically since “not all institutions and degrees are created equal.”

    “Students who earn a credential at one institution may experience wildly different outcomes if they earned the same degree elsewhere,” he said.

    David Hawkins, chief education and policy officer at the National Association for College Admission Counseling, said in an email that colleges would do well to emulate the equity engines Bassett identified, such as the University of Illinois Chicago. Bassett’s study calls the university a “major driver” of bachelor’s degree completion among Pell Grant recipients in the state, noting those students have a 58% six-year graduation rate.

    Among other things, Hawkins said, such institutions deploy a wide range of services — such as evening or online courses for working students, and transportation to campus — that have been proven to help low-income students cross the finish line.

     “From my perspective, the United States will only remain competitive if we can invest in a postsecondary infrastructure that serves all students who seek opportunity through higher education,” Hawkins said.  

    Source link

  • Framework for GenAI in Graduate Career Development (opinion)

    Framework for GenAI in Graduate Career Development (opinion)

    In Plato’s Phaedrus, King Thamus feared writing would make people forgetful and create the appearance of wisdom without true understanding. His concern was not merely about a new tool, but about a technology that would fundamentally transform how humans think, remember and communicate. Today, we face similar anxieties about generative AI. Like writing before it, generative AI is not just a tool but a transformative technology reshaping how we think, write and work.

    This transformation is particularly consequential in graduate education, where students develop professional competencies while managing competing demands: research deadlines, teaching responsibilities, caregiving obligations and often financial pressures. Generative AI’s appeal is clear; it promises to accelerate tasks that compete for limited time and cognitive resources. Graduate students report using ChatGPT and similar tools for professional development tasks, such as drafting cover letters, preparing for interviews and exploring career options, often without institutional guidance on effective and ethical use.

    Most AI policies focus on coursework and academic integrity; professional development contexts remain largely unaddressed. Faculty and career advisers need practical strategies for guiding students to use generative AI critically and effectively. This article proposes a four-stage framework—explore, build, connect, refine—for guiding students’ generative AI use in professional development.

    Professional Development in the AI Era

    Over the past decade, graduate education has invested significantly in career readiness through dedicated offices, individual development plans and co-curricular programming—for example, the Council of Graduate Schools’ PhD Career Pathways initiative involved 75 U.S. doctoral institutions building data-informed professional development, and the Graduate Career Consortium, representing graduate-focused career staff, grew from roughly 220 members in 2014 to 500-plus members across about 220 institutions by 2022.

    These investments reflect recognition that Ph.D. and master’s students pursue diverse career paths, with fewer than half of STEM Ph.D.s entering tenure-track positions immediately after graduation; the figure for humanities and social sciences also remains below 50 percent over all.

    We now face a different challenge: integrating a technology that touches every part of the knowledge economy. Generative AI adoption among graduate students has been swift and largely unsupervised: At Ohio State University, 48 percent of graduate students reported using ChatGPT in spring 2024. At the University of Maryland, 77 percent of students report using generative AI, and 35 percent use it routinely for academic work, with graduate students more likely than undergraduates to be routine users; among routine student users, 38 percent said they did so without instructor guidance.

    Some subskills, like mechanical formatting, will matter less in this landscape; higher-order capacities—framing problems, tailoring messages to audiences, exercising ethical discernment—will matter more. For example, in a 2025 National Association of Colleges and Employers survey, employers rank communication and critical thinking among the most important competencies for new hires, and in a 2024 LinkedIn report, communication was the most in-demand skill.

    Without structured guidance, students face conflicting messages: Some faculty ban AI use entirely, while others assume so-called digital natives will figure it out independently. This leaves students navigating an ethical and practical minefield with high stakes for their careers. A framework offers consistency and clear principles across advising contexts.

    We propose a four-stage framework that mirrors how professionals actually learn: explore, build, connect, refine. This approach adapts design thinking principles, the iterative cycle of prototyping and testing, to AI-augmented professional development. Students rapidly generate options with AI support, test them in low-stakes environments and refine based on feedback. While we use writing and communication examples throughout for clarity, this framework applies broadly to professional development.

    Explore: Map Possibilities and Surface Gaps

    Exploring begins by mapping career paths, fellowship opportunities and professional norms, then identifying gaps in skills or expectations. A graduate student can ask a generative AI chatbot to infer competencies from their lab work or course projects, then compare those skills to current job postings in their target sector to identify skills they need to develop. They can generate a matrix of fellowship opportunities in their field, including eligibility requirements, deadlines and required materials, and then validate every detail on official websites. They can ask AI to describe communication norms in target sectors, comparing the tone and structure of academic versus industry cover letters—not to memorize a script, but to understand audience expectations they will need to meet.

    Students should not, however, rely on AI-generated job descriptions or program requirements without verification, as the technology may conflate roles, misrepresent qualifications or cite outdated information and sources.

    Build: Learn Through Iterative Practice

    Building turns insight into artifacts and habits. With generative AI as a sounding board, students can experiment with different résumé architectures for the same goal, testing chronological versus skills-based formats or tailoring a CV for academic versus industry positions. They can generate detailed outlines for an individual development plan, breaking down abstract goals into concrete, time-bound actions. They can devise practice tasks that address specific growth areas, such as mock interview questions for teaching-intensive positions or practice pitches tailored to different funding audiences. The point is not to paste in AI text; it is to lower the barriers of uncertainty and blank-page intimidation, making it easier to start building while keeping authorship and evidence squarely in the student’s hands.

    Connect: Communicate and Network With Purpose

    Connecting focuses on communicating with real people. Here, generative AI can lower the stakes for high-pressure interactions. By asking a chatbot to act the part of various audience members, students can rehearse multiple versions of a tailored 60-second elevator pitch, such as for a recruiter at a career fair, a cross-disciplinary faculty member at a poster session or a community partner exploring collaboration. Generative AI can also simulate informational interviews if students prompt the system to ask follow-up questions or even refine user inputs.
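
    As a hedged illustration, the sketch below shows one way such a role-play rehearsal might be scripted, assuming the OpenAI Python SDK and an API key in the environment; the model name, audience description and pitch are placeholders, and any chat-capable tool a student already has access to would serve the same purpose.

    ```python
    # Minimal role-play rehearsal sketch; assumes the OpenAI Python SDK and an
    # OPENAI_API_KEY environment variable. Model, audience and pitch are
    # illustrative placeholders, not a recommended setup.
    from openai import OpenAI

    client = OpenAI()

    audience = "a recruiter at a biotech career fair with two minutes to spare"
    pitch = "I'm a fourth-year PhD student using machine learning to study protein folding..."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would work here
        messages=[
            {"role": "system",
             "content": f"You are {audience}. After hearing the pitch, ask one "
                        "probing follow-up question and give two sentences of feedback."},
            {"role": "user", "content": pitch},
        ],
    )

    print(response.choices[0].message.content)
    ```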

    In addition, students can leverage generative AI to draft initial outreach notes to potential mentors that the students then personalize and fact-check. They can explore networking strategies for conferences or professional association events, identifying whom to approach and what questions to ask based on publicly available information about attendees’ work.

    Even just five years ago, completing this nonexhaustive list of networking tasks might have seemed an impossibility for graduate students with already crammed agendas. Generative AI, however, affords graduate students the opportunity to become adept networkers without sacrificing much time from research and scholarship. Crucially, generative AI creates a low-risk space to practice, while it is the student who ultimately supplies credibility and authentic voice. Generative AI cannot build genuine relationships, but it can help students prepare for the human interactions where relationships form.

    Refine: Test, Adapt and Verify

    Refining is where judgment becomes visible. Before submitting a fellowship essay, for example, a student can ask the generative AI chatbot to simulate likely reviewer critiques based on published evaluation criteria, then use that feedback to align revisions to scoring rubrics. They can A/B test two AI-generated narrative approaches from the build stage with trusted readers, advisers or peers to determine which is more compelling. Before a campus talk, they can ask the chatbot to identify jargon, unclear transitions or slides with excessive text, then revise for audience accessibility.

    In each case, verification and ownership are nonnegotiable: Students must check references, deadlines and factual claims against primary sources and ensure the final product reflects their authentic voice rather than generic AI prose. A student who submits an AI-refined essay without verification may cite outdated program requirements, misrepresent their own experience or include plausible-sounding but fabricated details, undermining credibility with reviewers and jeopardizing their application.

    Cultivate Expert Caution, Not Technical Proficiency

    The goal is not to train students as prompt engineers but to help them exercise expert caution. This means teaching students to ask: Does this AI-generated text reflect my actual experience? Can I defend every claim in an interview? Does this output sound like me, or like generic professional-speak? Does this align with my values and the impression I want to create? If someone asked, “Tell me more about that,” could I elaborate with specific details?

    Students should view AI as a thought partner for the early stages of professional development work: the brainstorming, the first-draft scaffolding, the low-stakes rehearsal. It cannot replace human judgment, authentic relationships or deep expertise. A generative AI tool can help a student draft three versions of an elevator pitch, but only a trusted adviser can tell them which version sounds most genuine. It can list networking strategies, but only actual humans can become meaningful professional connections.

    Conclusion

    Each graduate student brings unique aptitudes, challenges and starting points. First-generation students navigating unfamiliar professional cultures may use generative AI to explore networking norms and decode unstated expectations. International students can practice U.S. interview conventions and professional correspondence styles. Part-time students with limited campus access can get preliminary feedback before precious advising appointments. Students managing disabilities or mental health challenges can use generative AI to reduce the cognitive load of initial drafting, preserving energy for higher-order revision and relationship-building.

    Used critically and transparently, generative AI can help students at all starting points explore, build, connect and refine their professional paths, alongside faculty advisers and career development professionals—never replacing them, but providing just-in-time feedback and broader access to coaching-style support.

    The question is no longer whether generative AI belongs in professional development. The real question is whether we will guide students to use it thoughtfully or leave them to navigate it alone. The explore-build-connect-refine framework offers one path forward: a structured approach that develops both professional competency and critical judgment. We choose guidance.

    Ioannis Vasileios Chremos is program manager for professional development at the University of Michigan Medical School Office of Graduate and Postdoctoral Studies.

    William A. Repetto is a postdoctoral researcher in the Department of English and the research office at the University of Delaware.

    Source link