Tag: decisions

  • Understanding how inflation affects teacher well-being and career decisions

    In recent years, the teaching profession has faced unprecedented challenges, with inflation emerging as a significant factor in educators’ professional lives and career choices. This examination looks at the interplay between rising inflation and teachers’ self-efficacy: their belief in their capacity to carry out their teaching responsibilities effectively and achieve the desired instructional outcomes in the classroom.

    The impact of inflation on teachers’ financial stability has become increasingly evident, with many educators experiencing a substantial decline in their “real wages.” While nominal salaries remain relatively stagnant, the purchasing power of teachers’ incomes continues to erode as the cost of living rises. This economic pressure has created a concerning dynamic where educators, despite their professional dedication, find themselves struggling to maintain their standard of living and meet basic financial obligations.

    A particularly troubling trend has emerged in which teachers are increasingly forced to seek secondary employment to supplement their primary income. Recent surveys indicate that approximately 20 percent of teachers now hold second jobs during the academic year, with this percentage rising to nearly 30 percent during summer months. This necessity to work multiple jobs can lead to physical and mental exhaustion, potentially compromising teachers’ ability to maintain the high levels of energy and engagement required for effective classroom instruction.

    The phenomenon of “moonlighting” among educators has far-reaching implications for teacher self-efficacy. When teachers must divide their attention and energy between multiple jobs, their capacity to prepare engaging lessons, grade assignments thoroughly, and provide individualized student support may be diminished. This situation often creates a cycle where reduced performance leads to decreased self-confidence, potentially affecting both teaching quality and student outcomes.

    Financial stress has also been linked to increased levels of anxiety and burnout among teachers, directly impacting their perceived self-efficacy. Studies have shown that educators experiencing financial strain are more likely to report lower levels of job satisfaction and decreased confidence in their ability to meet professional expectations. This psychological burden can manifest in reduced classroom effectiveness and diminished student engagement.

    Perhaps most concerning is the growing trend of highly qualified educators leaving the profession entirely for better-paying opportunities in other sectors. This “brain drain” from education represents a significant loss of experienced professionals who have developed valuable teaching expertise. The exodus of talented educators not only affects current students but also reduces the pool of mentor teachers available to guide and support newer colleagues, potentially impacting the professional development of future educators.

    The correlation between inflation and teacher attrition rates has become increasingly apparent, with economic factors cited as a primary reason for leaving the profession. Research indicates that districts in areas with higher costs of living and significant inflation rates experience greater difficulty in both recruiting and retaining qualified teachers. This challenge is particularly acute in urban areas where housing costs and other living expenses have outpaced teacher salary increases.

    Corporate sectors, technology companies, and consulting firms have become attractive alternatives for educators seeking better compensation and work-life balance. These career transitions often offer significantly higher salaries, better benefits packages, and more sustainable working hours. The skills that make effective teachers, such as communication, organization, and problem-solving, are highly valued in these alternative career paths, making the transition both feasible and increasingly common.

    The cumulative effect of these factors presents a serious challenge to the education system’s sustainability. As experienced teachers leave the profession and prospective educators choose alternative career paths, schools face increasing difficulty in maintaining educational quality and consistency. This situation calls for systematic changes in how we value and compensate educators, recognizing that teacher self-efficacy is intrinsically linked to their financial security and professional well-being.

  • Head or Heart? How do applicants make decisions about higher education?  

    The blog was kindly authored by Jenny Shaw, Director of Higher Education External Engagement at Unite Students.  

    Thousands of new undergraduates are taking their first steps into higher education, but what has brought them there? Have they weighed up all the evidence, or have they followed their heart? The answer is, of course, much more complicated. 

    The Unite Students Applicant Index, in partnership with HEPI, has tracked the experiences of prospective students since 2022, and this year we asked applicants to tell us, in their own words, why they chose their first-choice higher education provider. An initial analysis seemed to hint at the Teaching Excellence Framework’s influence on international students, but a subsequent deep-dive using inductive coding found a more complex and sometimes surprising story that reveals applicants’ desires, concerns and ambitions. 

    Academic excellence 

    International applicants tended to use terms such as academic excellence, good quality teaching or that the provider was best or excellent for a specific field of study. Some expressed their admiration of academic staff: expert faculty; very excellent teaching team. A few Chinese students described their chosen provider as zhuānyè, translated as ‘professional’ but which also implies specialist expertise. 

    While a few UK students talk positively about teaching in general, for example great academics; good education, their comments more often refer to a specific course. Frequent comments such as top rated for my course or it’s good for psychology suggest that subject-level rankings hold more weight than overall teaching quality. 

    Additionally, about one in four international applicants, though a much smaller proportion of those from the UK, are primarily motivated by overall reputation or prestige. International applicants tend to cite the fame of their chosen university and its place in international or UK rankings. UK applicants tend to be less specific, for example good uni; its reputation and they sometimes use the Russell Group as a signal of high reputation. They also rely on word of mouth or their own perception: I’ve heard good things; It seemed the best.  

    Another common motivation for provider choice is linked to the course of study, independent of course quality. This theme includes the availability of a specific or niche course, the structure or content of a course, or a provider that offers an appealing range of courses. For a few international applicants, the provider has been recommended to them for a specific subject discipline.

    Location, location, location 

    UK applicants have similar motivations, but their choice is more likely to be contingent on location. This could take the form of having to choose the best option that is commutable: It has forensic psychology as a study choice and it isn’t too far from home; or the course being a co-equal motivator alongside the location: I like the course and the city.

    Location more broadly is a major motivation for UK applicants, both as a primary and a secondary factor. For some, this is driven by the need to find a provider within commuting distance. But the theme also includes the choice of a particular location that reflects applicants’ own priorities and lifestyle preferences. This is in line with the growing importance of independence as a motivator: elsewhere in the survey, almost 3 in 10 cite becoming more independent as a top motivator for going into higher education. While location can be a motivator for international applicants, it is much less common and can be linked to personal recommendations or links to family and friends.

    A few applicants were motivated by the supportiveness of a provider. This included being diverse, which we know from the Living Black at University report can be important to applicants from racially minoritised groups. Having good support for international students was also mentioned. A few spoke about mental health or disability support, or just a perception of the university and its staff as understanding or lovely.

    Employability is a surprisingly rare motivator. While other survey questions show the importance of employability generally, it is largely absent as a reason to choose a specific provider. When cited, it usually relates to the university’s offer or services around employability skills; only occasionally does it relate to the university’s track record of graduate employment.

    Vibe check 

    However, a more common theme is the nebulous ‘vibe’, a theme that covers a range of emotionally driven motivations. This may be a particular campus aesthetic, a sense of good fit or a lifestyle preference, and it is surprisingly popular as a primary factor as well as a secondary consideration in combination with other motivations. You may recognise it as a factor in your own higher education choices – I certainly do.

    When it comes to the vibe, international applicants have a greater tendency to reference culture and perceptions of reputation: It has a long history and some beautiful buildings; Because it suits my style and it one of the best universities; The building is full of cultural atmosphere. They also express less specific sentiments such as: Great atmosphere; Because it is my ideal university.  

    UK applicants are more likely to say the university feels like a good fit or a comfortable place: It had a very welcoming feel; It looks like somewhere I’d fit in. They also express reasons that are less specific: Quite lowkey; I like the vibe; It’s soooooo cute. This may be a reflection of the importance of belonging in the student experience, and the higher levels of anxiety about belonging found among UK applicants elsewhere in the survey. 

    However, the last word should go to the applicants, both UK and international, who simply loved their chosen provider.  

    “This was my first choice because it has always been my goal and dream.” 

    “I love it!” 

    For them, this was reason enough. 

  • Trump Political Appointees in Charge of Grant Decisions

    President Donald Trump is now requiring grant-making agencies to appoint senior officials who will review new funding opportunity announcements and grants to ensure that “they are consistent with agency priorities and the national interest,” according to an executive order issued Thursday. And until those political appointees are in place, agencies won’t be able to make announcements about new funding opportunities.

    The changes are aimed at both improving the process of federal grant making and “ending offensive waste of tax dollars,” according to the order, which detailed multiple perceived issues with how grant-making bodies operate. 

    The Trump administration said some of those offenses have included agencies granting funding for the development of “transgender-sexual-education” programs and “free services to illegal immigrants” that it claims worsened the “border crisis.” The order also claimed that the government has “paid insufficient attention” to the efficacy of research projects—noting instances of data falsification—and that a “substantial portion” of grants that fund university-led research “goes not to scientific project applicants or groundbreaking research, but to university facilities and administrative costs,” which are commonly referred to as indirect costs.  

    It’s the latest move by the Trump administration to take control of federally funded research supported by agencies such as the National Science Foundation, the National Institutes of Health and the Department of Energy. Since taking office in January, those and other agencies have terminated thousands of grants that no longer align with their priorities, including projects focused on vaccine hesitancy, combating misinformation, LGBTQ+ health and promoting diversity, equity and inclusion. 

    Federal judges have since ruled some of those terminations unlawful. Despite those rulings, Thursday’s executive order forbids new funding for some of the same research topics the administration has already targeted.  

    It instructs the new political appointees of grant-making agencies to “use their independent judgment” when deciding which projects get funded so long as they “demonstrably advance the president’s policy priorities.” 

    Those priorities include not awarding grants to “fund, promote, encourage, subsidize, or facilitate” the following:

    • “Racial preferences or other forms of racial discrimination by the grant recipient, including activities where race or intentional proxies for race will be used as a selection criterion for employment or program participation;
    • “Denial by the grant recipient of the sex binary in humans or the notion that sex is a chosen or mutable characteristic;
    • “Illegal immigration; or
    • “Any other initiatives that compromise public safety or promote anti-American values.”

    The order also instructs senior appointees to give preference to applications from institutions with lower indirect cost rates. (Numerous agencies have also moved to cap indirect research cost rates for universities at 15 percent, but federal courts have blocked those efforts for now.)

  • Careers services can help students avoid making decisions based on AI fears

    How students use AI tools to improve their chances of landing a job has been central to the debate around AI and career advice and guidance. But there has been little discussion about AI’s impact on students’ decision making about which jobs and sectors they might enter.

    Jisc has recently published two studies that shine light on this area. Prospects at Jisc’s Early Careers Survey is an annual report that charts the career aspirations and experiences of more than 4,000 students and graduates over the previous 12 months. For the first time, the survey’s dominant theme was the normalisation of the use of AI tools and the influence that discourse around AI is having on career decision making. And the impact of AI on employability was also a major concern of Jisc’s Student Perceptions of AI Report 2025, based on in-depth discussions with over 170 students across FE and HE.

    Nerves jangling

    The rapid advancements in AI raise concerns about its long-term impact, the jobs it might affect, and the skills needed to compete in a jobs market shaped by AI. These uncertainties can leave students and graduates feeling anxious and unsure about their future career prospects.

    Important career decisions are already being made based on perceptions of how AI may change work. The Early Careers Survey found that one in ten students had already changed their career path because of AI.

    Plans were mainly altered because students feared that their chosen career was at risk of automation, anticipating fewer roles in certain areas and some jobs becoming phased out entirely. Areas such as coding, graphic design, legal, data science, film and art were frequently mentioned, with creative jobs seen as more likely to become obsolete.

    However, it is important not to get carried away on a wave of pessimism. Respondents were also pivoting to future-proof their careers. Many students see huge potential in AI, opting for careers that make use of the new technology or those that AI has helped create.

    But whether students see AI as an opportunity or a threat, the role of university careers and employability teams is the same in both cases. How do we support students in making informed decisions that are right for them?

    From static to electricity

    In today’s AI-driven landscape, careers services must evolve to meet a new kind of uncertainty. Unlike previous transitions, students now face automation anxiety, career paralysis, and fears of job displacement. This demands a shift away from static, one-size-fits-all advice toward more personalised, future-focused guidance.

    What’s different is the speed and complexity of change. Students are not only reacting to perceived risks but also actively exploring AI-enhanced roles. Careers practitioners should respond by embedding AI literacy, encouraging critical evaluation of AI-generated advice, and collaborating with employers to help students understand the evolving world of work.

    Equity must remain central. Not all students have equal access to digital tools or confidence in using them. Guidance must be inclusive, accessible, and responsive to diverse needs and aspirations.

    Calls to action should involve supporting students in developing adaptability, digital fluency, and human-centred skills like creativity and communication. Promote exploration over avoidance, and values-based decision-making over fear, helping students align career choices with what matters most to them.

    Ultimately, careers professionals are not here to predict the future, but to empower all students and early career professionals to shape it with confidence, curiosity, and resilience.

    On the balance beam

    This isn’t the first time that university employability teams have had to support students through change, anxiety, uncertainty or even decision paralysis when it comes to career planning, but the driver is certainly new. Through this uncertainty and transition, students and graduates need guidance from everyone who supports them, in education and the workplace.

    Collaborating with industry leaders and employers is key to ensuring students understand the AI-enhanced labour market, the way work is changing and that relevant skills are developed. Embedding AI literacy in the curriculum helps students develop familiarity and understand the opportunities as well as limitations. Jisc has launched an AI Literacy Curriculum for Teaching and Learning Staff to support this process.

    And promoting a balanced approach to career research and planning is important. The Early Careers Survey found almost a fifth of respondents are using generative AI tools like ChatGPT and Microsoft Copilot as a source of careers advice, and the majority (84 per cent) found them helpful.

    While careers and employability staff welcome the greater reach and impact AI enables, particularly in challenging times for the HE sector, colleagues at an AGCAS event were clear to emphasise the continued necessity for human connection, describing AI as “augmenting our service, not replacing it.”

    We need to ensure that students understand how to use AI tools effectively, spot when the information provided is outdated or incorrect, and combine them with other resources to ensure they get a balanced and fully rounded picture.

    Face-to-face interaction – with educators, employers and careers professionals – provides context and personalised feedback and discussion. A focus on developing essential human skills such as creativity, critical thinking and communication remains central to learning. After all, AI doesn’t just stand for artificial intelligence. It also means authentic interaction, the foundation upon which the employability experience is built.

    Guiding students through AI-driven change requires balanced, informed career planning. Careers services should embed AI literacy, collaborate with employers, and increase face-to-face support that builds human skills like creativity and communication. Less emphasis should be placed on one-size-fits-all advice and static labour market forecasting. Instead, the focus should be on active, student-centred approaches. Authentic interaction remains key to helping students navigate uncertainty with confidence and clarity.

  • Three-quarters of global study decisions determined by cost

    International students are increasingly looking for affordable destinations and alternative programs rather than give up on study abroad due to increasing costs, a new ApplyBoard survey has shown.  

    While 77% of surveyed students ranked affordable tuition fees as the most important factor shaping study decisions, only 9% said they planned to defer their studies based on these concerns, according to a recent student survey from edtech firm ApplyBoard.

    “Students weren’t planning to wait for things to change,” said ApplyBoard senior communications manager Brooke Kelly: “They’re considering new destinations, adjusting which programs they apply to, and accepting that they have to balance work with study, but they’re still planning to study abroad,” she maintained.  

    Just over one in four students said they were considering different study destinations than originally planned, with Denmark, Finland, Nigeria and Italy the most popular emerging destinations.  

    Additionally, 55% of students said they would have to work part-time to afford their study abroad program.  

    After affordability came employability (57%), career readiness (49%), high-quality teaching (47%) and program reputation (45%) as factors shaping student decision-making.

    With students increasingly thinking about work opportunities, software and civil engineering topped students’ career choices, with nursing as the second most popular field. Tech fields including IT, cybersecurity, and data analysis also showed strong interest. 

    What’s more, interest in PhD programs saw a 4% rise on the previous year, while over half of students were considering master’s degrees, indicating that students are increasingly prioritising credentials and post-study work opportunities.  

    The study surveyed over 3,500 students from 84 countries, with the most represented countries being Nigeria, Ghana, Canada, Pakistan, Bangladesh and India.  

    Given its share of international students, it should be noted that China is absent from the top 10 most represented countries.  

    As students’ priorities shift and currencies fluctuate, “diversity will be key to mitigate against increased volatility and to ensure campuses remain vibrant with students from all around the world,” said Kelly.  

    Meanwhile, institutions should increase communication about scholarships and financial aid, offer more hybrid learning experiences and highlight programs on different timelines such as accelerated degrees, she advised.  

    While alternative markets are on the rise, 65% of respondents said they were only interested in studying in one of the six major destinations, with Canada followed by the US, UK, Australia, Germany and Ireland, in order of popularity.  

    Despite Canada’s international student caps, the largest proportion of students said they were ‘extremely’, ‘very’ or ‘moderately’ interested in the study destination, highlighting its enduring appeal among young people.  

    While stricter controls on post-study work were implemented in Canada last year, in a rare easing of policy, the IRCC recently said that all college graduates would once again be eligible for post-study work.

    This change, combined with the fact that international students can still be accompanied by their dependants while studying in Canada, is likely to have contributed to it maintaining its attractiveness, according to Kelly.  

  • 6 steps to a future-focused blueprint: Supporting students in making career decisions

    A study on teenage career uncertainty from the OECD (Organization for Economic Co-operation and Development) underscores a growing concern: 40% of 15-year-olds lack clear career plans, a figure that has risen by over 50% since 2018. This uncertainty is linked to poorer employment outcomes in adulthood, particularly for students with lower academic performance. The study emphasizes that career development programs can significantly reduce this uncertainty by helping students explore interests and align education with potential career paths. However, data from PISA 2022 shows that too few students participate in such initiatives, suggesting a need for broader access to, and promotion of, these programs. 

    The issue that frequently comes to the forefront is the potential disconnect between and among CTE programs, counseling, and academic standards-based classrooms. In conversation, all parties appear to believe in the interconnectedness of these three areas, yet the areas often remain separate and distinct for a variety of reasons. Helping students prepare for life after school and for potential careers needs to be an integral part of every school’s educational vision. This is often demonstrated in graphics and words through a school’s mission, vision, and Portrait of a Graduate. 

    How can educators bring CTE, counseling, and standards-based classrooms together? Let’s look at six strategies through the lens of a curricular-focused learning environment: 

    Facilitating Career Exploration, Awareness, & Application 

    Counselors play a vital role in the success of all students, helping students identify their strengths, interests, and values through a variety of tools including interest assessments and career inventories. They provide one-on-one or group sessions to help students explore specific careers tied to their interests. These activities can guide students toward careers featured in classrooms, courses, and programs. 

    Interdisciplinary Career Units 

    Career exploration and application opportunities can be woven into all subjects. What students are learning in the classroom, and the passions they are discovering, can be connected to potential careers they may want to consider. For example, math classes could include performance tasks around topics such as financial literacy or architecture, requiring teamwork and communication to solve problems. Careers related to language arts could include grant writer, social media marketer, public relations specialist, or journalist, with projects and lessons easily connected to essential content in reading, writing, speaking, and listening. 

    Partnerships between CTE programs and general education teachers can help align these activities with broader learning goals and within and across career clusters and pathways. 

    Project-Based Learning (PBL) 

    Incorporating an instructional strategy such as PBL is something that is common for CTE teachers. Using this pedagogy and incorporating future-ready skills can involve students working on complex, real-world problems over an extended period, requiring them to think critically, collaborate, and communicate effectively. Defined utilizes career-themed projects that can be integrated across subjects, such as developing a marketing plan in business classes or designing solutions for community issues in science. These experiences make skills relevant to future careers while aligning with academic standards. 

    Embedded Communication Training 

    Incorporating oral presentations, team discussions, research, and report writing into assignments across all subjects ensures consistent practice. Weaving active communication strategies into learning activities helps students practice collaboration and interpersonal skills. Projects that require students to do presentations and/or build communication documents that are informative or persuasive promote formative and summative assessments of communication skills. 

    Assessment & Reflection 

    Self-reflection and teacher feedback that connect classroom processes and content to real-world career applications can create powerful “a-ha” moments for students. Rubrics for evaluating skills such as problem-solving can help teachers guide students as they practice those skills throughout their learning experience. Evidence of practice and growth over time can also become part of an evidence-based portfolio for the student. Bringing these ideas together helps students understand the interconnectedness of careers, content, skills, and projects. 

    Collaboration with Employers & Community Partners 

    Schools can establish partnerships with local businesses to provide interactive career days, mentorship programs, and soft skills training. Exposing students to the workplace through job shadowing, internships, or part-time work enables them to understand real-world career dynamics. When possible, incorporating on-site visits through field trips can help introduce students to different work environments and let them see first-hand the connections between school-based learning and future opportunities. 

    Bringing professionals into classrooms for workshops or mentorship allows students to practice skills in real-world contexts. Additionally, business and industry experts can work collaboratively with a curriculum team to create performance tasks, projects, and virtual internships to help students bridge the world of work, academic standards, and skill development and practice. 

  • Data, Decisions, and Disruptions: Inside the World of University Rankings

    University rankings are pretty much everywhere. The earliest university rankings in the U.S. date back to the early 1900s, and the modern ones to the 1983 debut of the U.S. News and World Report rankings. But the kind of rankings we tend to talk about now, international or global rankings, really only date back to 2003 with the creation of the Shanghai Academic Ranking of World Universities.

    Over the decade that followed that first publication, a triumvirate emerged at the top of the rankings pyramid: the Shanghai Rankings, run by a group of academics at Shanghai Jiao Tong University; the Quacquarelli Symonds, or QS, Rankings; and the Times Higher Education’s World University Rankings. Between them, these three rankings producers, particularly QS and Times Higher, created a bewildering array of new rankings, dividing the world up by geography and field of study, mainly based on metrics relating to research.

    Joining me today is the former Chief Data Officer of the Times Higher Education rankings, Duncan Ross. He took over those rankings at a time when it seemed like the higher education world might be running out of things to rank. Under his leadership, though, the Times Higher Impact Rankings, which are based around the 17 UN Sustainable Development Goals, were developed. And that has created a genuinely new hierarchy in world higher education, at least among those institutions that choose to submit to the rankings.  

    My discussion with Duncan today covers a wide range of topics related to his time at THE. But the most enjoyable bit by far, for me anyway, was the bit about the genesis of the Impact Rankings. Listen closely, especially when Duncan talks about how the Impact Rankings came about because THE realized that its industry rankings weren’t very reliable. Fun fact: around that time I got into a very public debate with Phil Baty, the editor of the Times Higher, on exactly that subject. Which means maybe, just maybe, I’m kind of a godparent to the Impact Rankings. But that’s just me. You may well find other points of interest in this very compelling interview. Let’s hand things over to Duncan.


    The World of Higher Education Podcast
    Episode 3.20 | Data, Decisions, and Disruptions: Inside the World of University Rankings 

    Transcript

    Alex Usher: So, Duncan, let’s start at the beginning. I’m curious—what got you into university rankings in the first place? How did you end up at Times Higher Education in 2015?

    Duncan Ross: I think it was almost by chance. I had been working in the tech sector for a large data warehousing company, which meant I was working across many industries—almost every industry except higher education. I was looking for a new challenge, something completely different. Then a friend approached me and mentioned a role that might interest me. So I started talking to Times Higher Education, and it turned out it really was a great fit.

    Alex Usher: So when you arrived at Times Higher in 2015, the company already had a pretty full set of rankings products, right? They had the global rankings, the regional rankings, which I think started around 2010, and then the subject or field of study rankings came a couple of years later. When you looked at all of that, what did you think? What did you feel needed to be improved?

    Duncan Ross: Well, the first thing I had to do was actually bring all of that production in-house. At the time, even though Times Higher had rankings, they were produced by Clarivate—well, Thomson Reuters, as it was then. They were doing a perfectly good job, but if you’re not in control of the data yourself, there’s a limit to what you can do with it.

    Another key issue was that, while it looked like Times Higher had many rankings, in reality, they had just one: the World University Rankings. The other rankings were simply different cuts of that same data. And even within the World University Rankings, only 400 universities were included, with a strong bias toward Europe and North America. About 26 or 27 percent of those institutions were from the U.S., which didn’t truly reflect the global landscape of higher education.

    So the challenge was: how could we broaden our scope and truly capture the world of higher education beyond the usual suspects? And beyond that, were there other aspects of universities that we could measure, rather than just relying on research-centered metrics? There are good reasons why international rankings tend to focus on research—it’s the most consistent data available—but as you know, it’s certainly not the only way to define excellence in higher education.

    Alex Usher: Oh, yeah. So how did you address the issue of geographic diversity? Was it as simple as saying, “We’re not going to limit it to 400 universities—we’re going to expand it”? I think the ranking now includes over a thousand institutions, right? I’ve forgotten the exact number.

    Duncan Ross: It’s actually around 2,100 or so, and in practice, the number is even larger because, about two years ago, we introduced the concept of reporter institutions. These are institutions that haven’t yet met the criteria to be fully ranked but are already providing data.

    The World University Rankings have an artificial limit because there’s a threshold for participation based on the number of research articles published. That threshold is set at 1,000 papers over a five-year period. If we look at how many universities could potentially meet that criterion, it’s probably around 3,000, and that number keeps growing. But even that is just a fraction of the higher education institutions worldwide. There are likely 30,000—maybe even 40,000—higher education institutions globally, and that’s before we even consider community colleges.

    So, expanding the rankings was about removing artificial boundaries. We needed to reach out to institutions in parts of the world that weren’t well represented and think about higher education in a way that wasn’t so Anglo-centric.

    One of the biggest challenges I’ve encountered—and it’s something people inevitably fall into—is that we tend to view higher education through the lens of our own experiences. But higher education doesn’t function the same way everywhere. It’s easy to assume that all universities should look like those in Canada, the U.S., or the UK—but that’s simply not the case.

    To improve the rankings, we had to be open-minded, engage with institutions globally, and carefully navigate the challenges of collecting data on such a large scale. As a result, Times Higher Education now has data on around 5,000 to 6,000 universities—a huge step up from the original 400. Still, it’s just a fraction of the institutions that exist worldwide.

    Alex Usher: Well, that’s exactly the mission of this podcast—to get people to think beyond an Anglo-centric view of the world. So I take your point that, in your first couple of years at Times Higher Education, most of what you were doing was working with a single set of data and slicing it in different ways.

    But even with that, collecting data for rankings isn’t simple, right? It’s tricky, you have to make a lot of decisions, especially about inclusion—what to include and how to weight different factors. And I think you’ve had to deal with a couple of major issues over the years—one in your first few years and another more recently.

    One was about fractional counting of articles, which I remember went on for quite a while. There was that big surge of CERN-related articles, mostly coming out of Switzerland but with thousands of authors from around the world, which affected the weighting. That led to a move toward fractional weighting, which in theory equalized things a bit—but not everyone agreed.

    More recently, you’ve had an issue with voting, right? What I think was called a cartel of voters in the Middle East, related to the reputation rankings. Can you talk a bit about how you handle these kinds of challenges?

    Duncan Ross: Well, I think the starting point is that we’re always trying to evaluate things in a fair and consistent way. But inevitably, we’re dealing with a very noisy and messy world.

    The two cases you mentioned are actually quite different. One is about adjusting to the norms of the higher education sector, particularly in publishing. A lot of academics, especially those working within a single discipline, assume that publishing works the same way across all fields—that you can create a universal set of rules that apply to everyone. But that’s simply not the case.

    For example, the concept of a first author doesn’t exist in every discipline. Likewise, in some fields, the principal investigator (PI) is always listed at the end of the author list, while in others, that’s not the norm.

    One of the biggest challenges we faced was in fields dealing with big science—large-scale research projects involving hundreds or even thousands of contributors. In high-energy physics, for example, a decision was made back in the 1920s: everyone who participates in an experiment above a certain threshold is listed as an author in alphabetical order. They even have a committee to determine who meets that threshold—because, of course, it’s academia, so there has to be a committee.

    But when you have 5,000 authors on a single paper, that distorts the rankings. So we had to develop a mechanism to handle that. Ideally, we’d have a single metric that works in all cases—just like in physics, where we don’t use one model of gravity in some situations and a different one in others. But sometimes, you have to make exceptions. Now, Times Higher Education is moving toward more sophisticated bibliometric measures to address these challenges in a better way.
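    To make the distinction concrete, here is a minimal sketch contrasting whole and fractional counting of a single multi-author paper. It is illustrative only, under the simplifying assumption of one institution tag per author; it is not THE’s production method, which, as Duncan notes, now uses more sophisticated bibliometrics.

    ```python
    from collections import defaultdict

    def paper_credit(papers, fractional=False):
        """Credit institutions for a set of papers.

        papers: a list of author lists, one institution tag per author.
        Whole counting: every institution on a paper gets credit 1 for it.
        Fractional counting: each paper is worth 1 in total, split equally
        across author slots, so an institution gets (its authors) / N.
        """
        credit = defaultdict(float)
        for authors in papers:
            if fractional:
                for inst in authors:
                    credit[inst] += 1.0 / len(authors)
            else:
                for inst in set(authors):
                    credit[inst] += 1.0
        return dict(credit)

    # One hypothetical big-science paper: 4,999 authors at CERN-member labs,
    # one author at a small university.
    paper = ["CERN member lab"] * 4999 + ["Small University"]
    print(paper_credit([paper]))                   # both institutions get 1.0
    print(paper_credit([paper], fractional=True))  # ~0.9998 vs 0.0002
    ```

    Under whole counting, the small university earns as much credit for that one author slot as for a paper it produced alone, which is exactly the distortion fractional counting was introduced to correct.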

    The second issue you mentioned—the voting behavior in reputation rankings—is completely different because it involves inappropriate behavior. And this kind of issue isn’t just institutional; sometimes, it’s at the individual academic level.

    We’re seeing this in publishing as well, where some academics are somehow producing over 200 articles a year. Impressive productivity, sure—but is it actually viable? In cases like this, the approach has to be different. It’s about identifying and penalizing misbehavior.

    At the same time, we don’t want to be judge and jury. It’s difficult because, often, we can see statistical patterns that strongly suggest something is happening, but we don’t always have a smoking gun. So our goal is always to be as fair and equitable as possible while putting safeguards in place to maintain the integrity of the rankings.

    Alex Usher: Duncan, you hinted at this earlier, but I want to turn now to the Impact Rankings. This was the big initiative you introduced at Times Higher Education. Tell us about the genesis of those rankings—where did the idea come from? Why focus on impact? And why the SDGs?

    Duncan Ross: It actually didn’t start out as a sustainability-focused project. The idea came from my colleague, Phil Baty, who had always been concerned that the World University Rankings didn’t include enough measurement around technology transfer.

    So, we set out to collect data from universities on that—looking at things like income from consultancy and university spin-offs. But when the data came back, it was a complete mess—totally inconsistent and fundamentally unusable. So, I had to go back to the drawing board.

    That’s when I came across SDG 9—Industry, Innovation, and Infrastructure. I looked at it and thought, This is interesting. It was compelling because it provided an external framework.

    One of the challenges with ranking models is that people always question them—Is this really a good model for excellence? But with an external framework like the SDGs, if someone challenges it, I can just point to the United Nations and say, Take it up with them.

    At that point, I had done some data science work and was familiar with the tank problem, so I jokingly assumed there were probably 13 to 18 SDGs out there. (That’s a data science joke—those don’t land well 99% of the time.) But as it turned out, there were more SDGs, and exploring them was a real light bulb moment.

    The SDGs provided a powerful framework for understanding the most positive role universities can play in the world today. We all know—well, at least those of us outside the U.S. know—that we’re facing a climate catastrophe. Higher education has a crucial role to play in addressing it.

    So, the question became: How can we support that? How can we measure it? How can we encourage better behavior in this incredibly important sector?

    Alex Usher: The Impact Rankings are very different in that roughly half of the indicators—about 240 to 250 across all 17 SDGs—aren’t naturally quantifiable. Instead, they’re based on stories.

    For example, an institution might submit, This is how we combat organized crime or This is how we ensure our food sourcing is organic. These responses are scored based on institutional submissions.

    Now, I don’t know exactly how Times Higher Education evaluates them, but there has to be a system in place. How do you ensure that these institutional answers—maybe 120 to 130 per institution at most—are scored fairly and consistently when you’re dealing with hundreds of institutions?

    Duncan Ross: Well, I can tell you that this year, over 2,500 institutions submitted approved data—so it’s grown significantly. One thing to clarify, though, is that these aren’t written-up reports like the UK’s Teaching Excellence Framework, where universities can submit an essay justifying why they didn’t score as well as expected—what I like to call the dog ate my student statistics paper excuse. Instead, we ask for evidence of the work institutions have done. That evidence can take different forms—sometimes policies, sometimes procedures, sometimes concrete examples of their initiatives. The scoring process itself is relatively straightforward. First, we give some credit if an institution says they’re doing something. Then, we assess the evidence they provide to determine whether it actually supports their claim. But the third and most important part is that institutions receive extra credit if the evidence is publicly available. If you publish your policies or reports, you open yourself up to scrutiny, which adds accountability.

    A great example is SDG 5—Gender Equality—specifically around gender pay equity. If an institution claims to have a policy on gender pay equity, we check: Do you publish it? If so, and you’re not actually living up to it, I’d hope—and expect—that women within the institution will challenge you on it. That’s part of the balancing mechanism in this process.

    Now, how do we evaluate all this? Until this year, we relied on a team of assessors. We brought in people, trained them, supported them with our regular staff, and implemented a layer of checks—such as cross-referencing responses against previous years. Ultimately, human assessors were making the decisions.

    This year, as you might expect, we’re introducing AI to assist with the process. AI helps us filter out straightforward cases, leaving the more complex ones for human assessors. It also ensures that we don’t run into assessor fatigue. When someone has reviewed 15 different answers to the same question from various universities, the process can get a bit tedious—AI helps mitigate that.
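    As a rough illustration of the tiered credit scheme Duncan describes, here is a minimal sketch. The weights are invented for illustration; THE does not publish its rubric in this form.

    ```python
    def score_answer(claims, evidence_supports, evidence_public):
        """Toy three-tier credit scheme: some credit for a claim, more when
        submitted evidence supports it, extra when that evidence is public
        (hypothetical weights of 0.3 / 0.4 / 0.3)."""
        if not claims:
            return 0.0
        score = 0.3                # some credit for saying you do it
        if evidence_supports:
            score += 0.4           # the evidence backs up the claim
            if evidence_public:
                score += 0.3       # published, so open to outside scrutiny
        return score

    # e.g. an institution with a published gender pay equity policy:
    print(score_answer(claims=True, evidence_supports=True, evidence_public=True))  # 1.0
    ```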

    Alex Usher: Yeah, it’s like that experiment with Israeli judges, right? You don’t want to be the last case before lunch—you get a much harsher sentence if the judge is making decisions on an empty stomach. I imagine you must have similar issues to deal with in rankings.

    I’ve been really impressed by how enthusiastically institutions have embraced the Impact Rankings. Canadian universities, in particular, have really taken to them. I think we had four of the top ten last year and three of the top ten this year, which is rare for us. But the uptake hasn’t been as strong—at least not yet—in China or the United States, which are arguably the two biggest national players in research-based university rankings. Maybe that’s changing this year, but why do you think the reception has been so different in different parts of the world? And what does that say about how different regions view the purpose of universities?

    Duncan Ross: I think there’s definitely a case that different countries and regions have different approaches to the SDGs. In China, as you might expect, interest in the rankings depends on how well they align with current Communist Party priorities. You could argue that something similar happens in the U.S. The incoming administration has made it fairly clear that SDG 10 (Reduced Inequalities) and SDG 5 (Gender Equality) are not going to be top priorities—probably not SDG 1 (No Poverty), either. So in some cases, a country’s level of engagement reflects its political landscape.

    But sometimes, it also reflects the economic structure of the higher education system itself. In the U.S., where universities rely heavily on high tuition fees, rankings are all about attracting students. And the dominant ranking in that market is U.S. News & World Report—the 600-pound gorilla. If I were in their position, I’d focus on that, too, because it’s the ranking that brings in applications.

    In other parts of the world, though, rankings serve a different purpose. This ties back to our earlier discussion about different priorities in different regions. Take Indonesia, for example. There are over 4,000 universities in the country. If you’re an institution like ITS (Institut Teknologi Sepuluh Nopember), how do you stand out? How do you show that you’re different from other universities?

    For them, the Impact Rankings provided an opportunity to showcase the important work they’re doing—work that might not have been recognized in traditional rankings. And that’s something I’m particularly proud of with the Impact Rankings. Unlike the World University Rankings or the Teaching Rankings, it’s not just the usual suspects at the top.

    One of my favorite examples is Western Sydney University. It’s a fantastic institution. If you’re ever in Sydney, take the train out there. Stay on the train—it’s a long way from the city center—but go visit them. Look at the incredible work they’re doing, not just in sustainability but also in their engagement with Aboriginal and Torres Strait Islander communities. They’re making a real impact, and I’m so pleased that we’ve been able to raise the profile of institutions like Western Sydney—universities that might not otherwise get the recognition they truly deserve.

    Alex Usher: But you’re still left with the problem that many institutions that do really well in research rankings have, in effect, boycotted the Impact Rankings—simply because they’re not guaranteed to come first.

    A lot of them seem to take the attitude of, Why would I participate in a ranking if I don’t know I’ll be at the top?

    I know you initially faced that issue with LERU (the League of European Research Universities), and I guess the U.S. is still a challenge, with lower participation numbers.

    Do you think Times Higher Education will eventually crack that? It’s a tough nut to crack. I mean, even the OECD ran into the same resistance—it was the same people saying, Rankings are terrible, and we don’t want better ones.

    What’s your take on that?

    Duncan Ross: Well, I’ve got a brief anecdote about this whole rankings boycott approach. There’s one university—I’m not going to name them—that made a very public statement about withdrawing from the Times Higher Education World University Rankings. And just to be clear, that’s something you can do, because participation is voluntary—not all rankings are. So, they made this big announcement about pulling out. Then, about a month later, we got an email from their graduate studies department asking, Can we get a copy of your rankings? We use them to evaluate applicants for interviews. So, there’s definitely some odd thinking at play here. But when it comes to the Impact Rankings, I’m pretty relaxed about it. Sure, it would be nice to have Oxford or Harvard participate—but MIT does, and they’re a reasonably good school, I hear. Spiderman applied there, so it’s got to be decent. The way I see it, the so-called top universities already have plenty of rankings they can focus on. If we say there are 300 top universities in the world, what about the other 36,000 institutions?

    Alex Usher: I just want to end on a slightly different note. While doing some background research for this interview, I came across your involvement in DataKind—a data charity that, if I understand correctly, you founded. I’ve never heard of a data charity before, and I find the idea fascinating—intriguing enough that I’m even thinking about starting one here. Tell us about DataKind—what does it do?

    Duncan Ross: Thank you! So, DataKind was actually founded in the U.S. by Jake Porway. I first came across it at one of the early big data conferences—O’Reilly’s Strata Conference in New York. Jake was talking about how data could be used for good, and at the time, I had been involved in leadership roles at several UK charities. It was a light bulb moment. I went up to Jake and said, Let me start a UK equivalent! At first, he was noncommittal—he said, Yeah, sure… someday. But I just kept nagging him until eventually, he gave in and said yes. Together with an amazing group of people in the UK—Fran Bennett, Caitlin Thaney, and Stuart Townsend—we set up DataKind UK.

    The concept is simple: we often talk about how businesses—whether in telecom, retail, or finance—use data to operate more effectively. The same is true in the nonprofit sector. The difference is that banks can afford to hire data scientists—charities often can’t. So, DataKind was created to connect data scientists with nonprofit organizations, allowing them to volunteer their skills.

    Of course, for this to work, a charity needs a few things:

    1. Leadership willing to embrace data-driven decision-making.
    2. A well-defined problem that can be analyzed.
    3. Access to data—because without data, we can’t do much.

    Over the years, DataKind—both in the U.S. and worldwide—has done incredible work. We’ve helped nonprofits understand what their data is telling them, improve their use of resources, and ultimately, do more for the communities they serve. I stepped down from DataKind UK in 2020 because I believe that the true test of something successful is whether it can continue to thrive without you. And I’m happy to say it’s still going strong. I kind of hope the Impact Rankings continue to thrive at Times Higher Education now that I’ve moved on as well.

    Alex Usher: Yeah. Well, thank you for joining us today, Duncan.

    Duncan Ross: It’s been a pleasure.

    Alex Usher: And it just remains for me to thank our excellent producers, Sam Pufek and Tiffany MacLennan. And you, our viewers, listeners, and readers, for joining us today. If you have any questions or comments about today’s episode, please don’t hesitate to get in touch with us at [email protected]. Worried about missing an episode of the World of Higher Education? There’s a solution for that: go to our YouTube page and subscribe. Next week, our guest will be Jim Dickinson. He’s an associate editor at Wonkhe in the UK, and he’s also maybe the world expert on comparative student politics. He joins us to talk about the events in Serbia, where the student movement is challenging the populist government of the day. Bye for now.

    *This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service.

  • Making better decisions on student financial support

    By Peter Gray, Chief Executive and Chair of the JS Group.

    As the higher education sector starts to plan its next budget cycle, with many institutions needing to make savings, there is concern about the impact of any cuts on students and how those cuts could negatively affect students’ university experience and performance.

    Universities are bound to look at a range of options to save money, especially given the stormy operating context. But one less-often highlighted aspect of university finances is the cost (and benefit) of the additional financial support universities devote to many of their students. Through cash, vouchers and other means, many universities provide financial help to support with the costs of living and learning.

    Using Universities UK’s annual sector figures as one indicator, roughly 5% of universities’ overall expenditure has gone towards financial support and outreach, equivalent to around £2.5 billion. Although some of this money will inevitably not go directly to students themselves, this is still a significant amount of spending.

    There are, naturally, competing tensions when it comes to considering any changes to targeted financial support. With significant financial pressures on students, exacerbated by the cost-of-living crisis, there is always a very justifiable case for more money. However, with the significant financial pressures universities are facing, there is an equally justifiable case to control costs to ensure financial sustainability. Every university has to manage this tension and trade-offs are inevitable when understanding just how much financial support to give and to whom.

    In many respects, the answers to those questions are partially governed by Access & Participation Plans, with the clear intention that these financial interventions really change student outcomes. However, properly measuring those outcomes is incredibly difficult without a much deeper understanding of student ‘need’ – and understanding these needs comes from being able to identify student spending behaviour (and often doing this in real-time).

    It always amazes me that some APPs state that financial support ‘has had a positive impact on retention’ while others report quite the opposite. I think part of this stems from viewing financial support from the university end of the telescope rather than the student end.

    Understanding real, actual ‘need’ helps to change this. Knowing, for example, that certain groups across the sector (such as Asylum Seekers or Gypsy, Roma, Traveller, Showman and Boater students) are likely to have similar needs would be helpful, and data really help here. Having, using, and sharing data will allow us to draw a bigger picture and better signpost where interventions are most effectively deployed, so that the particular groups of students who need support achieve the right outcomes.

    Technology is at hand to help: Open Banking (for example) is an incredible tool that can not only transform how financial support is delivered but also help to build an understanding of student behaviour.
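
    To make this concrete, here is a minimal sketch of the kind of spending analysis such transaction data enables. Everything in it is hypothetical: the transactions, the keyword-to-category mapping, and the idea of pulling records from an Open Banking provider are illustrative assumptions, not a description of any particular product.

        # A minimal, hypothetical sketch: categorising student transactions.
        # Real data would come, with consent, from an Open Banking provider's
        # API; here an inline list stands in for those records.
        from collections import defaultdict

        # Illustrative transactions: (description, amount in GBP).
        transactions = [
            ("CAMPUS BOOKSHOP", 42.50),
            ("TESCO STORES", 18.20),
            ("TRAINLINE", 35.00),
            ("UNIVERSITY PRINT CREDIT", 5.00),
            ("TESCO STORES", 22.75),
        ]

        # Hypothetical keyword-to-category mapping for student spending.
        CATEGORIES = {
            "BOOKSHOP": "learning materials",
            "PRINT": "learning materials",
            "TESCO": "food",
            "TRAINLINE": "travel",
        }

        def categorise(description: str) -> str:
            """Assign a spending category based on keywords in the description."""
            for keyword, category in CATEGORIES.items():
                if keyword in description:
                    return category
            return "other"

        # Sum spending per category to see where money actually goes.
        totals = defaultdict(float)
        for description, amount in transactions:
            totals[categorise(description)] += amount

        for category, total in sorted(totals.items()):
            print(f"{category}: £{total:.2f}")

    Even a crude categorisation like this starts to reveal patterns of need, for instance heavy travel or materials costs, from the student end of the telescope.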

    Lifting the bonnet and understanding behaviour poses additional questions, such as: When is the right time to give that support? And what form should that support take?

    I am a big proponent of providing financial support as soon as a student starts. When I talk to universities, however, it is clear that the data needed to identify particular groups of students are not readily available at the point of entry, so those students’ needs go unmet. Giving a student financial support in December, when they needed it in September, is not delivering at the point of student need; it is delivering at the point where the university can identify the student. I think there is a growing body of evidence suggesting that the large drop-off in students between September and December is, in part, because of this.

    Some universities in the sector give a small amount of support to all students at the start of the year, knowing that by doing so they will ensure that they can meet the immediate needs of some students. But clearly, some money must also go to those who do not necessarily need it.

    However, and this is where the maths comes in, if the impact of that investment is that more students in need stay at university, then I would argue the return justifies the investment. And the maths is simple: it really doesn’t take many additional students staying to have a profoundly positive impact on university finances. Thus it is certainly worthy of consideration.
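
    A back-of-the-envelope version of that maths, using purely illustrative figures rather than anything from JS Group’s data, might look like this:

        # Illustrative break-even calculation for a universal start-of-year
        # payment. All figures are assumptions for the sake of the example.
        students = 5_000      # first-year intake receiving the payment
        payment = 100         # GBP given to every student at the start of term
        annual_fee = 9_250    # assumed tuition income per retained student

        outlay = students * payment
        break_even = outlay / annual_fee

        print(f"Total outlay: £{outlay:,}")
        print(f"Retained students needed to break even: {break_even:.1f}")
        # On these assumptions, roughly 54 extra students staying (about 1%
        # of the intake) covers the entire cost, before counting fee income
        # from later years of their courses.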

    To me, this is about using financial support to drive the ultimate goal of improving student outcomes, especially retention between September and December: the period when the first return is made, when the most withdrawals occur, and when the least financial support is given.

    As to the nature or format of support: of course, in most cases it is easier to provide cash. However, again, this is about your investment in your students. If, for example, you have students on a course with higher material and resource costs, or students who are commuting, then there is an argument for more in-kind support, with data underpinning that decision.

    Again, I am not a proponent of ‘one size fits all’. Understanding student need is complex, but solutions are out there. It is important to work together to identify patterns of real student need and to understand the benefits of doing so.

    My knowledge draws on JS Group’s data, based on the direct use of £40 million of specialist student financial support to more than 160,000 students across 30 UK universities in the last full academic cycle.

    I have also looked at the student views on such funding and there is an emerging picture that connects student financial support with continuation, participation and progress. A summary of student feedback is here: https://jsgroup.co.uk/news-and-views/news/student-feedback-report-january-2025/

    The real positive is that everyone shares the same goal: fewer students withdrawing from their courses, and those who stay thriving at university and being successful. We need to widen the debate on how financial support is delivered, when, and in what format, so that we build a better collective understanding of student need and behaviour and achieve that goal.

    Source link

  • 4 Considerations for Using Salary Data to Inform Compensation Decisions – CUPA-HR

    4 Considerations for Using Salary Data to Inform Compensation Decisions – CUPA-HR

    by Missy Kline | November 15, 2022

    Editor’s note: This blog post, originally published in April 2019, has been updated with additional resources and related content.

    Salary benchmarking is not one-size-fits-all — especially when you’re looking at groups as varied as administrators, professionals, staff and faculty on a college or university campus that is unique in its combination of Carnegie class, affiliation, regional location and mission. The question, then, is how to tailor your benchmarking efforts to take these variables into account and choose data that is appropriate to your unique needs.

    Here are four considerations to help you make the best use of salary data for compensation budget planning for your faculty and staff:

    1) Which institutions should your institution’s salaries be benchmarked against? Making the right comparisons — using position-specific data and carefully selected peers — can make all the difference when planning salaries that will make your institution competitive in the labor market. When you use CUPA-HR’s DataOnDemand (http://cupahr.org/surveys/dataondemand/), you can narrow down peer institutions by one or several institution-level criteria such as affiliation (public, private independent or private religious), Carnegie classification, enrollment size, geographic region, total expenses or other characteristics. Remember, balance is key: a larger comparison group gets you more robust data for comparison, but you must also make sure you are comparing to the right types of institutions that make sense for your goals. (A sketch of this filtering logic appears after this list.)

    2) Not all faculty are the same. Tenure track faculty, non-tenure track teaching faculty, non-tenure track research faculty and adjunct faculty may each require unique compensation strategies, as do faculty members from different disciplines and ranks. Will the same salary increase help retain both tenured and non-tenured faculty? Does collective bargaining impact salary targets for some, but not all, of these faculty sub-groups? Are there unique, fast-growing, or in-demand departments/disciplines that require a separate strategy?

    3) Keep in mind that administrator salaries are broadly competitive. Like faculty, many administrative positions in higher ed are competitive at a national level. Often, institutions seek administrators with experience at other institutions of a similar size or mission, and with this experience and mobility comes an expectation of a competitive salary. As higher ed moves toward a “business model” where innovative leadership strategies are displacing more traditional shared governance models, finding administrators with the appropriate skills and expertise is becoming increasingly competitive, not only within higher education but sometimes against the broader executive employment market.

    4) Employment competition varies for staff and professionals. Many non-exempt staff are hired from within local labor markets, and therefore other institutions or companies in your state or local Metropolitan Statistical Area might be a better salary comparison than a nationwide set of peer institutions. Exempt or professional staff, however, may be more limited to competition from the higher ed sector, perhaps on a state or regional level. In addition, changes brought about by the pandemic (e.g., remote work opportunities, a desire to relocate) have made many professional positions more globally competitive. Are your institution’s salaries for these employees appropriately scoped for the market in which you need to compete?
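
    As an illustration of the peer-selection logic described in the first consideration (not the DataOnDemand interface itself, which applies these filters for you), the sketch below uses hypothetical institution records and field names to show how institution-level criteria narrow a comparison group before a benchmark is computed:

        # Hypothetical peer-group filtering for salary benchmarking. The
        # records and field names are invented for illustration only.
        from statistics import median

        institutions = [
            {"name": "A", "affiliation": "public", "carnegie": "R2",
             "enrollment": 12_000, "median_salary": 61_000},
            {"name": "B", "affiliation": "public", "carnegie": "R2",
             "enrollment": 15_500, "median_salary": 64_500},
            {"name": "C", "affiliation": "private independent", "carnegie": "R2",
             "enrollment": 9_000, "median_salary": 70_000},
            {"name": "D", "affiliation": "public", "carnegie": "R1",
             "enrollment": 30_000, "median_salary": 78_000},
            {"name": "E", "affiliation": "public", "carnegie": "R2",
             "enrollment": 11_200, "median_salary": 59_800},
        ]

        def peer_group(data, affiliation, carnegie, min_enroll, max_enroll):
            """Select comparison institutions matching institution-level criteria."""
            return [
                inst for inst in data
                if inst["affiliation"] == affiliation
                and inst["carnegie"] == carnegie
                and min_enroll <= inst["enrollment"] <= max_enroll
            ]

        peers = peer_group(institutions, "public", "R2", 10_000, 20_000)
        benchmark = median(inst["median_salary"] for inst in peers)
        print(f"{len(peers)} peers; benchmark median salary: ${benchmark:,.0f}")

    Tightening any one criterion shrinks the peer group, which is exactly the robustness trade-off the first consideration warns about.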


    Additional Articles and Resources

    How One College Is Using Salary Data to Ensure Pay Equity and Market-Par Compensation

    Compensation Programs/Plans, Executive Compensation in Higher Ed, Equal Pay Act (CUPA-HR Toolkits)

    Working in a Fish Bowl: How One Community College System Navigated a Compensation Study in a Transparent Environment (Higher Ed HR Magazine)



    Source link