Tag: Data

  • Ed data goes dark: Why it matters (opinion)

    When President Donald Trump and Elon Musk’s Department of Government Efficiency set out to slash billions from the federal budget, it puzzled me that one of their first targets was an obscure data collection and research agency: the Institute of Education Sciences, a relatively modest operation buried deep in the corridors of the Department of Education, and one few had ever heard of. Since then, the newly installed secretary of education has ordered a review of all the department’s functions as part of what she ominously called the department’s “momentous final mission.”

    A conversation with a trusted colleague helped me understand the cuts to IES: the action, my colleague noted, should be seen as part of the playbook of a new breed of autocrats around the world who seek to control information to hide the impacts of their actions from the public. In contemporary authoritarian governments, control of information—or what has come to be known today as informational autocracy—often substitutes for brute force.

    Similar to how the Trump administration is seizing control of the White House press pool, canceling contracts for independent, high-quality education research is another way of controlling information. As Democratic lawmakers wrote in a Feb. 21 letter decrying the cuts, “The consequences of these actions will prevent the public from accessing accurate information about student demographics and academic achievement, abruptly end evaluations of federal programs that ensure taxpayer funds are spent wisely, and set back efforts to implement evidence-based reforms to improve student outcomes.”

    IES houses a vast warehouse of the nation’s education statistics. Data collected by the agency is used by policymakers, researchers, teachers and colleges to understand student achievement, enrollment and much more about the state of American education. With IES being among the largest funders of education research, cutting it limits public access to what’s happening in the nation’s schools and colleges.

    In the name of eliminating waste and corruption, Musk’s first round of cuts canceled what DOGE initially said were nearly $900 million in IES contracts (though, as subsequent reporting has revealed, DOGE’s math doesn’t add up and the canceled contracts appear to amount to much less). A second round purportedly sliced another $350 million in contracts and grants. It’s unclear how much more is destined to be chopped, since these may be only the first in a series of cuts designed to completely dismantle the Education Department. A department spokesperson initially said that the cuts would not affect the National Assessment of Educational Progress, a standardized test known as the nation’s report card, or the College Scorecard, which allows citizens to search for and compare information about colleges. We’ve since seen the cancellation of a national NAEP test for 17-year-olds.

    In the Obama years, public data helped reveal bad actors among for-profit colleges, which were receiving millions in federal aid while delivering inferior education to poor and working-class students who yearned for college degrees. Since so few actually completed their programs, what many got instead was crushing college debt. Luckily, good data helped drive nearly half of all for-profit programs to shut down. Publicly disseminated data exposes where things go wrong; you can’t track down con men without evidence.

    Ideally, in a well-functioning democracy with a richly informed public, data helps us reach informed decisions, fosters accountability and lets us hold officials responsible for their actions. With reliable information about what’s happening behind closed doors, we can understand what is going on and protest actions we oppose.

    Lately, however, things aren’t looking good. As Trump and his top officials have slashed race-conscious programs and moved to prohibit funding for certain areas of research, higher ed leadership has remained mostly silent, with only a handful of college presidents protesting. Most have shrunk into the wings, cowed by Trump’s power to defund institutions. Already there is the eerie feeling of having to watch your step.

    Shutting down potentially revealing data collection is perhaps the least worrisome page in an autocrat’s playbook. As Trump continues to follow the authoritarian path set by leaders in Hungary, Turkey and elsewhere, we should expect other, more damaging and more frightening higher ed moves that have been imposed by other autocrats—selecting college presidents, controlling faculty hiring and advancement, punishing academic dissent, imposing travel restrictions.

    Just a few months ago, there was comfort in knowing everything was there—data on enrollments, graduation rates, participation rates of women and other groups. All very neatly organized and accessible whenever you wanted. Even though some found IES technology old and clunky, it felt like higher ed was running according to a reliable scheme: you could go online and open data files the way you’d consult a railroad timetable. Without it, there might be a train wreck ahead and you wouldn’t know it until it was too late. Now these luxurious numbers may soon be lost, with decades of America’s academic history pitched into digital darkness.

    It’s frightening to realize that we’ll no longer be operating on solid intelligence. That we’ll no longer have guideposts, supported by racks of sensibly collected numbers to tell us if we’re on the right path or if we’re far afield. Trump’s wrecking ball has smashed our confidence, a confidence built on years of reliable data. We’ll soon be in the dark.

    Robert Ubell is vice dean emeritus of online learning at New York University’s Tandon School of Engineering and senior editor of CHLOE 9, the ninth national survey of higher ed chief online learning officers. A collection of his essays on virtual education, Staying Online: How to Navigate Digital Higher Education, was published by Routledge.

  • Equal Pay Day Data: On Average, Women in Higher Ed Are Paid 82 Cents on the Dollar

    by Christy Williams | March 5, 2025

    Since 1996, the National Committee on Pay Equity has acknowledged Equal Pay Day to bring awareness to the gap between men’s and women’s wages. This year, Equal Pay Day is March 25 — symbolizing how far into the year women must work to be paid what men were paid in the previous year.

    To help higher ed leaders understand, communicate and address gender pay equity in higher education, CUPA-HR has analyzed its annual workforce data to establish Higher Education Equal Pay Days for 2025. Tailored to the higher ed workforce, these dates observe the gender pay gap by marking how long into 2025 women in higher ed must work to make what White men in higher ed earned the previous year.

    Higher Education Equal Pay Day falls on March 8, 2025, for women overall, which means that women employees in higher education must work more than two months into this year to be paid what their White male colleagues earned in 2024. Women in the higher ed workforce are paid on average just 82 cents for every dollar a White man employed in higher ed makes.

    There is some positive momentum to highlight this Women’s History Month: some groups of women are closer to gaining pay equity. Asian American women in higher ed worked only a few days into this year to achieve parity on January 4 — an encouraging jump from January 14 in 2024.

    But the gender pay gap remains for most women, and particularly for women of color. Here’s the breakdown of the gender pay gap in the higher ed workforce, and the Higher Education Equal Pay Day for each group.* These dates remind us of the work we have ahead.

    • March 8 — Women in Higher Education Equal Pay Day. On average, women employees in higher education are paid 82 cents on the dollar.
    • January 4 — Asian Women in Higher Education Equal Pay Day. Asian women in higher ed are paid 99 cents on the dollar.
    • March 5 — White Women in Higher Education Equal Pay Day. White women in higher ed are paid 83 cents on the dollar.
    • March 29 — Native Hawaiian/Pacific Islander Women in Higher Education Equal Pay Day. Native Hawaiian/Pacific Islander women in higher ed are paid 76 cents on the dollar.
    • April 4 — Black Women in Higher Education Equal Pay Day. Black women in higher ed are paid 75 cents on the dollar.
    • April 11 — Hispanic/Latina Women in Higher Education Equal Pay Day. Hispanic/Latina women in higher ed are paid 73 cents on the dollar.
    • April 24 — Native American/Alaska Native Women in Higher Education Equal Pay Day. Native American/Alaska Native women are paid just 69 cents on the dollar.
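
    CUPA-HR doesn’t spell out its date arithmetic here, but the dates above are consistent with a simple mapping: mark off the unpaid share of the year, (1 − pay ratio) × 365 days, counting January 1 as the first day worked. Below is a minimal sketch under that assumption; the function name is ours, and because the published dates are computed from CUPA-HR’s unrounded ratios, the rounded cents-on-the-dollar figures can land a day or two off.

    ```python
    from datetime import date, timedelta

    def equal_pay_day(pay_ratio: float, year: int = 2025) -> date:
        """Day on which the unpaid share of last year is 'worked off'.

        pay_ratio is cents on the dollar expressed as a fraction, e.g. 0.82.
        January 1 counts as the first day worked.
        """
        extra_days = round(365 * (1 - pay_ratio))  # unpaid share of a year
        return date(year, 1, 1) + timedelta(days=extra_days - 1)

    print(equal_pay_day(0.99))  # 2025-01-04, matching the date for Asian women
    print(equal_pay_day(0.82))  # 2025-03-07, within a day of the published March 8
    ```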

    CUPA-HR research shows that pay disparities exist across employment sectors in higher ed — administrators, faculty, professionals and staff — even as the representation of women and people of color has steadily increased. But with voluntary turnover still not back to pre-pandemic levels, not addressing pay disparities could be costly.

    CUPA-HR Resources for Higher Education Equal Pay Days

    As we observe Women’s History Month and Higher Education Equal Pay Days for women, we’re reminded that the quest for equal pay is far from over. But data-driven analysis with the assistance of CUPA-HR research can support your work to create a more equitable future.

    CUPA-HR’s interactive graphics track the gender and racial composition of the higher ed workforce, based on data from CUPA-HR’s signature surveys. Its pay equity analyses control for position, indicating that any wage gaps present are not explained by the fact that women or people of color may have greater representation in lower-paying positions.

    *Data Source: 2024-25 CUPA-HR Administrators, Faculty, Professionals, and Staff in Higher Education Surveys. Drawn from 707,859 men and women for whom race/ethnicity was known.

  • International Student Aspirations Increasingly Align With The Skills Needed To Propel UK Growth, ApplyBoard’s Internal Data Shows

    • Justin Wood is Director, UK at ApplyBoard.

    Millions of international students have used the ApplyBoard platform to search for international study opportunities.[1] For many of these students, searching for courses in Australia, Canada, Ireland, the United Kingdom, and the United States is one of the first steps in their study abroad journey. This proprietary search data reveals a leading indicator of changing student preferences.

    What UK Fields of Study did International Students Search for in 2024?

    After the Sunak Government announced tighter rules on international student dependants and a review of the graduate route, the UK saw a significant contraction in interest from international students in 2024—applications declined by 14% year-over-year, while dependant applications dropped by 84%. The good news for a struggling sector is that early signs point to positive momentum in 2025, with higher enrolments at many of the institutions that offer a January intake. Enroly data suggests a 23% increase in January 2025 compared to January 2024, and ApplyBoard has experienced growth at three times this rate.

    ApplyBoard’s search trends reinforce these early signs: interest in UK courses jumped 25% in 2024 vs. 2023. With search behaviour often signaling future application trends, this surge suggests the UK’s positive momentum in early 2025 could continue throughout the year. Beyond this overall growth, shifting field-of-study preferences highlight how international applicants are adapting to the UK’s changing landscape:

    Health fields saw the largest proportional increase among UK searches, climbing nearly four percentage points to 12.8% of all searches. This growing interest aligns with the UK’s expanding healthcare sector, which is projected to add 349,000 jobs by 2035, growing 7% from 2025. Likewise, the information technology sector is expected to grow 8% over the next decade, which aligns with shifting student preferences—ApplyBoard platform data shows Engineering and Technology accounted for 17% of searches in 2024, up two percentage points year-over-year.

    Interest in the Sciences also expanded, rising from 13% in 2023 to 16% in 2024. Alongside the gains in Health and Engineering and Technology, this shift underscores how international student priorities are increasingly aligning with long-term global workforce demands.

    How International Students are Navigating UK Study Fields

    This alignment comes at a time when interest in UK courses is rising. Interest grew significantly among several key student populations in 2024, with searches from students in Bangladesh, Sri Lanka, Ghana, and Saudi Arabia doubling year-over-year. Searches from Nigeria and Pakistan also saw substantial gains, rising 66% and 40%, respectively. Nepalese students showed the most dramatic increase of all, with searches tripling compared to 2023.

    Further supporting the possibility that the UK’s positive momentum in January 2025 will continue throughout the year, searches from most key student demographics reached an all-time monthly high in either December 2024 or January 2025.

    The graphic below illustrates how major student populations explored different fields of study in the UK on the ApplyBoard platform last year:

    Student interest in Health fields was strongest among Ghanaian (22%), Nigerian (20%), and Saudi Arabian students (16%). Compared to the previous year, the share of searches for this field rose by six percentage points among Ghanaian students and five percentage points among Nigerian students. Additionally, the proportion of Health searches among Sri Lankan students doubled over this period.

    By comparison, the Sciences were a priority across all nine student populations, making up at least 14% of UK course searches. Students from Pakistan (18%), Saudi Arabia (18%), and Bangladesh (16%) had the highest proportion of Science-related searches. Notably, seven of the nine key student populations devoted a greater share of their searches to the Sciences in 2024 than in the previous year.

    Engineering and Technology also accounted for at least 14% of searches among these major student populations, although Sri Lankan (29%), Saudi Arabian (26%), and Chinese (23%) students showed the highest engagement in this field. Additionally, eight of the nine key student populations allocated a larger share of their searches to Engineering and Technology in 2024. As student interest in UK courses continues to grow, institutions can strengthen their appeal by aligning their portfolio with evolving student priorities and workforce needs.

    The UK’s Edge: Where Student Interest Outpaces Canada and the US

    Understanding where the UK sees higher proportional interest in key fields of study compared to Canada and the US can reveal important competitive advantages for institutions and better inform strategic recruitment strategies. This interactive visualization allows you to explore student interest by field and destination, filterable by top student populations:

    Health-related fields accounted for 25% of searches for UK institutions among Filipino students—three percentage points higher than their searches for Canada and the US. Likewise, 22% of Ghanaian students were interested in UK-based Health courses, outpacing the interest shown for both Canadian (21%) and American (20%) options.

    In Engineering and Technology, 29% of Sri Lankan students’ searches for UK courses were in this field—matching their interest in US study but well surpassing their searches for Canada (24%).

    Social-related fields like Law, Social Sciences, and Teaching captured 10% of Pakistani searches for the UK, outpacing those for Canada (6%) and the US (7%). A similar trend occurred among Bangladeshi students, with social-related fields capturing 10% of their UK searches, compared to 7% for Canada and 6% for the US.

    Leveraging Search Trends to Shape Future Recruitment

    Search trends serve as a leading indicator of shifting student interest, often signaling future application patterns. The surge in searches for UK courses—particularly in high-demand fields like health, engineering, and sciences—suggests a growing alignment between student priorities and workforce needs. By analysing these trends, institutions can proactively refine course offerings and recruitment strategies to attract top international talent. As demand continues to evolve, leveraging real-time search insights enables institutions to stay ahead of market shifts, ensuring they meet student expectations while strengthening their global competitiveness. Understanding where the UK holds a competitive edge will be key to optimizing outreach and course development in 2025 and beyond.


    [1] In the past, ApplyBoard platform search data was generated based on button clicks on a page, while the new search data is generated by any changes made to the page’s filters (destination, field of study, etc.). As a result, the new search count, if tallied using the previous approach, would be significantly inflated compared to the original search count. To make the counts more comparable, we changed our methodology as of August 2024 to use unique entries per user within each hour.
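
    As a rough illustration of that hourly deduplication rule, here is a minimal sketch; the event schema (the column names, and the idea that a search is defined by its filter values) is our assumption, not ApplyBoard’s published specification.

    ```python
    import pandas as pd

    # Hypothetical event log: one row per filter change on the search page.
    events = pd.DataFrame({
        "user_id": [1, 1, 1, 2],
        "timestamp": pd.to_datetime([
            "2024-08-01 09:05", "2024-08-01 09:40",
            "2024-08-01 10:10", "2024-08-01 09:15",
        ]),
        "destination": ["UK", "UK", "UK", "Canada"],
        "field_of_study": ["Health", "Health", "Health", "Sciences"],
    })

    # Collapse to unique entries per user within each hour, so a burst of
    # filter tweaks does not inflate the count relative to click-based data.
    events["hour"] = events["timestamp"].dt.floor("h")
    searches = events.drop_duplicates(
        subset=["user_id", "hour", "destination", "field_of_study"]
    )
    print(len(searches))  # 3 unique searches rather than 4 raw events
    ```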

  • Federal judge bars DOGE from accessing student data

    A federal judge temporarily barred Elon Musk’s Department of Government Efficiency from accessing sensitive student data on Monday, after the American Federation of Teachers sued over privacy concerns. 

    The judge, Deborah Boardman of the U.S. District Court for the District of Maryland, said the federal government had not provided convincing evidence that DOGE needed the information to achieve its goals. Last week, in a separate case brought by the University of California Student Association against the Education Department, a different judge declined to bar DOGE from accessing student data, saying the plaintiffs hadn’t shown any harm done. But Boardman, a Biden appointee, ruled that giving DOGE staff access was itself enough to merit the injunction.

    Education Department staff and student advocates raised concerns about DOGE employees’ access to student loan and financial aid data, which includes troves of uniquely sensitive, personally identifiable information. The injunction prevents the office from executing what Musk has referred to as an “audit” of the student loan system for at least two weeks while the lawsuit is ongoing, as well as from accessing financial aid data.

    “We brought this case to uphold people’s privacy, because when people give their financial and other personal information to the federal government—namely to secure financial aid for their kids to go to college, or to get a student loan—they expect that data to be protected,” AFT president Randi Weingarten wrote in a statement. 

    The court-ordered stoppage is the latest in a string of injunctions issued against Musk and the Trump administration in recent weeks, as lawsuits pile up against the administration’s attempts to swiftly upend the federal bureaucracy. On Friday, a federal judge blocked Trump from enforcing large parts of his executive order against diversity, equity and inclusion initiatives.

  • Data stories from Achieving the Dream’s latest award winners

    Each year, Achieving the Dream lifts up at least one community college in its network for adopting practices and strategies leading to a student-focused culture, notable increases in student outcomes and a reduction of equity gaps.

    To be eligible for the Leah Meyer Austin Award, an institution must demonstrate four-year improvement of at least three percentage points in IPEDS on-time completion for the level of credential awarded, or have been selected as one of the top 150 colleges in the Aspen Prize for Community College Excellence. The achievements of this year’s honorees—Chattanooga State Community College in Tennessee and Southwestern Oregon Community College—show how a holistic approach to student success that runs throughout the institution can result in whole-college transformation.

    Setting the bar: In evaluating applicants, ATD considers gateway metrics, including leading indicators (early momentum metrics) and lagging indicators (completion or transfer), with substantial improvement of three percentage points or more over three years.

    Equity metrics may highlight data such as the equity gap improvement between part-time and full-time student outcomes or between Pell-eligible and non-Pell-eligible students. Substantial improvement means closing or narrowing equity gaps over three years by at least two percentage points.
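
    In code, both benchmarks reduce to simple percentage-point arithmetic. A minimal sketch follows; the rates below are illustrative, not drawn from the award data.

    ```python
    def equity_gap(rate_reference: float, rate_group: float) -> float:
        """Gap in percentage points between a reference group and a comparison group."""
        return rate_reference - rate_group

    # Outcome improvement: at least three percentage points over the period.
    improvement = 58.0 - 54.2             # later completion rate minus earlier
    print(improvement >= 3.0)             # True: 'substantial improvement'

    # Equity-gap narrowing: at least two percentage points over three years.
    gap_then = equity_gap(60.0, 51.8)     # 8.2-point gap in the earlier cohort
    gap_now = equity_gap(62.0, 56.8)      # 5.2-point gap in the later cohort
    print(gap_then - gap_now >= 2.0)      # True: the gap narrowed by 3.0 points
    ```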

    The following data demonstrate not just what Chattanooga State Community College and Southwestern Oregon Community College did to earn their honor, but also ways that other institutions can tell their own data stories.

    Chattanooga State Community College actions and results: The Vision 2027 strategic plan has inspired a shift from 15-week to seven-week terms, more personalized academic advising, strengthened commitments to basic needs assistance and wraparound support services, and implementation of an affordable course materials program.

    • Fall-to-fall persistence rate from the fall 2019 cohort to the fall 2022 cohort saw a 7.1-percentage-point gain.
    • The credit completion rate jumped from 54.6 percent among the 2020 fall cohort to 66.4 percent among the fall 2023 cohort.
    • Articulation agreements and course road maps related to Tennessee Transfer Pathways resulted in an 8.2-percentage-point climb, from the fall 2015 cohort to the fall 2018 cohort, in the rate of students who transfer and earn a baccalaureate degree within six years of matriculating.
    • The adoption of a co-requisite model, with embedded tutors, for gateway English and math courses led to a rise in gateway math completion from 38.5 percent for the fall 2020 cohort to 49.5 percent for the fall 2023 cohort. Completion rates for gateway English courses, meanwhile, grew from 49.3 percent to 66.6 percent in that time frame. Approximately 45 to 48 percent of the college’s student population is still developing essential college-level academic skills.

    Southwestern Oregon Community College actions and results: This rural institution’s recent efforts have included engaging and supporting its community’s adult and part-time learner populations, such as by creating targeted student orientations, evaluating community practices and its portfolio of academic and workforce programs, meeting the special financial needs of first-generation adult learners, and improving online services (40 percent of Southwestern’s overall student body are online learners).

    • In comparing the 2017 cohort to the 2020 cohort, the four-year completion rate among part-time learners improved by 8.7 percentage points, narrowing the equity gap between part-time and full-time learners by 3.2 percentage points. Between adult learners and traditional-aged learners, the gap narrowed by 6.7 percentage points, as the completion rate among adult learners rose 12.3 percentage points.
    • The equity gap between first-generation and continuing-generation learners in fall-to-fall persistence narrowed by three percentage points, from an 8.2-point gap for the fall 2019 cohort to a 5.2-point gap for the fall 2022 cohort.
    • From the fall 2017 cohort to the fall 2020 cohort, the overall four-year completion rate grew 6.6 percentage points, and the rate at which students transfer and earn a baccalaureate degree (despite severe geographical hardships) rose 3.7 percentage points from the fall 2015 cohort to the fall 2018 cohort.

    More information on both winners can be found here. In a March 31 webinar, Achieving the Dream will feature both winners.

    Is your institution or department tracking new KPIs related to student success, or using data in a new way? Tell us about it.

  • Federal judge gives DOGE access to education data

    The University of California Student Association’s request to block Department of Government Efficiency staffers from accessing student data at the Department of Education was denied Monday by a federal district judge. 

    The lawsuit, filed earlier this month, accused the department of illegally sharing confidential student data, arguing it violated the 1974 Privacy Act and confidentiality provisions of the Internal Revenue Code by giving DOGE access to records that contain tax information. 

    But Judge Randolph D. Moss of the District Court for the District of Columbia said there wasn’t an immediate threat, citing testimony from Adam Ramada, a DOGE staffer, who said that he and his team were only assisting the department with auditing for waste, fraud and abuse and that DOGE staffers understood the need to comply with data privacy laws. 

    “None of those initiatives should involve disclosure of any sensitive, personal information about any UCSA members,” Moss, an Obama appointee, wrote in his ruling. “The future injuries that UCSA’s members fear are, therefore, far from likely, let alone certain and great.”

    Other higher education groups have raised concerns about DOGE’s access to education data, as the department’s databases house students’ personal information, including dates of birth, contact information and Social Security numbers. Some student advocates worry the data could be illegally shared with other agencies and used for immigration enforcement. Moss, however, called those harms “entirely conjectural,” saying Ramada had attested that the data was not being used in such ways.

    Although the temporary restraining order was denied, the overall lawsuit will continue to work its way through the courts, and other legal challenges are emerging, The Washington Post reported.

    A coalition of labor unions, including the American Federation of Teachers, is also suing to block DOGE’s access to the sensitive data. This latest lawsuit argues that agencies—including Education, Labor and Personnel Management—are improperly disclosing the records of millions of Americans in violation of the Privacy Act.

  • Embracing a growth mindset when reviewing student data

    In the words of Carol Dweck, “Becoming is better than being.” As novice sixth grade math and English teachers, we’ve learned to approach our mid-year benchmark assessments not as final judgments but as tools for reflection and growth. Many of our students entered the school year below grade level, and while achieving grade-level mastery is challenging, a growth mindset allows us to see their potential, celebrate progress, and plan for further successes. This perspective transforms data analysis into an empowering process; for our students, data becomes a tool for improvement rather than a measure of failure.

    A growth mindset is the belief that abilities grow through effort and persistence. This mindset shapes how we view data. Instead of focusing on what students can’t do, we emphasize what they can achieve. For us, this means turning gaps into opportunities for growth and modeling optimism and resilience for our students. When reviewing data, we don’t dwell on weaknesses. We set small and achievable goals to help students move forward to build confidence and momentum.

    Celebrating progress is vital. Even small wins (e.g., moving from a kindergarten level to a 1st- or 2nd-grade level, or significant growth in one domain) are causes for recognition. Highlighting these successes motivates students and shows them that effort leads to results.

    Involving students in the process is also advantageous. At student-led conferences, our students presented their data via slideshows that they created after they reviewed their growth, identified their strengths, and generated next steps with their teachers. This allowed them to take tremendous ownership over their learning. In addition, interdisciplinary collaboration at our weekly professional learning communities (PLCs) has strengthened this process. To support our students who struggle in English and math, we work together to address overlapping challenges (e.g., teaching math vocabulary, chunking word problems) to ensure students build skills in connected and meaningful ways.

    We also address the social-emotional side of learning. Many students come to us with fixed mindsets, believing they’re just “bad at math” or “not good readers.” We counter this by celebrating effort, normalizing struggle, and creating a safe and supportive environment where mistakes are part of learning. Progress is often slow, but it’s real. Students may not reach grade-level standards in one year, but gains in confidence, skills, and mindset set the stage for future success, as evidenced by our students’ mid-year benchmark results. We emphasize the concept of a “growth mindset” because, in the words of Denzel Washington, “The road to success is always under construction.” By embracing growth and seeing potential in every student, we create the conditions for improvement, resilience, hope, and a brighter future.

  • Data, Decisions, and Disruptions: Inside the World of University Rankings

    University rankings are pretty much everywhere. The earliest university rankings in the U.S. date back to the early 1900s, and the modern ones to the 1983 debut of the U.S. News and World Report rankings. But the kind of rankings we tend to talk about now, international or global rankings, really only date back to 2003, with the creation of the Shanghai Academic Ranking of World Universities.

    Over the decade that followed that first publication, a triumvirate emerged at the top of the rankings pyramid: the Shanghai Rankings, run by a group of academics at Shanghai Jiao Tong University; the Quacquarelli Symonds, or QS, Rankings; and the Times Higher Education World University Rankings. Between them, these three rankings producers, particularly QS and Times Higher, created a bewildering array of new rankings, dividing the world up by geography and field of study, mainly based on metrics relating to research.

    Joining me today is the former Chief Data Officer of the Times Higher Education Rankings, Duncan Ross. He took over those rankings at a time when it seemed like the higher education world might be running out of things to rank. Under his leadership, though, the Times Higher Impact Rankings, which are based around the 17 UN Sustainable Development Goals, were developed. And that’s created a genuinely new hierarchy in world higher education, at least among those institutions that choose to submit to the rankings.

    My discussion with Duncan today covers a wide range of topics related to his time at THE. But the most enjoyable bit by far, for me anyway, was the bit about the genesis of the Impact Rankings. Listen especially to the part where Duncan talks about how the Impact Rankings came about because THE realized that its industry rankings weren’t very reliable. Fun fact: around that time I got into a very public debate with Phil Baty, the editor of the Times Higher, on exactly that subject. Which means maybe, just maybe, I’m kind of a godparent to the Impact Rankings. But that’s just me. You may well find other points of interest in this very compelling interview. Let’s hand things over to Duncan.


    The World of Higher Education Podcast
    Episode 3.20 | Data, Decisions, and Disruptions: Inside the World of University Rankings 

    Transcript

    Alex Usher: So, Duncan, let’s start at the beginning. I’m curious—what got you into university rankings in the first place? How did you end up at Times Higher Education in 2015?

    Duncan Ross: I think it was almost by chance. I had been working in the tech sector for a large data warehousing company, which meant I was working across many industries—almost every industry except higher education. I was looking for a new challenge, something completely different. Then a friend approached me and mentioned a role that might interest me. So I started talking to Times Higher Education, and it turned out it really was a great fit.

    Alex Usher: So when you arrived at Times Higher in 2015, the company already had a pretty full set of rankings products, right? They had the global rankings, the regional rankings, which I think started around 2010, and then the subject or field of study rankings came a couple of years later. When you looked at all of that, what did you think? What did you feel needed to be improved?

    Duncan Ross: Well, the first thing I had to do was actually bring all of that production in-house. At the time, even though Times Higher had rankings, they were produced by Clarivate—well, Thomson Reuters, as it was then. They were doing a perfectly good job, but if you’re not in control of the data yourself, there’s a limit to what you can do with it.

    Another key issue was that, while it looked like Times Higher had many rankings, in reality, they had just one: the World University Rankings. The other rankings were simply different cuts of that same data. And even within the World University Rankings, only 400 universities were included, with a strong bias toward Europe and North America. About 26 or 27 percent of those institutions were from the U.S., which didn’t truly reflect the global landscape of higher education.

    So the challenge was: how could we broaden our scope and truly capture the world of higher education beyond the usual suspects? And beyond that, were there other aspects of universities that we could measure, rather than just relying on research-centered metrics? There are good reasons why international rankings tend to focus on research—it’s the most consistent data available—but as you know, it’s certainly not the only way to define excellence in higher education.

    Alex Usher: Oh, yeah. So how did you address the issue of geographic diversity? Was it as simple as saying, “We’re not going to limit it to 400 universities—we’re going to expand it”? I think the ranking now includes over a thousand institutions, right? I’ve forgotten the exact number.

    Duncan Ross: It’s actually around 2,100 or so, and in practice, the number is even larger because, about two years ago, we introduced the concept of reporter institutions. These are institutions that haven’t yet met the criteria to be fully ranked but are already providing data.

    The World University Rankings have an artificial limit because there’s a threshold for participation based on the number of research articles published. That threshold is set at 1,000 papers over a five-year period. If we look at how many universities could potentially meet that criterion, it’s probably around 3,000, and that number keeps growing. But even that is just a fraction of the higher education institutions worldwide. There are likely 30,000—maybe even 40,000—higher education institutions globally, and that’s before we even consider community colleges.

    So, expanding the rankings was about removing artificial boundaries. We needed to reach out to institutions in parts of the world that weren’t well represented and think about higher education in a way that wasn’t so Anglo-centric.

    One of the biggest challenges I’ve encountered—and it’s something people inevitably fall into—is that we tend to view higher education through the lens of our own experiences. But higher education doesn’t function the same way everywhere. It’s easy to assume that all universities should look like those in Canada, the U.S., or the UK—but that’s simply not the case.

    To improve the rankings, we had to be open-minded, engage with institutions globally, and carefully navigate the challenges of collecting data on such a large scale. As a result, Times Higher Education now has data on around 5,000 to 6,000 universities—a huge step up from the original 400. Still, it’s just a fraction of the institutions that exist worldwide.

    Alex Usher: Well, that’s exactly the mission of this podcast—to get people to think beyond an Anglo-centric view of the world. So I take your point that, in your first couple of years at Times Higher Education, most of what you were doing was working with a single set of data and slicing it in different ways.

    But even with that, collecting data for rankings isn’t simple, right? It’s tricky, you have to make a lot of decisions, especially about inclusion—what to include and how to weight different factors. And I think you’ve had to deal with a couple of major issues over the years—one in your first few years and another more recently.

    One was about fractional counting of articles, which I remember went on for quite a while. There was that big surge of CERN-related articles, mostly coming out of Switzerland but with thousands of authors from around the world, which affected the weighting. That led to a move toward fractional weighting, which in theory equalized things a bit—but not everyone agreed.

    More recently, you’ve had an issue with voting, right? What I think was called a cartel of voters in the Middle East, related to the reputation rankings. Can you talk a bit about how you handle these kinds of challenges?

    Duncan Ross: Well, I think the starting point is that we’re always trying to evaluate things in a fair and consistent way. But inevitably, we’re dealing with a very noisy and messy world.

    The two cases you mentioned are actually quite different. One is about adjusting to the norms of the higher education sector, particularly in publishing. A lot of academics, especially those working within a single discipline, assume that publishing works the same way across all fields—that you can create a universal set of rules that apply to everyone. But that’s simply not the case.

    For example, the concept of a first author doesn’t exist in every discipline. Likewise, in some fields, the principal investigator (PI) is always listed at the end of the author list, while in others, that’s not the norm.

    One of the biggest challenges we faced was in fields dealing with big science—large-scale research projects involving hundreds or even thousands of contributors. In high-energy physics, for example, a decision was made back in the 1920s: everyone who participates in an experiment above a certain threshold is listed as an author in alphabetical order. They even have a committee to determine who meets that threshold—because, of course, it’s academia, so there has to be a committee.

    But when you have 5,000 authors on a single paper, that distorts the rankings. So we had to develop a mechanism to handle that. Ideally, we’d have a single metric that works in all cases—just like in physics, where we don’t use one model of gravity in some situations and a different one in others. But sometimes, you have to make exceptions. Now, Times Higher Education is moving toward more sophisticated bibliometric measures to address these challenges in a better way.
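
    For readers unfamiliar with the term, fractional counting splits one paper’s credit across contributors in proportion to their share of the author list, instead of handing every participating institution full credit. A minimal sketch of the idea follows; as Duncan notes, THE’s actual bibliometric treatment is more sophisticated than this.

    ```python
    def fractional_credit(authors_by_institution: dict[str, int]) -> dict[str, float]:
        """Split a single paper's credit across institutions in proportion
        to their share of the author list (fractional counting)."""
        total = sum(authors_by_institution.values())
        return {inst: n / total for inst, n in authors_by_institution.items()}

    # A CERN-style paper with a huge author list no longer gives every
    # participating institution a full publication's worth of credit.
    print(fractional_credit({"CERN": 120, "MIT": 3, "Oxford": 2}))
    ```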

    The second issue you mentioned—the voting behavior in reputation rankings—is completely different because it involves inappropriate behavior. And this kind of issue isn’t just institutional; sometimes, it’s at the individual academic level.

    We’re seeing this in publishing as well, where some academics are somehow producing over 200 articles a year. Impressive productivity, sure—but is it actually viable? In cases like this, the approach has to be different. It’s about identifying and penalizing misbehavior.

    At the same time, we don’t want to be judge and jury. It’s difficult because, often, we can see statistical patterns that strongly suggest something is happening, but we don’t always have a smoking gun. So our goal is always to be as fair and equitable as possible while putting safeguards in place to maintain the integrity of the rankings.

    Alex Usher: Duncan, you hinted at this earlier, but I want to turn now to the Impact Rankings. This was the big initiative you introduced at Times Higher Education. Tell us about the genesis of those rankings—where did the idea come from? Why focus on impact? And why the SDGs?

    Duncan Ross: It actually didn’t start out as a sustainability-focused project. The idea came from my colleague, Phil Baty, who had always been concerned that the World University Rankings didn’t include enough measurement around technology transfer.

    So, we set out to collect data from universities on that—looking at things like income from consultancy and university spin-offs. But when the data came back, it was a complete mess—totally inconsistent and fundamentally unusable. So, I had to go back to the drawing board.

    That’s when I came across SDG 9—Industry, Innovation, and Infrastructure. I looked at it and thought, This is interesting. It was compelling because it provided an external framework.

    One of the challenges with ranking models is that people always question them—Is this really a good model for excellence? But with an external framework like the SDGs, if someone challenges it, I can just point to the United Nations and say, Take it up with them.

    At that point, I had done some data science work and was familiar with the tank problem, so I jokingly assumed there were probably 13 to 18 SDGs out there. (That’s a data science joke—those don’t land well 99% of the time.) But as it turned out, there were more SDGs, and exploring them was a real light bulb moment.
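
    (Editor’s note, for anyone who missed the joke: in the classic German tank problem, if the largest serial number observed in a sample of size $k$ is $m$, the standard frequentist estimate of the total is

    $$\hat{N} = m\left(1 + \frac{1}{k}\right) - 1,$$

    so “observing” only SDG 9, with $m = 9$ and $k = 1$, gives an estimate of 17 SDGs, with wide uncertainty on either side. That is presumably the calculation behind the 13-to-18 quip, and 17 is in fact exactly how many SDGs there are.)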

    The SDGs provided a powerful framework for understanding the most positive role universities can play in the world today. We all know—well, at least those of us outside the U.S. know—that we’re facing a climate catastrophe. Higher education has a crucial role to play in addressing it.

    So, the question became: How can we support that? How can we measure it? How can we encourage better behavior in this incredibly important sector?

    Alex Usher: The Impact Rankings are very different in that roughly half of the indicators—about 240 to 250 across all 17 SDGs—aren’t naturally quantifiable. Instead, they’re based on stories.

    For example, an institution might submit, This is how we combat organized crime or This is how we ensure our food sourcing is organic. These responses are scored based on institutional submissions.

    Now, I don’t know exactly how Times Higher Education evaluates them, but there has to be a system in place. How do you ensure that these institutional answers—maybe 120 to 130 per institution at most—are scored fairly and consistently when you’re dealing with hundreds of institutions?

    Duncan Ross: Well, I can tell you that this year, over 2,500 institutions submitted approved data—so it’s grown significantly. One thing to clarify, though, is that these aren’t written-up reports like the UK’s Teaching Excellence Framework, where universities can submit an essay justifying why they didn’t score as well as expected—what I like to call the dog ate my student statistics paper excuse. Instead, we ask for evidence of the work institutions have done. That evidence can take different forms—sometimes policies, sometimes procedures, sometimes concrete examples of their initiatives. The scoring process itself is relatively straightforward. First, we give some credit if an institution says they’re doing something. Then, we assess the evidence they provide to determine whether it actually supports their claim. But the third and most important part is that institutions receive extra credit if the evidence is publicly available. If you publish your policies or reports, you open yourself up to scrutiny, which adds accountability.

    A great example is SDG 5—Gender Equality—specifically around gender pay equity. If an institution claims to have a policy on gender pay equity, we check: Do you publish it? If so, and you’re not actually living up to it, I’d hope—and expect—that women within the institution will challenge you on it. That’s part of the balancing mechanism in this process.

    Now, how do we evaluate all this? Until this year, we relied on a team of assessors. We brought in people, trained them, supported them with our regular staff, and implemented a layer of checks—such as cross-referencing responses against previous years. Ultimately, human assessors were making the decisions.

    This year, as you might expect, we’re introducing AI to assist with the process. AI helps us filter out straightforward cases, leaving the more complex ones for human assessors. It also ensures that we don’t run into assessor fatigue. When someone has reviewed 15 different answers to the same question from various universities, the process can get a bit tedious—AI helps mitigate that.
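
    The tiered credit Duncan describes reduces to a simple nested rule. Here is a minimal sketch; the point values are invented for illustration and are not THE’s actual weights.

    ```python
    def score_indicator(claimed: bool, evidence_supports: bool,
                        evidence_public: bool) -> int:
        """Tiered credit: some for a claim, more if the evidence backs it,
        extra if that evidence is published and open to public scrutiny."""
        score = 0
        if claimed:
            score += 1
            if evidence_supports:
                score += 1
                if evidence_public:
                    score += 1
        return score

    print(score_indicator(True, True, False))  # 2: supported claim, not public
    print(score_indicator(True, True, True))   # 3: full credit
    ```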

    Alex Usher: Yeah, it’s like that experiment with Israeli judges, right? You don’t want to be the last case before lunch—you get a much harsher sentence if the judge is making decisions on an empty stomach. I imagine you must have similar issues to deal with in rankings.

    I’ve been really impressed by how enthusiastically institutions have embraced the Impact Rankings. Canadian universities, in particular, have really taken to them. I think we had four of the top ten last year and three of the top ten this year, which is rare for us. But the uptake hasn’t been as strong—at least not yet—in China or the United States, which are arguably the two biggest national players in research-based university rankings. Maybe that’s changing this year, but why do you think the reception has been so different in different parts of the world? And what does that say about how different regions view the purpose of universities?

    Duncan Ross: I think there’s definitely a case that different countries and regions have different approaches to the SDGs. In China, as you might expect, interest in the rankings depends on how well they align with current Communist Party priorities. You could argue that something similar happens in the U.S. The incoming administration has made it fairly clear that SDG 10 (Reduced Inequalities) and SDG 5 (Gender Equality) are not going to be top priorities—probably not SDG 1 (No Poverty), either. So in some cases, a country’s level of engagement reflects its political landscape.

    But sometimes, it also reflects the economic structure of the higher education system itself. In the U.S., where universities rely heavily on high tuition fees, rankings are all about attracting students. And the dominant ranking in that market is U.S. News & World Report—the 600-pound gorilla. If I were in their position, I’d focus on that, too, because it’s the ranking that brings in applications.

    In other parts of the world, though, rankings serve a different purpose. This ties back to our earlier discussion about different priorities in different regions. Take Indonesia, for example. There are over 4,000 universities in the country. If you’re an institution like ITS (Institut Teknologi Sepuluh Nopember), how do you stand out? How do you show that you’re different from other universities?

    For them, the Impact Rankings provided an opportunity to showcase the important work they’re doing—work that might not have been recognized in traditional rankings. And that’s something I’m particularly proud of with the Impact Rankings. Unlike the World University Rankings or the Teaching Rankings, it’s not just the usual suspects at the top.

    One of my favorite examples is Western Sydney University. It’s a fantastic institution. If you’re ever in Sydney, take the train out there. Stay on the train—it’s a long way from the city center—but go visit them. Look at the incredible work they’re doing, not just in sustainability but also in their engagement with Aboriginal and Torres Strait Islander communities. They’re making a real impact, and I’m so pleased that we’ve been able to raise the profile of institutions like Western Sydney—universities that might not otherwise get the recognition they truly deserve.

    Alex Usher: But you’re still left with the problem that many institutions that do really well in research rankings have, in effect, boycotted the Impact Rankings—simply because they’re not guaranteed to come first.

    A lot of them seem to take the attitude of, Why would I participate in a ranking if I don’t know I’ll be at the top?

    I know you initially faced that issue with LERU (the League of European Research Universities), and I guess the U.S. is still a challenge, with lower participation numbers.

    Do you think Times Higher Education will eventually crack that? It’s a tough nut to crack. I mean, even the OECD ran into the same resistance—it was the same people saying, Rankings are terrible, and we don’t want better ones.

    What’s your take on that?

    Duncan Ross: Well, I’ve got a brief anecdote about this whole rankings boycott approach. There’s one university—I’m not going to name them—that made a very public statement about withdrawing from the Times Higher Education World University Rankings. And just to be clear, that’s something you can do, because participation is voluntary—not all rankings are. So, they made this big announcement about pulling out. Then, about a month later, we got an email from their graduate studies department asking, Can we get a copy of your rankings? We use them to evaluate applicants for interviews. So, there’s definitely some odd thinking at play here. But when it comes to the Impact Rankings, I’m pretty relaxed about it. Sure, it would be nice to have Oxford or Harvard participate—but MIT does, and they’re a reasonably good school, I hear. Spiderman applied there, so it’s got to be decent. The way I see it, the so-called top universities already have plenty of rankings they can focus on. If we say there are 300 top universities in the world, what about the other 36,000 institutions?

    Alex Usher: I just want to end on a slightly different note. While doing some background research for this interview, I came across your involvement in DataKind—a data charity that, if I understand correctly, you founded. I’ve never heard of a data charity before, and I find the idea fascinating—intriguing enough that I’m even thinking about starting one here. Tell us about DataKind—what does it do?

    Duncan Ross: Thank you! So, DataKind was actually founded in the U.S. by Jake Porway. I first came across it at one of the early big data conferences—O’Reilly’s Strata Conference in New York. Jake was talking about how data could be used for good, and at the time, I had been involved in leadership roles at several UK charities. It was a light bulb moment. I went up to Jake and said, Let me start a UK equivalent! At first, he was noncommittal—he said, Yeah, sure… someday. But I just kept nagging him until eventually, he gave in and said yes. Together with an amazing group of people in the UK—Fran Bennett, Caitlin Thaney, and Stuart Townsend—we set up DataKind UK.

    The concept is simple: we often talk about how businesses—whether in telecom, retail, or finance—use data to operate more effectively. The same is true in the nonprofit sector. The difference is that banks can afford to hire data scientists—charities often can’t. So, DataKind was created to connect data scientists with nonprofit organizations, allowing them to volunteer their skills.

    Of course, for this to work, a charity needs a few things:

    1. Leadership willing to embrace data-driven decision-making.
    2. A well-defined problem that can be analyzed.
    3. Access to data—because without data, we can’t do much.

    Over the years, DataKind—both in the U.S. and worldwide—has done incredible work. We’ve helped nonprofits understand what their data is telling them, improve their use of resources, and ultimately, do more for the communities they serve. I stepped down from DataKind UK in 2020 because I believe that the true test of something successful is whether it can continue to thrive without you. And I’m happy to say it’s still going strong. I kind of hope the Impact Rankings continue to thrive at Times Higher Education now that I’ve moved on as well.

    Alex Usher: Yeah. Well, thank you for joining us today, Duncan.

    Duncan Ross: It’s been a pleasure.

    Alex Usher: And it just remains for me to thank our excellent producers, Sam Pufek and Tiffany MacLennan. And you, our viewers, listeners, and readers for joining us today. If you have any questions or comments about today’s episode, please don’t hesitate to get in touch with us at [email protected]. Worried about missing an episode of the World of Higher Education? There’s a solution for that. Go to our YouTube page and subscribe. Next week, our guest will be Jim Dickinson. He’s an associate editor at Wonkhe in the UK, and he’s also maybe the world expert on comparative student politics. And he joins us to talk about the events in Serbia, where the student movement is challenging the populist government of the day. Bye for now.

    *This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service.

  • Fun with Participation Rate Data

    Just a quick one today, mostly charts.

    Back in the fall, StatsCan released a mess of data from the Labour Force Survey looking at education participation rates—that is, the percentage of any given age cohort that is attending education—over the past 25 years. So, let’s go see what it says.

    Figure 1 shows total education participation rates, across all levels of education, from age 15 to 29, for selected years over the past quarter century. At the two ends of the graph, the numbers look pretty similar. At age 15, we’ve always had 95%+ of our population enrolled in school (almost exclusively secondary education), and from age 26 and above, we’ve always been in the low teens or high single digits. The falling-off in participation is fairly steady: for every age-year above 17, about 10% of the population exits education, up until the age of 26. The big increase in education enrolments that we’ve seen over the past couple of decades has really occurred in the 18-24 range, where participation rates (almost exclusively in universities, as we shall see) have increased enormously.

    Figure 1: Participation rates in Education (all institutions) by Age, Canada, select years 1999-00 to 2023-24
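
    Mechanically, each point on these curves is just a weighted share of an age cohort. A minimal sketch of the calculation from LFS-style microdata follows; the column names are hypothetical.

    ```python
    import pandas as pd

    # Hypothetical respondent-level rows: survey weight, age, and whether
    # the respondent attended school during the reference period.
    lfs = pd.DataFrame({
        "age": [18, 18, 18, 21, 21],
        "weight": [1200.0, 900.0, 1100.0, 1000.0, 1300.0],
        "in_school": [True, True, False, True, False],
    })

    # Weighted participation rate by single year of age:
    # enrolled weight divided by total cohort weight.
    enrolled = lfs.loc[lfs["in_school"]].groupby("age")["weight"].sum()
    cohort = lfs.groupby("age")["weight"].sum()
    print((enrolled / cohort).round(2))  # age 18 -> 0.66, age 21 -> 0.43
    ```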

    Figure 2 shows current participation rates by age and type of postsecondary institution. People sometimes have the impression that colleges cater to an “older” clientele, but in fact, at any given age under 30, Canadians are much more likely to be enrolled in universities than in colleges. Colleges have a very high base in the teens because of the way the CEGEP system works in Quebec (I’ll come back to regional diversity in a minute), and it is certainly true that there is a very wide gap in favour of universities among Canadians in their mid-20s. But while the participation rate gap narrows substantially at about age 25, it is never the case that the college participation rate surpasses the university one.

    Figure 2: Participation Rates by Age and Institution Type, Canada, 2023-24

    Figure 3 shows college participation rates by age over time. What you should take from this is that there has been a slight decline in college participation rates over time in the 19-23 age range, but beyond that not much has changed.

    Figure 3: College Participation Rates by Age, Selected Years, 1999-2000 to 2023-24

    Figure 4 uses the same lens as Figure 3, only for universities. And it’s about as different as it can be. In 1999, fewer than one in ten Canadians aged 18 was in university; now it is three in ten. In 1999, only one in four 21-year-olds was in university; now it is four in ten. These aren’t purely the effects of increased demand: the elimination of grade 13 in Ontario had a lot to do with the changes for 18-year-olds, and Alberta and British Columbia converting a number of their institutions from colleges to universities in the late 00s probably juices these numbers a bit, too. But on the whole, what we’ve seen is a significant increase in the rate at which young people are choosing to attend universities between the ages of 18 and 24. Beyond those ages, the growth is less pronounced: there was certainly growth in older student participation rates between 1999-00 and 2011-12, but since then none at all.

    Figure 4: University Participation Rates by Age, Selected Years, 1999-2000 to 2023-24

    So much for the national numbers: what’s going on at the provincial level? Well, because this is the Labour Force Survey, which unlike administrative data has sample size issues, we can’t get quite the same granularity of information. We can’t look at individual ages, but we can look at age ranges, in this case ages 20-24. In Figures 5 and 6 (I broke them up so they are a bit easier to read), I show each province’s university and college participation rates in 2000 vs. 2023.
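
    The same sketch from above extends naturally to the provincial cut: pool single years of age into a 20-24 band before grouping, which is a standard way to tame small provincial samples. Again, every name here (province, in_university, weight) is invented for illustration, not drawn from the real LFS files.

    import pandas as pd

    # Hypothetical microdata with a province column; all column names
    # are invented for illustration, not the real LFS schema.
    df = pd.DataFrame({
        "province":      ["ON", "ON", "QC", "QC", "BC", "BC", "NB", "NB"],
        "age":           [20,   23,   21,   24,   22,   20,   23,   21],
        "in_university": [1,    0,    1,    1,    0,    1,    0,    0],
        "weight":        [1.0,  1.1,  0.9,  1.2,  1.0,  0.8,  1.1,  1.0],
    })

    # Pool single years of age into a 20-24 band: provincial samples are
    # too small for stable single-year estimates, so estimate the band.
    band = df[df["age"].between(20, 24)].copy()
    band["weighted_uni"] = band["in_university"] * band["weight"]

    g = band.groupby("province")
    rates = g["weighted_uni"].sum() / g["weight"].sum()

    print((rates * 100).round(1))  # university participation rate, ages 20-24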

    Figure 5: University Participation Rates for 20-24 Year-olds, Four Largest Provinces, 2000-01 vs. 2023-24

    Figure 6: University Participation Rates for 20-24 Year-olds, Six Remaining Provinces, 2000-01 vs. 2023-24

    Some key facts emerge from these two graphs:

    • The highest participation rates in the country are in Ontario, Quebec, and British Columbia.
    • In all provinces, the participation rate in universities is higher than it is for colleges, ranging from 2.5x in Quebec to over 4x in Saskatchewan.
    • Over the past quarter century, overall postsecondary participation rates and university participation rates have gone up in all provinces; Alberta and British Columbia alone have seen a decline in college participation rates, due to the aforementioned decision to convert certain colleges to university status in the 00s.
    • Growth in participation rates since 2000 has been universal, but it has been more pronounced in the country’s four largest provinces, where the average gain has been nine percentage points, than in the six smaller provinces, where the gain has been just under five percentage points.
    • Over twenty-five years, British Columbia has gone from ninth to second in the country in terms of university participation rates, while Nova Scotia has gone from second to ninth.
    • New Brunswick has consistently been in last place for overall participation rates for the entire period.

    Just think: three minutes ago, you probably knew very little about participation rates in Canada by age and geography; now you know almost everything there is to know about participation rates in Canada by age and geography. Is this a great way to start your day or what?


  • DOGE temporarily blocked from accessing Education Department student aid data

    DOGE temporarily blocked from accessing Education Department student aid data


    UPDATE: Feb. 12, 2025: The U.S. Department of Education on Tuesday agreed to temporarily block staffers of the Department of Government Efficiency, or DOGE, from accessing student aid information and other data systems until at least Feb. 17. 

    On that date, a federal judge overseeing the case is expected to rule on a student group’s request for a temporary restraining order to block the agency from sharing sensitive data with DOGE. 

    Dive Brief: 

    •  A group representing University of California students filed a lawsuit Friday to block the Elon Musk-led Department of Government Efficiency from accessing federal financial aid data.  
    • The University of California Student Association cited reports that DOGE members gained access to federal student loan data, which includes information such as Social Security numbers, birth dates, account information and driver’s license numbers. 
    • The complaint accuses the U.S. Department of Education of violating federal privacy laws and regulations by granting DOGE staffers access to the data. “The scale of intrusion into individuals’ privacy is enormous and unprecedented,” the lawsuit says. 

    Dive Insight: 

    President Donald Trump created DOGE through executive order on the first day of his second term, tasking the team, led by Tesla co-founder and Trump adviser Musk, with rooting out what the new administration deems as government waste. 

    DOGE has since accessed the data of several government agencies, sparking concerns that its staffers are violating privacy laws and overstepping the executive branch’s power. With the new lawsuit, the University of California Student Association joins the growing chorus of groups that say DOGE is flouting federal statutes. 

    One of those groups — 19 state attorneys general — scored a victory over the weekend. On Saturday, a federal judge temporarily blocked DOGE from accessing the Treasury Department’s payments and data system, which disburses Social Security benefits, tax refunds and federal employee salaries. 

    The University of California Student Association has likewise asked the judge to temporarily block the Education Department from sharing sensitive data with DOGE staffers and to retrieve any information that has already been transferred to them. 

    The group argues that the Education Department is violating the Privacy Act of 1974, which says that government agencies may not disclose an individual’s data “to any person, or to another agency,” without their consent, except in limited circumstances. The Internal Revenue Code has similar protections for personal information. 

    “None of the targeted exceptions in these laws allows individuals associated with DOGE, or anyone else, to obtain or access students’ personal information, except for specific purposes — purposes not implicated here,” the lawsuit says. 

    The Washington Post reported on Feb. 3 that some DOGE team members had in fact gained access to “multiple sensitive internal systems, including federal financial aid data” as part of larger plans to carry out Trump’s goal to eventually eliminate the Education Department. 

    “ED did not publicly announce this new policy — what is known is based on media reporting — or attempt to justify it,” Friday’s lawsuit says. “Rather, ED secretly decided to allow individuals with no role in the federal student aid program to root around millions of students’ sensitive records.”

    In response to the Post’s Feb. 3 reporting, Musk on the same day posted on X that Trump “will succeed” in dismantling the agency. 

    Later that week, the Post reported that DOGE staffers were feeding sensitive Education Department data into artificial intelligence software to analyze the agency’s spending. 

    The moves have also attracted lawmakers’ attention. Virginia Rep. Bobby Scott, the top-ranking Democrat on the House’s education committee, asked the Government Accountability Office on Friday to probe the security of information technology systems at the Education Department and several other agencies. 

    An Education Department spokesperson said Monday that the agency does not comment on pending litigation. 
