Tag: Data

  • Princeton president misunderstands FIRE data — and campus free speech


    The first step to solving a problem is admitting you have one. In his new book Terms of Respect: How Colleges Get Free Speech Right, Princeton University President Christopher L. Eisgruber reports on FIRE’s data on free speech and First Amendment norms on campus while making no effort to understand it and misusing the data of others. In other words, he’s skipped that first step — and now Princeton is tumbling down the staircase. 

    Eisgruber’s book makes many questionable claims, from dismissing good-faith critiques to muddying examples of censorship. But for our purposes here, let’s cabin our criticism to the nine pages of Chapter 5 that he devotes to dismissing data, including FIRE’s.

    Our research

    FIRE’s research — like all research — is imperfect, and we welcome criticism. Research isn’t about proving you’re right. It’s about stress-testing ideas to see what holds up. Scrutiny is how the process works, and it’s how the work gets better. 

    Our largest and most ambitious annual research project is the College Free Speech Rankings, which combines three factors: written speech policies, a national survey of student views on campus free expression, and outcomes from campus speech controversies. Reasonable minds can differ on how to weigh these factors, which is why we make all our data available to anyone who requests it. If someone believes these factors should be weighed differently, or has different factors they would like to include, they are welcome to do so, and to use our data.
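
To make the point about weighting concrete, here is a minimal sketch in Python of how anyone could recompute a composite ranking from component scores under different weights. The school names, component scores, and weights below are invented for illustration; they are not FIRE's actual formula or data.

# Purely illustrative: invented component scores for two hypothetical schools.
# This is not FIRE's actual scoring methodology or data.
schools = {
    "School A": {"policy": 80, "survey": 75, "controversies": 30},
    "School B": {"policy": 55, "survey": 60, "controversies": 85},
}

def composite(scores, weights):
    """Weighted average of component scores; weights are assumed to sum to 1."""
    return sum(scores[factor] * weight for factor, weight in weights.items())

# Two defensible weighting schemes can produce different orderings.
for weights in ({"policy": 0.4, "survey": 0.4, "controversies": 0.2},
                {"policy": 0.2, "survey": 0.3, "controversies": 0.5}):
    ranked = sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)
    print(weights, "->", ranked)

Running the sketch shows the ordering flipping under a different, equally defensible set of weights, which is precisely why the underlying components and data matter more than any single headline number.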

    College Free Speech Rankings

    The College Free Speech Rankings is a comprehensive comparison of the student experience of free speech on their campuses.



    We’re also transparent about our methodology. This year, we preregistered our methodology before our survey data came back, in part to make clear that we do not — and cannot — reverse-engineer outcomes to put certain schools on top or bottom.

    Every year when we release the report, we get feedback. We take the thoughtful critiques seriously and have often used them to improve our work. Again, feedback is part of the process. But not all feedback comes from a place of good faith.

    Bias or projection?

    Eisgruber introduces FIRE in a manner intended to discredit us, but that probably ends up saying more about his biases than any of ours:

    An organization called FIRE (the Foundation for Individual Rights and Expression) has probably done as much as any other entity to create the impression that free speech is under continuous assault on college campuses. FIRE is a nonprofit advocacy organization that describes itself as defending free speech and free thought for all Americans; it was founded in 1999 with a focus on campus speech issues and now receives a substantial portion of its funding from foundations often associated with conservative political causes.

Eisgruber provides no footnote explaining or citing the conservative foundations to which he objects, when the “now” period started, or how “substantial” those alleged funds are. In reality, FIRE is funded by a very politically diverse cohort, and in the last fiscal year, 74% of our funding came from individual donors compared to 26% from foundation grants.

    Eisgruber’s implication is that FIRE is biased towards conservatives because we have conservative donors. (So does Princeton, and few would accuse it of being politically conservative.) He has to rely on these vague implications because if you look at the evidence, you have to contend with FIRE’s many cases on behalf of liberal students and professors. Or our lawsuit against the Trump administration. Or against the governments of Texas and Florida, in which we succeeded in blocking speech restrictions passed by deep-red legislatures.

    If he actually had any evidence that donors were influencing our research or casework, he’d have shown it. And with regard to our research, if the methodology and procedures are solid, it wouldn’t even matter if we were conservative, liberal, or from another planet entirely. If someone you hate tells you the sky is blue, the fact that you don’t like them is irrelevant to the truth or falsity of their statements. So he’s just tossing out the accusation and hoping that’s enough to bias his audience against us in the section that follows.

    Eisgruber then brings up FIRE’s supposed bias to praise another group’s research in a similar vein about free expression in the University of North Carolina system (more on that later):

    Unlike at FIRE and its kin, the researchers brought no discernible ideological or advocacy agenda to their work: The three original collaborators on the project included one self-identified conservative (McNeilly) and one self-identified liberal (Larson).

    If he had bothered to fact-check this claim by contacting FIRE, he would have found that our research department and those of us who work on the rankings share at least that level of political diversity (as does FIRE as a whole)! As for their indiscernible advocacy agenda, he may have missed their excellent recommendations for free expression:

    In sum, we recommend that efforts to improve the campus culture for free expression and constructive dialogue be holistic and attentive to the diverse contexts in which students encounter politics. Tangibly, we suggest that the UNC system encourage researchers from member institutions to review these data, conduct their own analyses, and develop campus-specific plans for creating and evaluating their own interventions.

    As agendas go, that’s a praiseworthy one to have, but it is an agenda.

    But while Eisgruber is quick to baselessly accuse FIRE of bias, in all his discussion of our findings, he never once pauses to consider his own biases. His defense of the status quo for free speech on campus is, not coincidentally, a defense of his own record as president. That’s a pretty big blind spot, and it shows. Even worse, his desire to justify himself leads to some exceptionally lazy takes on our research. 

    When ‘it’s not clear’ really means ‘I didn’t bother to look into it’

    Eisgruber takes issue with the methodology of FIRE’s Campus Deplatforming Database. He notes that before 2024, it was called the Disinvitation Database, and adds a footnote: “It is not clear what changed when the database expanded.” That’s not even close to correct, as we published a complete explanation about the changes on Feb. 8, 2024. It would be absurd for us to completely overhaul the methodology and purpose of our database without explaining those changes somewhere. That’s why we did explain it. He could have found this out with a simple Google search.

    One might be forgiven for missing this kind of mistake when writing a critique on X. It’s less excusable in the context of a book, for which he presumably had research assistance and certainly had an editor. (Or did he? Curiously, the same footnote also says that the database was “accessed November 17, 2025,” which, at the time of this writing, has not yet occurred.)

    As for the substance of his critique, Eisgruber calls the database a “hot mess,” claiming our inclusion criteria are too broad and that we “[conflate] disinvitation with deplatforming and censorship.” He never defines these terms, so it’s hard to know what distinction he thinks we missed. His example? He cites as “absurd” our decision to classify as a disinvitation attempt a situation in which NYU students tried to replace their commencement speaker, former Princeton President Shirley Tilghman, with someone more famous, followed by several similar efforts at Princeton.

    Reasonable minds can disagree on what such episodes mean, but by our stated methodology, they clearly count as deplatforming attempts: 

    A deplatforming attempt . . . is an attempt to prevent someone from expressing themselves in a public forum on campus. Deplatforming attempts include attempts to disinvite speakers from campus speeches or commencement ceremonies.

    That definition is public and consistent. It doesn’t depend on some subjective criterion for how “bad” we or Eisgruber think an incident was, or how justified students felt in opposing it. If Eisgruber wants to challenge our data, he could propose his own definition and see what share of our dataset fits it. Instead, he cherry-picks anecdotes he happens not to care about, and conveniently ignores more egregious examples.

    He also objects to the idea that disinvitations — even successful ones — can threaten free speech, arguing that FIRE “confuses the exercise of free speech with threats to free speech.” But that’s a false dichotomy. The exercise of free speech can absolutely threaten others’ ability to speak.

As FIRE has noted on many occasions, calls for disinvitation are themselves protected speech — and so, for that matter, are calls for violence in response to speech, so long as they fall short of the legal bar for incitement. 

    Eisgruber agrees with FIRE that shoutdowns are never acceptable and are incompatible with free speech. But it’s hard to reconcile that with his position that disinvitation attempts can never threaten free speech. They often involve appeals to university authorities to shut down an event or speech. In other words, they are attempts by one group of people to decide for their peers what speech their peers will be able to hear, similar to a heckler’s veto.

    Eisgruber also presents a heckler’s veto from 1970 that doesn’t appear in our database, as if to prove that campus illiberalism didn’t start with Gen Z. Believe me, we’re aware. We’ve written plenty about McCarthy-era censorship and the Red Scare. Plus, FIRE was founded back in 1999, long before today’s version of the culture wars. Illiberalism on campus isn’t new, and we certainly wouldn’t argue that it is new after 25 years of fighting it. It just takes different forms in different eras — and we track it wherever it appears. The reason Eisgruber’s example wasn’t included in our database is simply that we made the decision to limit the database to incidents that occurred since FIRE’s founding.

    REPORT: Faculty members more likely to self-censor today than during McCarthy era

Today, one in four faculty say they’re very or extremely likely to self-censor in academic publications, and over one in three do so during interviews or lectures — more than during the Second Red Scare and McCarthyism.



    He praises Princeton for not having given in to a heckler’s veto since then: “Hickel got shouted down not by Gen Z but by members of an older generation that now criticizes young people for failing to respect free speech. Princeton students allowed every speaker in the next half century to have their say.” Unfortunately, this may have jinxed Princeton, as, apparently after Eisgruber’s manuscript was finalized, two speaking events at Princeton were disrupted.

    Survey critiques suggest he didn’t read our survey

    Eisgruber next tries to argue that concerns about self-censorship are overblown. He starts reasonably enough, noting that survey data can be tricky: 

    Polling data is, however, notoriously sensitive to sampling biases and small differences in the formulation of questions. Data about concepts such as free speech requires careful interpretation that it rarely gets.

    We agree! But then he cites FIRE’s 2021 finding that over 80% of college students self-censor at least sometimes, and 21% do so often, only to dismiss it: “Should we worry about these numbers? Not without more evidence and better poll questions.”

    What’s wrong with the poll question? He never says. He just moves on to talk about other surveys. So let’s stay on this one. What does he think about self-censorship? Well, as he defines it, he actually thinks it’s good:

    Indeed, I am most concerned about the substantial fraction of people who say they never self-censor. Do they really say everything that pops into their heads? . . . Of course people self-censor! Politeness, tact, and civility require it. And as we become more aware of the sensibilities of the diverse people around us, we may realize that we need to self-censor more often or differently than we did before.

    Do students share his conception of self-censorship as politeness or conscientious refusal to offend? Here’s how we have asked that question for the past four years:

    This next series of questions asks you about self-censorship in different settings. For the purpose of these questions, self-censorship is defined as follows: Refraining from sharing certain views because you fear social (exclusion from social events), professional (losing job or promotion), legal (prosecution or fine), or violent (assault) consequences, whether in person or remotely (by phone or online), and whether the consequences come from state or non-state sources.

    Q: How often do you self-censor during conversations with other students on campus?

    Q: How often do you self-censor during conversations with your professors?

    Q: How often do you self-censor during classroom discussions?

    • Never
    • Rarely
    • Occasionally, once or twice a month
    • Fairly often, a couple times a week
    • Very often, nearly every day

    As you can see, this isn’t asking about garden-variety tact or politeness. To be fair to Eisgruber, we didn’t provide this definition when we asked the question in 2021 (though he should have sought the most recent data; that he did not is itself strange). Unfortunately for him, since adding this clarifying definition, the portion of students who self-censor at least rarely has increased to 91-93%, depending on the context, and those reporting that they often self-censor now stand at 24-28%.

    In other words, a quarter of university students in America regularly silence themselves out of fear of social, professional, legal, or violent consequences. As for his request for “more evidence,” the responses are dire year after year. Maybe Eisgruber still thinks that’s fine, but we don’t. 

    Support for violence and shoutdowns is worse than he admits

    Eisgruber also downplays how many students think it’s acceptable to use violence or shoutdowns to silence speakers, and tries to hand-wave away data in an explanation that utterly mangles First Amendment law:

    One explanation highlights ambiguities in the survey questions. For example, American free speech law agrees with students who say that it is “rarely” or “sometimes” acceptable to stop people from talking. Not all speech is protected. If, for example, speakers are about to shout “fire” falsely in a crowded theater, or if they are preparing to incite imminent violence, you may and should do what you can to (in the words of the poll question) “prevent them from talking.”

    We would be remiss to pass up an opportunity to once again address the immortal, zombie claim that you can’t shout “fire” in a crowded theater. Eisgruber did better than many others by including “falsely,” but it’s still incomplete and misleading (did a panic occur? Was it likely or intended? These questions matter) and has been for a very long time. It’s dispiriting to see it come from the president of an Ivy League university — one who has a law degree, no less. But also, the fact that you as a listener think someone might be about to engage in unprotected speech doesn’t mean you should dole out vigilante justice to prevent it. If you do, you’ll probably go to jail.


    But leaving that aside, what of his contention that the high levels of support are just an artifact of the “prevent them from talking” wording? Well, here’s the wording of our latest poll question on that subject:

    How acceptable would you say it is for students to engage in the following actions to protest a campus speaker?

    Q: Shouting down a speaker to prevent them from speaking on campus.

    Q: Blocking other students from attending a campus speech.

    Q: Using violence to stop a campus speech.

    • Always acceptable
    • Sometimes acceptable
    • Rarely acceptable
    • Never acceptable

    With this different wording, we find 71% at least “rarely” accept shoutdowns, 54% at least “rarely” support blocking, and 34% at least “rarely” support violence. Different wording, same story: growing student support for violence and shoutdowns shows campus free speech is in danger. 

It’s important to note that Eisgruber offers only quibbles with question wording and theories about how students may be interpreting the questions. He doesn’t offer competing data. While that might be understandable for the typical social media critic, if all this could be debunked by “better poll questions,” no one is in a better position to commission that research (at least on his or her campus) than the president of a university. Instead of offering unconvincing dismissals of existing data, he could have contributed to the body of knowledge with his “better” questions. We still encourage him to do so. Seriously. Please run a free speech survey at Princeton.

    As much as FIRE or Eisgruber may wish these poll numbers were different, we need to deal with the world as it is.

    Refuting FIRE data with . . . data that agree with FIRE’s data

    So what data does Eisgruber use to support his case that the situation on campus is rosier than FIRE’s data suggests? As mentioned earlier, he turns to a study of the UNC system called “Free Expression and Constructive Dialogue in the University of North Carolina.” We were darkly amused by this because FIRE Chief Research Advisor Sean Stevens, who heads up our College Free Speech Rankings survey, was approached by that study’s authors based on his work on surveys for FIRE and Heterodox Academy — and they consulted with Stevens about what questions to include in their survey. Here’s Eisgruber:

    I believe, however, that the analysis by Ryan, Engelhardt, Larson, and McNeilly accurately describes most colleges and universities. Certainly it chimes with my own experiences at Princeton. 

This could be in a textbook next to “confirmation bias.” The data that jibes with his experience, he sees as more trustworthy. Yet this survey does not refute FIRE’s findings; it is perfectly compatible with them. The rosy finding on which Eisgruber puts a lot of weight is that faculty do not push political agendas in class. That isn’t an area FIRE studies, so it’s not a refutation of our work. More importantly, it’s not asking the same question.

    Eisgruber goes on:

    There is another reason why the North Carolina study’s conclusions are plausible. They mesh with and reflect broader, well-documented trends in American political life. A mountain of evidence shows that over the past several decades, and especially in the twenty-first century, political identities have hardened.

    But FIRE’s data is also perfectly compatible with the idea of increasing polarization. It’s hard, therefore, even to find the disagreement to which he’s pointing when he says their data is good and our data is bad.

    The UNC survey, like ours, found “campuses do not consistently achieve an atmosphere that promotes free expression” and “students who identify as conservative face distinctive challenges.” This is fully compatible with our data. It’s not clear where Eisgruber finds meaningful disagreement, and to the extent he frames this data as hopeful, it seems to misinterpret the authors’ findings.

    Even if the data coming out of UNC schools were wildly different from our national-level data, it would be a mistake to take it as representative of the nation as a whole. The mistake, specifically, would be cherry-picking. Six of the seven UNC schools that we rank are in the top 20 of our College Free Speech Rankings. The most amusing part, from a FIRE perspective, is that this is not a coincidence. Those six each worked with FIRE’s policy reform team and achieved our highest “green light” rating for free speech, and have implemented programming to support free expression on campus. Indeed, since the early days of FIRE’s speech code ratings, FIRE has made a special effort to evaluate the speech codes of all of the UNC System schools, even the smaller ones, thanks to a partnership with the state’s James G. Martin Center for Academic Renewal (then called the Pope Center). So if UNC campuses are far more likely to have a “green light” than the rest of the nation, that’s in significant part because of FIRE’s ongoing work. Princeton, in comparison, receives FIRE’s lowest “red light” rating.

If anything, the UNC schools provide evidence that the way to improve free speech on campus is to address it head-on, rather than cast about for some explanation to justify the current state of affairs. Speaking of which:

    Don’t be like Eisgruber — real leaders listen

    In the process of writing this piece, we received word of a very different response to FIRE data from administrators at Wellesley College:

    “Both FIRE stats and our own research, in some ways, have been similar,” said [Wellesley Vice President of Communications and Public Affairs Tara] Murphy. “We are taking this seriously.”

    In November [2024], Wellesley commissioned Grand River Solutions to conduct a survey on civil discourse among students. Out of 2,281 students invited to participate, 668 responded to at least one of the three questions, yielding a 29% response rate. The data was similar to the FIRE report: 36.8% of respondents said they felt either “very reluctant or somewhat reluctant” to share their views on controversial topics in the classroom, and 30% felt similarly hesitant outside of class. 

That’s the kind of response we hope for. If campuses aren’t sure that FIRE has it right, they should be getting their own data so that they can address any campus free speech problems the data may reveal.

    We’re happy to report that in that sense FIRE’s rankings have been extremely successful. Many schools have reached out and worked with us to improve their policies and begin to implement programming to support free speech on campus. As dire as some of the stats can appear to be, FIRE has seen green shoots in the form of faculty and administrators who recognize the problem and want to do something about it.

    Our research deserves, and has, more thoughtful critics. Princeton’s community deserves a president who is more curious about what’s happening on his campus, and serious about improving the environment for free speech. Maybe it’s a coincidence that the academic experience that ultimately led Alan Charles Kors and Harvey Silverglate to found FIRE began when they met during their freshman year at … Princeton University. Or maybe it’s not. 

    If finding out ever becomes a priority for Eisgruber, we’d be happy to help.


  • The Student Satisfaction Inventory: Data to Capture the Student Experience


    Satisfaction data provides insights across the student experience.

    The Student Satisfaction Inventory (SSI) is the original instrument in the family of Satisfaction-Priorities Survey instruments.  With versions that are appropriate for four-year public/private institutions and two-year community colleges, the Student Satisfaction Inventory provides institutional insight and external national benchmarks to inform decision-making on more than 600 campuses across North America. 

With its comprehensive approach, the Student Satisfaction Inventory gathers feedback from current students across all class levels to identify not only how satisfied they are, but also what is most important to them. Highly innovative when it debuted in the mid-1990s, the approach has since become the standard for understanding institutional strengths (areas of high importance and high satisfaction) and institutional challenges (areas of high importance and low satisfaction).

    With these indicators, college leaders can celebrate what is working on their campus and target resources in areas that have the opportunity for improvement. By administering one survey, on an annual or every-other-year cycle, campuses can gather student feedback across the student experience, including instructional effectiveness, academic advising, registration, recruitment/financial aid, plus campus climate and support services, and track how satisfaction levels increase based on institutional efforts.

    Along with tracking internal benchmarks, the Student Satisfaction Inventory results provide comparisons with a national external norm group of like-type institutions to identify where students are significantly more or less satisfied than students nationally (the national results are published annually). In addition, the provided institutional reporting offers the ability to slice the data by all of the standard and customizable demographic items to provide a clearer approach for targeted initiatives. 

    Like the Adult Student Priorities Survey and the Priorities Survey for Online Learners (the other survey instruments in the Satisfaction-Priorities Surveys family), the data gathered by the Student Satisfaction Inventory can support multiple initiatives on campus, including to inform student success efforts, to provide the student voice for strategic planning, to document priorities for accreditation purposes and to highlight positive messaging for recruitment activities. Student satisfaction has been positively linked with higher individual student retention and higher institutional graduation rates, getting right to the heart of higher education student success. 

    Sandra Hiebert, director of institutional assessment and academic compliance at McPherson College (KS) shares, “We have leveraged what we found in the SSI data to spark adaptive challenge conversations and to facilitate action decisions to directly address student concerns. The process has engaged key components of campus and is helping the student voice to be considered. The data and our subsequent actions were especially helpful for our accreditation process.”

    See how you can strengthen student success with the Student Satisfaction Inventory

    Learn more about best practices for administering the online Student Satisfaction Inventory at your institution, which can be done any time during the academic year on your institution’s timeline.


  • Higher education data explains why digital ID is a good idea


    Just before the excitement of conference season, your local Facebook group lost its collective mind. And it shows no sign of calming down.

    Given everything else that is going on, you’d think that reinforcing the joins between key government data sources and giving more visibility to the subjects of public data would be the kind of nerdy thing that the likes of me write about.

    But no. Somebody used the secret code word. ID Cards.

    Who is she and what is she to you?

I’ve written before about the problems our government faces in reliably identifying people. Any entitlement- or permission-based system needs a clear and unambiguous way of assuring the state that a person is indeed who they claim to be, and has the attributes or documentation they claim to have.

    As a nation, we are astonishingly bad at this. Any moderately serious interaction with the state requires a parade of paperwork – your passport, driving license, birth certificate, bank statement, bank card, degree certificate, and two recent utility bills showing your name and address. Just witness the furore over voter ID – to be clear a pointless idea aimed at solving a problem that the UK has never faced – and the wild collection of things that you might be allowed to pull out of your voting day pocket that do not include a student ID.

We are not immune from this problem in higher education. I’ve been asking for years why you need to apply to a university via UCAS and apply for funding via the Student Loans Company through two different systems. It has never been clear to me why you then need to submit largely similar information to your university when you enrol.

    Sun sign

    Given that organs of the state have this amount of your personal information, it is then alarming that the only way it can work out what you earn after graduating is by either asking you directly (Graduate Outcomes) or by seeing if anyone with your name, domicile, and date of birth turns up in the Inland Revenue database.

    That latter one – administrative matching – is illustrative of the government’s current approach to identity. If it can find enough likely matches of personal information in multiple government databases it can decide (with a high degree of confidence) that records refer to the same person.

That’s how they make LEO data. They look for National Insurance Number (NINO), forename, surname, date of birth, postcode, and sex in both HESA student records and the Department for Work and Pensions’ Customer Information System (which itself links to the tax database). Keen Wonkhe readers will have spotted that NINO isn’t returned to HESA – to get this they use “fuzzy matching” with personal data from the Student Loans Company, which does. The surname thing is even wilder – they use a sound-based algorithm (SOUNDEX) to allow for flexibility on spellings.

    This kind of nonsense actually has a match rate of more than 90 per cent (though this is lower for ethnically Chinese graduates because sometimes forenames and surnames can switch depending on the cultural knowledge of whoever prepared the data).
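
For the curious, here is a minimal sketch in Python of what that sound-based matching looks like: a standard American Soundex implementation plus a crude record-comparison rule in the spirit of the linkage described above. The field names, the particular combination of fields, and the example records are assumptions for illustration, not the actual LEO matching specification.

def soundex(name):
    """American Soundex: first letter plus three digits, e.g. 'Robert' -> 'R163'."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    digits = [name[0].upper()]
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":                 # h and w do not separate duplicate codes
            continue
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        prev = code                    # vowels reset prev, so repeats after a vowel count
    return ("".join(digits) + "000")[:4]

def likely_same_person(a, b):
    """Crude administrative match: exact date of birth, sex and postcode,
    sound-alike surname, and matching forename initial. Field names are hypothetical."""
    return (a["dob"] == b["dob"]
            and a["sex"] == b["sex"]
            and a["postcode"].replace(" ", "").upper() == b["postcode"].replace(" ", "").upper()
            and soundex(a["surname"]) == soundex(b["surname"])
            and a["forename"][:1].lower() == b["forename"][:1].lower())

# Invented example records standing in for a HESA row and a DWP/CIS row.
hesa = {"forename": "Katherine", "surname": "Clark", "dob": "1999-04-12",
        "sex": "F", "postcode": "LS2 9JT"}
dwp = {"forename": "Kathryn", "surname": "Clarke", "dob": "1999-04-12",
       "sex": "F", "postcode": "ls29jt"}
print(likely_same_person(hesa, dwp))   # True: 'Clark' and 'Clarke' both map to C462

Real linkage pipelines add match weights, multiple passes, and clerical review, but the flavour is the same: deciding, with enough confidence, that two records in databases that were never designed to talk to each other describe the same person.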

    It’s impressive as a piece of data engineering. But given that all of this information was collected and stored by arms of the same government it is really quite poor.

    The tale of the student ID

    Another higher education example. If you were ever a student you had a student ID. It was printed on your student card, and may have turned up on various official documents too. Perhaps you imagined that every student in the UK had a student number, and that there was some kind of logic to the way that they were created, and that there was a canonical national list. You would be wrong.

Back in the day, this would have been a HESA ID, itself created from your UCAS number and your year of entry (or your year of entry, HESA provider ID, and an internal reference number if you applied directly). Until just a few years ago, the non-UCAS alternative was in use for all students – even including the use of the old HESA provider ID rather than the more commonly used UKPRN. Why the move away from UCAS? Well, UCAS had changed how they did identifiers and HESA’s systems couldn’t cope.

    You’re expecting me to say that things are far more sensible now, but no. They are not. HESA has finally fixed the UKPRN issue within a new student ID field (SID). This otherwise replicates the old system but with one important difference: it is not persistent.

Under the old approach, the idea was you had one student number for life – if you did an undergraduate degree at Liverpool, a master’s at Manchester Met, and a PhD at Royal Holloway, these were all mapped to the same ID. There was even a lookup service for new providers if the student didn’t have their old number. I probably don’t even need to tell you why this is a good idea if you are interested – in policy terms – in the paths that students take within their careers in higher education. These days we just administratively match if we need to. Or – as in LEO – assume that the last thing a student studied was the key to or cause of their glittering or otherwise career.

    The case of the LLE

    Now I hear what you might be thinking. These are pretty terrible examples, but they are just bodges – workarounds for bad decisions made in the distant past. But we have the chance to get it right in the next couple of years.

    The design of the Lifelong Learning Entitlement means that the government needs tight and reliable information about who does what bit of learning in order that funds can be appropriately allocated. So you’d think that there would be a rock-solid, portable, unique learner number underpinning everything.

There is not. Instead, we appear to be standardising on the Student Loans Company customer reference number. This is supposed to be portable for life, but it doesn’t appear in any other sector datasets (the “student support number” is in HESA, but that is somehow different – you get two identifiers from SLC, lucky you). SLC also holds your NINO (you need one to get funding!), and has the capacity to hold an additional number of an institution’s choice, but not (routinely) your HESA student ID or your UCAS identifier.

    There’s also space to add a Unique Learner Number (ULN) but at this stage I’m too depressed to go into what a missed opportunity that is.

    Why is standardising on a customer reference number not a good idea? Well, think of all the data SLC doesn’t hold but HESA does. Think about being able to refer easily back to a school career and forward into working life on various government data. Think about how it is HESA data and not SLC data that underpins LEO. Think about the palaver I have described above and ask yourself why you wouldn’t fix it when you had the opportunity.

    Learning to love Big Brother

I’ll be frank, I’m not crazy about how much the government knows about me – but honestly, compared to people like Google, Meta, or – yikes – X (formerly Twitter), it doesn’t hugely worry me.

    I’ve been a No2ID zealot in my past (any employee of those three companies could tell you that) but these days I am resigned to the fact that people need to know who I am, and I’d rather be more than 95 per cent confident that they could get it right.

    I’m no fan of filling in forms, but I am a fan of streamlined and intelligent administration.

    So why do we need ID cards? Simply because in proper countries we don’t need to go through stuff like this every time we want to know if a person that pays tax and a person that went to university are the same person. Because the current state of the art is a mess.


  • Higher ed groups blast Trump plan to expand applicant data collection



    More than three dozen higher education organizations, led by the American Council on Education, are urging the Trump administration to reconsider its plan to require colleges to submit years of new data on applicants and enrolled students, disaggregated by race and sex.

As proposed, the reporting requirements would begin on Dec. 3, giving colleges just 17 weeks to provide extensive new admissions data, ACE President Ted Mitchell wrote in an Oct. 7 public comment. Mitchell argued that isn’t enough time for most colleges to comply effectively and that the rush would lead to significant errors.

    ACE’s comment came as part of a chorus of higher education groups and colleges panning the proposal. The plan’s public comment period ended Tuesday, drawing over 3,000 responses.

    A survey conducted by ACE and the Association for Institutional Research found that 91% of polled college leaders expressed concern about the proposed timeline, and 84% said they didn’t have the resources and staff necessary to collect and process the data.

    Delaying new reporting requirements would leave time for necessary trainings and support services to be created, Mitchell said. The Education Department — which has cut about half its staff under President Donald Trump — should also ensure that its help desk is fully crewed to assist colleges during implementation, Mitchell said.

    Unreliable and misleading data?

    In August, Trump issued a memo requiring colleges to annually report significantly more admissions data to the National Center for Education Statistics, which oversees the Integrated Postsecondary Education Data System.

    The Education Department’s resulting proposal would require colleges to submit six years’ worth of undergraduate and graduate data in the first year of the IPEDS reporting cycle, including information on standardized test scores, parental education level and GPA. 

In a Federal Register notice, the Education Department said this information would increase transparency and “help to expose unlawful practices” at colleges. The initial multi-year data requirement would “establish a baseline of admissions practices” before the U.S. Supreme Court’s 2023 ruling against race-conscious admissions, it said.

    But the department’s proposal and comments have caused unease among colleges, higher ed systems and advocacy groups in the sector.

    “While we support better data collection that will help students and families make informed decisions regarding postsecondary education, we fear that the new survey component will instead result in unreliable and misleading data that is intended to be used against institutions of higher education,” Mitchell said in the coalition’s public comment.

    The wording of the data collection survey — or lack thereof — also raised some red flags.

    Mitchell criticized the Trump administration for introducing the plan without including the text of the proposed questions. Without having the actual survey to examine, “determining whether the Department is using ‘effective and efficient’ statistical survey methodology seems unachievable,” he said.

    The Education Department said in the Federal Register notice that the additional reporting requirements will likely apply to four-year colleges with selective admissions processes, contending their admissions and scholarships “have an elevated risk of noncompliance with the civil rights laws.”

    During the public comment period, the department specifically sought feedback on which types of colleges should be required to submit the new data.

    The strain on institutions ‘cannot be overstated’

    Several religious colleges voiced concerns about the feasibility of completing the Education Department’s proposed request without additional manpower.

    “Meeting the new requirements would necessitate developing new data extracts, coding structures, validation routines, and quality assurance checks — all while maintaining existing reporting obligations,” Ryon Kaopuiki, vice president for enrollment management at the University of Indianapolis, said in a submitted comment. 


  • Outcomes data for subcontracted provision


In 2022–23 there was a group of around 260 full-time first degree students, registered to a well-known provider and taught via a subcontractual arrangement, with a continuation rate of just 9.8 per cent: of those 260 students, only 25 or so actually continued on to their second year.

    Whatever you think about franchising opening up higher education to new groups, or allowing established universities the flexibility to react to fast-changing demand or skills needs, none of that actually happens if more than 90 per cent of the registered population doesn’t continue with their course.

    It’s because of issues like this that we (and others) have been badgering the Office for Students to produce outcomes data for students taught via subcontractual arrangements (franchises and partnerships) at a level of granularity that shows each individual subcontractual partner.

    And finally, after a small pilot last year, we have the data.

    Regulating subcontractual relationships

    If anything it feels a little late – there are now two overlapping proposals on the table to regulate this end of the higher education marketplace:

• A Department for Education consultation suggests that every delivery partner that has more than 300 higher education students would need to register with the Office for Students (unless it is regulated elsewhere)
    • And an Office for Students consultation suggests that every registering partner with more than 100 higher education students taught via subcontractual arrangements will be subject to a new condition of registration (E8)

    Both sets of plans address, in their own way, the current reality that the only direct regulatory control available over students studying via these arrangements is via the quality assurance systems within the registering (lead) partners. This is an arrangement left over from previous quality regimes, where the nation spent time and money to assure itself that all providers had robust quality assurance systems that were being routinely followed.

    In an age of dashboard-driven regulation, the fact that we have not been able to easily disaggregate the outcomes of subcontractual students has meant that it has not been possible to regulate this corner of the sector – we’ve seen rapid growth of this kind of provision under the Office for Students’ watch and oversight (to be frank) has just not been up to the job.

    Data considerations

    Incredibly, it wasn’t even the case that the regulator had this data but chose not to publish it. OfS has genuinely had to design this data collection from scratch in order to get reliable information – many institutions expressed concern about the quality of data they might be getting from their academic partners (which should have been a red flag, really).

    So what we get is basically an extension of the B3 dashboards where students in the existing “partnership” population are assigned to one of an astonishing 681 partner providers alongside their lead provider. We’d assume that each of these specific populations has data across the three B3 (continuation, completion, progression) indicators – in practice many of these are suppressed for the usual OfS reasons of low student numbers and (in the case of progression) low Graduate Outcomes response rates.

    Where we do get indicator values we also see benchmarks and the usual numeric thresholds – the former indicating what OfS might expect to see given the student population, the latter being the line beneath which the regulator might feel inclined to get stuck into some regulating.

One thing we can’t really do with the data – although we wanted to – is treat each subcontractual provider as if it were a main provider and derive an overall indicator for it. Because many subcontractual providers have relationships (and students) with numerous lead providers, we start to get to some reasonably sized institutions. Two – Global Banking School and the Elizabeth School London – appear to have more than 5,000 higher education students: GBS is around the same size as the University of Bradford, the Elizabeth School is comparable to Liverpool Hope University.
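
As a sketch of the pooling that would be involved, the snippet below (Python) derives an overall continuation rate for each delivery partner by summing numerators and denominators across its lead-provider relationships. The partner names, counts, and threshold are invented for illustration, and in the published data many of the underlying cells would be suppressed.

from collections import defaultdict

# Hypothetical rows: (delivery partner, lead provider, continuers, population).
# These numbers are invented; real cells this small would often be suppressed.
rows = [
    ("Partner College A", "University X", 190, 220),
    ("Partner College A", "University Y", 70, 90),
    ("Partner College B", "University X", 25, 255),
]

pooled = defaultdict(lambda: [0, 0])
for partner, _lead, continuers, population in rows:
    pooled[partner][0] += continuers
    pooled[partner][1] += population

THRESHOLD = 75.0  # illustrative only, not an actual OfS numerical threshold

for partner, (continuers, population) in sorted(pooled.items()):
    rate = 100 * continuers / population
    flag = "  <-- below the illustrative threshold" if rate < THRESHOLD else ""
    print(f"{partner}: {rate:.1f}% continuation across {population} students{flag}")

Pooled like this, some delivery partners start to look like mid-sized providers in their own right, which is exactly why a partner-level view of outcomes matters.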

    Size and shape

    How big these providers are is a good place to start. We don’t actually get formal student numbers for these places – but we can derive a reasonable approximation from the denominator (population size) for one of the three indicators available. I tend to use continuation as it gives me the most recent (2022–23) year of data.


    The charts showing numbers of students are based on the denominators (populations) for one of the three indicators – by default I use continuation as it is more likely to reflect recent (2022–23) numbers. Because both the OfS and DfE consultations talk about all HE students there are no filters for mode or level.

For each chart you can select a year of interest (I’ve chosen the most recent year by default) or the overall indicator (which, like on the main dashboards, is synthetic over four years). If you change the indicator you may have to change the year. I’ve not included any indications of error – these are small numbers and the possible error is wide, so any responsible regulator would have to do more investigating before stepping in to regulate.

    Recall that the DfE proposal is that institutions with more than 300 higher education students would have to register with OfS if they are not regulated in another way (as a school, FE college, or local authority, for instance). I make that 26 with more than 300 students, a small number of which appear to be regulated as an FE college.

    You can also see which lead providers are involved with each delivery partner – there are several that have relationships with multiple universities. It is instructive to compare outcomes data within a delivery partner – clearly differences in quality assurance and course design do have an impact, suggesting that the “naive university hoodwinked by low quality franchise partner” narrative, if it has any truth to it at all, is not universally true.


    The charts showing the actual outcomes are filtered by mode and level as you would expect. Note that not all levels are available for each mode of study.

    This chart brings in filters for level and mode – there are different indicators, benchmarks, and thresholds for each combination of these factors. Again, there is data suppression (low numbers and responses) going on, so you won’t see every single aspect of every single relationship in detail.

    That said, what we do see is a very mixed bag. Quite a lot of provision sits below the threshold line, though there are also some examples of very good outcomes – often at smaller, specialist, creative arts colleges.

    Registration

    I’ve flipped those two charts to allow us to look at the exposure of registered universities to this part of the market. The overall sizes in recent years at some providers won’t be of any surprise to those who have been following this story – a handful of universities have grown substantially as a result of a strategic decision to engage in multiple academic partnerships.


Canterbury Christ Church University, Bath Spa University, Buckinghamshire New University, and Leeds Trinity University have always been the big four in this market. But of the 84 registered providers engaged in partnerships, I count 44 that would have met the 100-student threshold for the proposed new condition of registration (E8) had it applied in 2022–23.

Looking at the outcomes measures suggests that performance across multiple partners does not vary all that widely, although there will always be teaching provider, subject, and population variation. It is striking that places with a lot of different partners tend to get reasonable results – lower indicator values tend to be found at places running just one or two relationships, so it does feel like some work on improving external quality assurance and validation would be of some help.


    To be clear, this is data from a few years ago (the most recent available data is from 2022–23 for continuation, 2019–20 for completion, and 2022–23 for progression). It is very likely that providers will have identified and addressed issues (or ended relationships) using internal data long before either we or the Office for Students got a glimpse of what was going on.

    A starting point

    There is clearly a lot more that can be done with what we have – and I can promise this is a dataset that Wonkhe is keen to return to. It gets us closer to understanding where problems may lie – the next phase would be to identify patterns and commonalities to help us get closer to the interventions that will help.

    Subcontractual arrangements have a long and proud history in UK higher education – just about every English provider started off in a subcontractual arrangement with the University of London, and it remains the most common way to enter the sector. A glance across the data makes it clear that there are real problems in some areas – but it is something other than the fact of a subcontractual arrangement that is causing them.

    Do you like higher education data as much as I do? Of course you do! So you are absolutely going to want to grab a ticket for The Festival of Higher Education on 11-12 November – it’s Team Wonkhe’s flagship event and data discussion is actively encouraged. 


  • Supporting Transfer Student Success Through Data


    Transfer students often experience a range of challenges transitioning from a community college to a four-year institution, including credit loss and feeling like they don’t belong on campus.

    At the University of California, Santa Barbara, 30 percent of incoming students are transfers. More than 90 percent of those transfers come from California community colleges and aspire to complete their degree in two years.

    While many have achieved that goal, they often lacked time to explore campus offerings or felt pressured to complete their degree on an expedited timeline, according to institutional data.

    “Students feel pressure to complete in two years for financial reasons and because that is the expectation they receive regarding four-year graduation,” said Linda Adler-Kassner, associate vice chancellor of teaching and learning. Transfer students said they don’t want to “give up” part of their two years on campus to study away, she said.

    Institutional data also revealed that their academic exploration opportunities were limited, with fewer transfers participating in research or student groups, which are identified as high-impact practices.

    As a result, the university created a new initiative to improve transfer student awareness of on-campus opportunities.

    Getting data: UCSB’s institutional research planning and assessment division conducts an annual new student survey, which collects information on students’ demographic details, academic progress and outside participation or responsibilities. The fall 2024 survey revealed that 26 percent of transfers work for pay more than 20 hours per week; an additional 40 percent work between 10 and 20 hours per week. Forty-four percent of respondents indicated they do not participate in clubs or student groups.

    In 2024, the Office of Teaching and Learning conducted a transfer student climate study to “identify specific areas where the transfer student experience could be more effectively supported,” Adler-Kassner said. The OTL at UCSB houses six units focused on advancing equity and effectively supporting learners.

    The study found that while transfers felt welcomed at UCSB, few were engaging in high-impact practices and many had little space in their schedules for academic exploration, “which leads them to feel stress as they work on a quick graduation timeline,” Adler-Kassner said.

    Put into practice: Based on the results, OTL launched various initiatives to make campus stakeholders aware of transfer student needs and create effective interventions to support their success.

    Among the first was the Transfer Connection Project, which surveys incoming transfer students to identify their interests. OTL team members use that data to match students’ interests with campus resources and generate a personalized letter that outlines where the student can get plugged in on campus. In fall 2025, 558 students received a personal resource guide.

    The data also showed that a majority—more than 60 percent—of transfers sought to enroll in four major programs: communications, economics, psychological and brain sciences, and statistics and data science.

    In turn, OTL leaders developed training support for faculty and teaching assistants working in these majors to implement transfer-focused pedagogies. Staff also facilitate meet-and-greet events for transfers to meet department faculty.

    This work builds on the First Generation and Transfer Scholars Welcome, which UCSB has hosted since 2017. The welcome event includes workshops, a research opportunity fair and facilitated networking to get students engaged early.

    The approach is unique because it is broken into various modules that, when combined, create a holistic approach to student support, Adler-Kassner said.

    Gauging impact: Early data shows the interventions have improved student success.

    Since beginning this work, UCSB transfer retention has grown from 87 percent in 2020 to 94 percent in 2023. Similarly, graduation rates increased 10 percentage points from 2020 to 2024. Adler-Kassner noted that while this data may be correlated with the interventions, it does not necessarily demonstrate causation.

    In addition, the Transfer Student Center reaches about 40 percent of the transfer student population each year, and institutional data shows that those who engage with the center have a four-percentage-point higher retention rate and two-point higher graduation rate than those who don’t.

    Do you have an intervention that might help others promote student success? Tell us about it.

    This article has been updated to correct the share of incoming students that are transfers at UCSB.


  • Student suicides: why stable data still demand urgent reform 


    Author:
    Emma Roberts


    This HEPI guest blog was kindly authored by Emma Roberts, Head of Law at the University of Salford. 

    New figures from the Office for National Statistics (ONS) show that student suicide rates in England and Wales for the period 2016 to 2023 remain stable – but stability is no cause for complacency. The age-adjusted suicide rate among higher education students stands at 6.9 deaths per 100,000, compared with 10.2 per 100,000 for the general population of the same age group. Over the seven years of data collection, there were 1,163 student deaths by suicide – that is around 160 lives lost every year. 

    The rate being lower than the wider population is encouraging and may reflect the investment the sector has made in recent years. Universities have developed more visible wellbeing services, invested in staff training and created stronger cultures of awareness around mental health. The relative stability in the data can be seen as evidence that these interventions matter. But stability is not a resolution. Each student suicide is a preventable tragedy. The data should therefore be read not as reassurance, but as a call to sustain momentum and prepare for the challenges that lie ahead. 

    What the ONS data tells us 

    The figures highlight some familiar patterns. Male students remain at significantly higher risk than female students, accounting for nearly two-thirds of all suicides. Undergraduate students are at greater risk than postgraduate students, while students living at home have the lowest suicide rate. The data also shows that rates among White students are higher than for Black or Asian students, though the sample sizes are small, so these figures may be less reliable. 

    In terms of trend, the highest rate was recorded in the 2019 academic year (8.8 per 100,000). Since then, the rate has fallen back but remains stubbornly consistent, with 155 deaths recorded in the most recent year. The ONS notes that these figures are subject to revision due to coroner delays, meaning even the latest year may be under-reported. 

    The key point is that the problem is not worsening, but it is also not going away. 

    A changing student demographic 

    This year’s recruitment trends have introduced a new variable. Several high-tariff providers (universities with the highest entry requirements) have reduced entry requirements in order to secure numbers. This can open up opportunities for students who might otherwise not have had access to selective institutions. But it does raise important questions about preparedness. 

    Students admitted through lower tariffs may bring with them different kinds of needs and pressures: greater financial precarity, additional academic transition challenges, or less familiarity with the social and cultural capital that selective universities sometimes assume. These are all recognised risk factors for stress, isolation and, in some cases, mental ill-health. Universities with little prior experience of supporting this demographic may find their existing systems under strain. 

    Building on progress, not standing still 

    Much good work is already being done. Many universities have strengthened their partnerships with local National Health Service (NHS) trusts, introduced proactive wellbeing campaigns and embedded support more visibly in the student journey. We should recognise and celebrate this progress. 

    At the same time, the ONS data is a reminder that now is not the moment to stand still. Stability in the numbers reflects the effort made – but it should also prompt us to ask whether our systems are sufficiently flexible and resilient to meet new pressures. The answer, for some institutions, may well be yes. For others, particularly those adapting to new student demographics, there is a real risk of being caught unprepared. 

    What needs to happen next 

    There are several constructive steps the sector can take: 

    • Stress-test provision:  
      Assess whether wellbeing and safeguarding structures are designed to support the needs of the current, not historic, intake. 
    • Broaden staff capacity:  
      Ensure that all staff, not just specialists, have the awareness and training to spot early warning signs so that distress does not go unnoticed. 
    • Strengthen partnerships:  
      Align more closely with local NHS and community services to prevent students falling between two in-demand systems. 
    • Share practice sector-wide:  
      Collectively learn across the sector. Good practice must be disseminated, not siloed. 

    These are not dramatic or expensive interventions. They are achievable and pragmatic steps that can reduce risk while broader debates about legal and regulatory reform continue.

    Conclusion 

    The ONS data shows that student suicide is not escalating. But the rate remains concerningly consistent at a level that represents an unacceptable loss of life each year. The progress universities have made should be acknowledged, but the danger of complacency is real. As recruitment patterns shift and new student demographics emerge, the sector must ensure that safeguarding and wellbeing systems are ready to adapt. 

    Every statistic represents a life lost. Stability must not become complacency – it should be a call to action, a chance to consolidate progress, anticipate new challenges and keep the prevention of every avoidable death at the heart of institutional priorities. 

    Source link

  • UNC Merges Information and Data Science Schools, Names New AI Vice Provost

    UNC Merges Information and Data Science Schools, Names New AI Vice Provost

    [Image: Manning Hall at the University of North Carolina at Chapel Hill. Credit: UNC]

    The University of North Carolina at Chapel Hill announced last week that it will merge the School of Information and Library Science and the School of Data Science and Society into a single, yet-to-be-named institution focused on applied technology, information science and artificial intelligence.

    The merger, announced in a joint letter from Chancellor Lee H. Roberts and Interim Executive Vice Chancellor and Provost James W. Dean Jr., represents what administrators called “a bold step forward” in positioning Carolina as a national leader in data and AI education.

    Dr. Stanley Ahalt, current dean of the School of Data Science and Society, will serve as inaugural dean of the new school. Dr. Jeffrey Bardzell, dean of the School of Information and Library Science, will continue leading SILS through the transition while also assuming a newly created secondary appointment as Chief Artificial Intelligence Officer and Vice Provost for AI.

    “Information technologies, especially generative AI, are having a transformational impact,” the letter stated. “This new school is a bold step forward in our commitment to preparing students for a world increasingly shaped by data, information and artificial intelligence.”

    The AI vice provost position, which will become full-time once the new school is operational, will coordinate the university’s response to artificial intelligence across all campus units.

    “Dean Bardzell has been a key voice informing our thinking about AI campuswide,” Roberts and Dean wrote. “We are grateful to have his experience in the classroom, administration and research guiding our efforts.”

    The announcement comes as universities nationwide grapple with integrating AI into curriculum and operations. UNC joins a growing number of institutions restructuring academic units to address what administrators describe as rapid technological change.

    While the decision to merge has been finalized, administrators said that implementation plans remain under development. The university will establish a task force, advisory committee and multiple working groups to determine operational details.

    “Faculty, staff and students will be engaged throughout,” the announcement stated. Both schools will maintain current academic programs during the transition, with administrators expressing hope the merger will support enrollment growth and expanded impact.

    SILS, established in 1931, has approximately 600 students across bachelor’s, master’s and doctoral programs, with strengths in information ethics, library science and human-centered information design.

    SDSS, founded in 2019, has grown to roughly 400 students and focuses on computational methods, statistical analysis and data science applications across disciplines.

    “Both SILS and SDSS bring distinct strengths and areas of excellence to Carolina — technical expertise, humanistic inquiry and a deep understanding of the societal implications of emerging technologies,” administrators wrote.

    The letter noted that the merger is “driven by long-term possibilities” rather than budget constraints, with a focus on growth and expanding both schools’ “powerhouse academic programs.”

    University officials did not provide a timeline for completing the merger or naming the new school. They also did not specify budget details or projected enrollment targets.

    The announcement marks the latest in a series of administrative restructuring efforts at UNC-Chapel Hill, which has seen several organizational changes in recent years as it responds to shifting academic priorities and funding models.

    Source link

  • K-12 districts are fighting ransomware, but IT teams pay the price

    K-12 districts are fighting ransomware, but IT teams pay the price


    The education sector is making measurable progress in defending against ransomware, with fewer ransom payments, dramatically reduced costs, and faster recovery rates, according to the fifth annual State of Ransomware in Education report from Sophos.

    Still, these gains are accompanied by mounting pressures on IT teams, who report widespread stress, burnout, and career disruptions following attacks: nearly 40 percent of the 441 IT and cybersecurity leaders surveyed reported dealing with anxiety.

    Over the past five years, ransomware has emerged as one of the most pressing threats to education, with attacks becoming a daily occurrence. Primary and secondary institutions are seen by cybercriminals as "soft targets" because they are often underfunded and understaffed, and they hold highly sensitive data. The consequences are severe: disrupted learning, strained budgets, and growing fears over student and staff privacy. Without stronger defenses, schools risk not only losing vital resources but also the trust of the communities they serve.

    Indicators of success against ransomware

    The new study demonstrates that the education sector is getting better at responding to ransomware, forcing cybercriminals to evolve their approach. Trending data from the study reveals an increase in attacks where adversaries attempt to extort money without encrypting data. Unfortunately, paying the ransom remains part of the solution for about half of all victims. However, payment values are dropping significantly, and among those whose data was encrypted in an attack, 97 percent were able to recover it in some way. The study found several key indicators of success against ransomware in education:

    • Stopping more attacks: When it comes to blocking attacks before files can be encrypted, both K-12 and higher education institutions reported their highest success rate in four years (67 percent and 38 percent of attacks, respectively).
    • Following the money: In the last year, ransom demands fell 73 percent (an average drop of $2.83M), while average payments dropped from $6M to $800K in lower education and from $4M to $463K in higher education (the sketch after this list works out the percentage falls those payment figures imply).
    • Plummeting cost of recovery: Outside of ransom payments, average recovery costs dropped 77 percent in higher education and 39 percent in K-12 education. Despite this success, K-12 education reported the highest recovery bill across all industries surveyed.
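
    The dollar figures above come from the Sophos report summary; the percentage falls in payments are not quoted directly, so here is a quick, back-of-envelope Python sketch that derives them. Treat the derived percentages as arithmetic on the reported averages, not as figures from the report itself.

    ```python
    # Derive the percentage falls implied by the average ransom payment figures
    # quoted in the Sophos State of Ransomware in Education summary above.

    def percent_fall(before: float, after: float) -> float:
        """Percentage fall from `before` to `after`."""
        return (before - after) / before * 100

    average_payments = {
        "lower education (K-12)": (6_000_000, 800_000),  # prior year vs. latest year
        "higher education": (4_000_000, 463_000),
    }

    for sector, (before, after) in average_payments.items():
        print(f"{sector}: average payment fell about {percent_fall(before, after):.0f}%")

    # lower education (K-12): average payment fell about 87%
    # higher education: average payment fell about 88%
    ```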

    Gaps still need to be addressed

    While the education sector has made progress in limiting the impact of ransomware, serious gaps remain. In the Sophos study, 64 percent of victims reported missing or ineffective protection solutions; 66 percent cited a lack of people (either expertise or capacity) to stop attacks; and 67 percent admitted to having security gaps. These risks highlight the critical need for schools to focus on prevention, as cybercriminals develop new techniques, including AI-powered attacks.

    Highlights from the study that shed light on the gaps that still need to be addressed include:

    • AI-powered threats: K-12 education institutions reported that 22 percent of ransomware attacks had origins in phishing. With AI enabling more convincing emails, voice scams, and even deepfakes, schools risk becoming test grounds for emerging tactics.
    • High-value data: Higher education institutions, custodians of AI research and large language model datasets, remain a prime target, with exploited vulnerabilities (35 percent) and security gaps the institution was not aware of (45 percent) cited as the leading weaknesses used by adversaries.
    • Human toll: Every institution with encrypted data reported impacts on IT staff. Over one in four staff members took leave after an attack, nearly 40 percent reported heightened stress, and more than one-third felt guilt that they could not prevent the breach.

    “Ransomware attacks in education don’t just disrupt classrooms, they disrupt communities of students, families, and educators,” said Alexandra Rose, director of CTU Threat Research at Sophos. “While it’s encouraging to see schools strengthening their ability to respond, the real priority must be preventing these attacks in the first place. That requires strong planning and close collaboration with trusted partners, especially as adversaries adopt new tactics, including AI-driven threats.”

    Holding on to the gains

    Based on its work protecting thousands of educational institutions, Sophos experts recommend several steps to maintain momentum and prepare for evolving threats:

    • Focus on prevention: The dramatic success of lower education in stopping ransomware attacks before encryption offers a blueprint for broader public sector organizations. Organizations need to couple detection and response efforts with measures that prevent attacks before systems are compromised.
    • Secure funding: Explore new avenues such as the U.S. Federal Communications Commission’s E-Rate subsidies to strengthen networks and firewalls, and the UK’s National Cyber Security Centre initiatives, including its free cyber defense service for schools, to boost overall protection. These resources help schools both prevent and withstand attacks.
    • Unify strategies: Educational institutions should adopt coordinated approaches across sprawling IT estates to close visibility gaps and reduce risks before adversaries can exploit them.
    • Relieve staff burden: Ransomware takes a heavy toll on IT teams. Schools can reduce pressure and extend their capabilities by partnering with trusted providers for managed detection and response (MDR) and other around-the-clock expertise.
    • Strengthen response: Even with stronger prevention, schools must be prepared to respond when incidents occur. They can recover more quickly by building robust incident response plans, running simulations to prepare for real-world scenarios, and enhancing readiness with 24/7/365 services like MDR.

    Data for the State of Ransomware in Education 2025 report comes from a vendor-agnostic survey of 441 IT and cybersecurity leaders – 243 from K-12 education and 198 from higher education – at institutions hit by ransomware in the past year. The organizations surveyed ranged from 100 to 5,000 employees and spanned 17 countries. The survey was conducted between January and March 2025, and respondents were asked about their experience of ransomware over the previous 12 months.

    This press release originally appeared online.


    Source link

  • OfS Access and Participation data dashboards, 2025 release

    OfS Access and Participation data dashboards, 2025 release

    The sector-level dashboards that cover student characteristics have a provider-level parallel – the access and participation dashboards do not have a regulatory role, but are provided as evidence to help institutions develop access and participation plans.

    Much A&P activity is pre-determined – the current system pretty much insists that universities work with schools locally and address stuff highlighted in the national Equality of Opportunity Risk Register (EORR). It’s a cheeky John Blake way of embedding a national agenda into what are meant to be provider-level plans (which, technically, unlock the ability to charge fees up to the higher level), but it could also be argued that provider-specific work (particularly on participation measures rather than access) has been underexamined.

    The A&P dashboards are a way to focus attention on what may end up being institutionally bound problems – the kinds of things that providers can fix, and quickly, rather than the socio-economic learning revolution end of things that requires a radicalised cadre of hardened activists to lead and inspire the proletariat, or something.

    We certainly don’t get any detailed mappings between the numeric targets declared in individual plans and the data – although my colleague Jim did have a go at that a while ago. Instead this is just the raw information for you to examine, hopefully in an easier-to-use and speedier fashion than the official version (which requires a user guide, no less).

    Fun with indicators

    There are four dashboards here, covering most of what OfS presents in the mega-board. Most of what I’ve done examines four-year aggregations rather than individual years (though there is a timeseries at provider level); I’ve just opted for the 95 per cent confidence interval to show the significance of indicator values, and there are a few other minor pieces that I’ve not bothered with or have set a sensible default on.
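
    If you want a feel for what those intervals are doing, here’s a rough Python sketch of a 95 per cent confidence interval on a proportion using the Wilson score method. I’m not claiming this is exactly how OfS calculates its published intervals – treat it as a generic illustration, and note that the cohort numbers are invented rather than taken from the data.

    ```python
    from math import sqrt

    def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """Wilson score interval for a proportion; z = 1.96 gives roughly 95 per cent coverage."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half_width = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return centre - half_width, centre + half_width

    # Invented example: 840 of a 960-student entry cohort continue into a second year.
    low, high = wilson_interval(840, 960)
    print(f"indicator = {840 / 960:.1%}, 95% interval = ({low:.1%}, {high:.1%})")
    ```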

    I know that nobody reads this for data dashboard design tips, but for me a series of simpler dashboards is far more useful to the average reader than a single behemoth that can do anything – and the way HESA presents (in the main) very simple tables or plain charts to illustrate variations across the sector represents, to me, a gold standard for provider-level data. OfS is a provider of official statistics, and as such is well aware that section V3.1 of the code of practice requires that:

    Statistics, data and explanatory material should be relevant and presented in a clear, unambiguous way that supports and promotes use by all types of users

    And I don’t think we are quite there yet with what we have – the simple release of a series of flat tables might get us closer.

    If you like it you should have put a confidence interval on it

    To start with, here is a tool for constructing ranked displays of providers against a single metric – here defined as a life cycle stage (access, continuation, completion, attainment, progression) expressed as the percentage of a given subgroup achieving a successful outcome.

    Choose your split indicator type and the actual indicator using the boxes at the top right – select the life cycle stage in the box in the middle, and set mode and level (note that certain splits and stages may only be available for certain modes and levels). You can highlight a provider of interest using the box on the bottom right, and also find an overall sector average by searching on "*". The colours show provider group, and the arrows are upper and lower confidence bounds at the standard 95 per cent level.

    You’ll note that some of the indicators show intersections – with versions of multiple indicators shown together. This allows you to look at, say, white students from a more deprived background. The denominator in the tooltip is the number of students in that population, not the number of students for whom data is available.

    [singles rank]
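
    If you’d rather roll your own ranked view from the underlying data, the sketch below shows the general shape of the exercise: compute each provider’s indicator value, attach a (normal-approximation) 95 per cent interval, and flag where the interval sits wholly above or below the sector average. The provider names and numbers are invented and the column names won’t match the OfS extract – it’s a sketch of the approach, not a reproduction of the dashboard.

    ```python
    import pandas as pd

    # Invented provider-level counts; in practice these would come from the OfS data extract.
    df = pd.DataFrame({
        "provider": ["A", "B", "C", "D"],
        "successes": [850, 420, 1380, 95],
        "denominator": [1000, 520, 1500, 130],
    })

    z = 1.96  # normal-approximation 95 per cent interval
    df["value"] = df["successes"] / df["denominator"]
    df["half_width"] = z * (df["value"] * (1 - df["value"]) / df["denominator"]) ** 0.5
    df["lower"] = df["value"] - df["half_width"]
    df["upper"] = df["value"] + df["half_width"]

    sector_average = df["successes"].sum() / df["denominator"].sum()

    df["vs_sector"] = "in line"
    df.loc[df["lower"] > sector_average, "vs_sector"] = "above"
    df.loc[df["upper"] < sector_average, "vs_sector"] = "below"

    ranked = df.sort_values("value", ascending=False)
    print(ranked[["provider", "value", "lower", "upper", "vs_sector"]])
    ```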

    I’ve also done a version allowing you to look at all single indicators at a provider level – which might help you to spot particular outliers that may need further analysis. Here, each mark is a split indicator (just the useful ones – I’ve omitted stuff like "POLAR quintiles 1, 2, 4, and 5", which is really only worth bothering with for gap analysis). You can select provider, mode, and level at the top and highlight a split group (eg "Age (broad)") or split (eg "Mature aged 21 and over").

    Note here that access refers to the proportion of all entrants from a given sub-group, so even though I’ve shown it on the same axis for the sake of space it shows a slightly different thing – the other lifecycle stages relate to a success (be that in continuation, progression or whatever) based on how OfS defines “success”.

    [singles provider]

    Oops upside your head

    As you’ve probably spotted from the first section, to really get things out of this data you need to compare splits with other relevant splits. We are talking, then, about gaps – on any of the lifecycle stages – between two groups of students. The classic example is the attainment gap between white and Black students, but you can have all kinds of gaps.

    This first one is across a single provider, and for the four lifecycle stages (this time, we don’t get access) you can select your indicator type and two indicators to get the gap between them (mode and level are at the bottom of the screen). When you set your two splits, the largest or most common group tends to be on indicator 1 – that’s just the way the data is designed.

    [gaps provider]
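
    If you want to reproduce a gap away from the dashboard, it is just the difference between two proportions, and a rough 95 per cent interval on that difference comes from the standard two-sample normal approximation. Again, a sketch only – the counts below are invented, and this isn’t necessarily the method behind OfS’s own significance flags.

    ```python
    from math import sqrt

    def gap_with_ci(s1: int, n1: int, s2: int, n2: int, z: float = 1.96):
        """Gap between two proportions (p1 - p2) with a normal-approximation 95% interval."""
        p1, p2 = s1 / n1, s2 / n2
        gap = p1 - p2
        se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return gap, gap - z * se, gap + z * se

    # Invented example: attainment for two splits at a single provider.
    gap, low, high = gap_with_ci(s1=780, n1=900, s2=95, n2=130)
    print(f"gap = {gap:+.1%}, 95% interval = ({low:+.1%}, {high:+.1%})")
    ```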

    For quick context you can search for "*" again on the provider name filter to get sector averages, but I’ve also built a sector ranking to help you put your performance in context with similar providers.

    This is like a cross between the single ranking and the provider-level gaps analysis – you just need to set the two splits in the same way.

    [gaps rank]

    Sign o’ the times

    The four-year aggregates are handy for most applications, but as you begin to drill in you are going to start wondering about individual years – are things getting gradually worse or gradually better? Here I’ve plotted all the individual-year data we get – which is, of course, different for each lifecycle stage (because of when data becomes available). This is at a provider level (filter on the top right) and I’ve included confidence intervals at 95 per cent in a lighter colour.

    [gaps provider timeseries]
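
    If you prefer to draw this sort of view yourself, the pattern is simple: plot the yearly indicator values as a line and the interval bounds as a lighter band behind them. A minimal matplotlib sketch, with invented yearly figures standing in for a real provider and split:

    ```python
    import matplotlib.pyplot as plt

    # Invented yearly indicator values and 95 per cent bounds for one provider/split.
    years = [2019, 2020, 2021, 2022, 2023]
    values = [0.78, 0.80, 0.79, 0.82, 0.83]
    lower = [0.74, 0.77, 0.75, 0.79, 0.80]
    upper = [0.82, 0.83, 0.83, 0.85, 0.86]

    fig, ax = plt.subplots()
    ax.fill_between(years, lower, upper, alpha=0.2, label="95% confidence interval")
    ax.plot(years, values, marker="o", label="indicator value")
    ax.set_xlabel("year")
    ax.set_ylabel("proportion")
    ax.legend()
    plt.show()
    ```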

    Source link