Category: Featured

  • We need new ways to protect academic freedom (opinion)

    Katherine Franke, formerly a law professor at Columbia University, is just the latest of many academics who have found themselves in hot water because of something they said outside the classroom. Others have been fired or resigned under pressure for what they posted online or said in other off-campus venues.

    In each of those cases, the “offending party” invoked academic freedom or freedom of speech as a defense to pressures brought on them, or procedures initiated against them, by university administrators. The traditional discourse of academic freedom or free speech on campus has focused on threats from inside the academy of the kind that led Franke and others to leave their positions.

    Today, threats to academic freedom and free speech are being mounted from the outside by governments or advocacy groups intent on policing colleges and universities and exposing what they see as a suffocating orthodoxy. As Darrell M. West wrote in 2022, “In recent years, we have seen a number of cases where political leaders upset about criticism have challenged professors and sought to intimidate them into silence.”

    We have seen this act before, and the record of universities is not pretty.

    During the 1940s and 1950s, an anticommunist crusade swept the nation, and universities were prime targets. In that period, “faculty and staff at institutions of higher learning across the country experienced increased scrutiny from college administrators and trustees, as well as Congress and the FBI, for their speech, their academic work, and their political activities.”

    And many universities put up no resistance.

    Today, some believe, as Nina Jankowicz puts it, that we are entering “an era of real censorship the likes of which the United States has never seen. How will universities respond?”

    If academic freedom and freedom of expression are to be meaningful, colleges and universities must not only resist the temptation to punish or purge people whose speech they and others may find offensive; they must provide new protections against external threats, especially when it comes to extramural speech by members of their faculties.

    They must become active protectors and allies of faculty who are targeted.

    As has long been recognized, academic freedom and free speech are not identical. In 2007, Rachel Levinson, then the AAUP senior counsel, wrote, “It can … be difficult to explain the distinction between ‘academic freedom’ and ‘free speech rights under the First Amendment’—two related but analytically distinct legal concepts.”

    Levinson explained, “Academic freedom … addresses rights within the educational contexts of teaching, learning, and research both in and outside the classroom.” Free speech requires that there be no regulation of expression on “all sorts of topics and in all sorts of settings.”

    Ten years after Levinson, Stanley Fish made a splash when he argued, “Freedom of speech is not an academic value.” As Fish explained, “Accuracy of speech is an academic value … [because of] the goal of academic inquiry: getting a matter of fact right.” Free speech, in contrast, means “something like a Hyde Park corner or a town-hall meeting where people take turns offering their opinions on pressing social matters.”

    But as Keith Whittington observes, the boundaries that Levinson and Fish think can be drawn between academic freedom and free speech are not always recognized, even by organizations like the AAUP. “In its foundational 1915 Declaration of Principles on Academic Freedom and Academic Tenure,” Whittington writes, “the AAUP asserted that academic freedom consists of three elements: freedom of research, freedom of teaching, and ‘freedom of extramural utterance and action.’”

    In 1940, Whittington explains, “the organization reemphasized its position that ‘when they speak or write as citizens,’ professors ‘should be free from institutional censorship or discipline.’”

    Like the AAUP, Whittington opposes “institutional censorship” for extramural speech. That is crucially important.

    But in the era in which academics now live and work, is it enough?

    We know that academics report a decrease in their sense of academic freedom. A fall 2024 survey by Inside Higher Ed found that 49 percent of professors experienced a decline over the prior year in their sense of academic freedom as it pertains to extramural speech.

    To foster academic freedom and free speech on campus or in the world beyond the campus, colleges and universities need to move from merely tolerating the expression of unpopular ideas to a more affirmative stance in which they take responsibility for fostering it. It is not enough to tell faculty that the university will respect academic freedom and free expression if they are afraid to exercise those very rights.

    Faculty may be fearful that saying the “wrong” thing will result in being ostracized or shunned. John Stuart Mill, one of the great advocates for free expression, warned about what he called “the tyranny of the prevailing opinion and feeling.” That tyranny could chill the expression of unpopular ideas.

    In 1952, during the McCarthy era, Supreme Court justice Felix Frankfurter also worried about efforts to intimidate academics that had “an unmistakable tendency to chill that free play of the spirit which all teachers ought especially to cultivate and practice.”

    Beyond the campus, faculty may rightly fear that if they say things that offend powerful people or government officials, they will be quickly caught up in an online frenzy or will be targeted. If they think their academic institutions will not have their back, they may choose the safety of silence over the risk of saying what they think.

    Whittington gets it right when he argues that “Colleges and universities should encourage faculty to bring their expertise to bear on matters of public concern and express their informed judgments to public audiences when doing so might be relevant to ongoing public debates.” The public interest is served when we “design institutions and practices that facilitate the diffusion of that knowledge.”

    Those institutions and practices need to be adapted to the political environment in which we live. That is why it is so important that colleges and universities examine their policies and practices and develop new ways of supporting their faculty if extramural speech gets them in trouble. This may mean providing financial resources as well as making public statements in defense of those faculty members.

    Colleges and universities should also consider making their legal counsel available to offer advice and representation and using whatever political influence they wield on behalf of a faculty member who is under attack.

    Without those things, academics may be “free from” the kind of university action that led Franke to leave Columbia but still not be “free to” use their academic freedom and right of free expression for the benefit of their students, their professions and the society at large.

    Austin Sarat is the William Nelson Cromwell Professor of Jurisprudence and Political Science at Amherst College.

    Source link

  • Making better decisions on student financial support

    By Peter Gray, Chief Executive and Chair of the JS Group.

    As the higher education sector starts to plan its next budget cycle, with many institutions needing to make savings, there is concern about the impact of any cuts on students and how they could negatively affect the university experience and student performance.

    Universities are bound to look at a range of options to save money, especially given the stormy operating context. But one less-often highlighted aspect of university finances is the cost (and benefit) of the additional financial support universities devote to many of their students. Through cash, vouchers and other means, many universities provide financial help to support students with the costs of living and learning.

    Using Universities UK’s annual sector figures as one indicator, roughly 5% of universities’ overall expenditure has gone towards financial support and outreach, equivalent to around £2.5 billion. Although some of this money will inevitably not go directly to students themselves, this is still a significant amount of spending.

    There are, naturally, competing tensions when it comes to considering any changes to targeted financial support. With significant financial pressures on students, exacerbated by the cost-of-living crisis, there is always a very justifiable case for more money. However, with the significant financial pressures universities are facing, there is an equally justifiable case to control costs to ensure financial sustainability. Every university has to manage this tension and trade-offs are inevitable when understanding just how much financial support to give and to whom.

    In many respects, the answers to those questions are partially governed by Access & Participation Plans, with the clear intention that these financial interventions really change student outcomes. However, properly measuring those outcomes is incredibly difficult without a much deeper understanding of student ‘need’ – and understanding these needs comes from being able to identify student spending behaviour (and often doing this in real-time).

    It always amazes me that some APPs state that financial support ‘has had a positive impact on retention’ while others conclude quite the opposite. I think part of this is a result of viewing financial support from the university end of the telescope rather than the student end.

    Understanding real and actual ‘need’ helps to change this. Knowing that certain groups across the sector (for example Asylum Seekers or Gypsy, Roma, Traveller, Showman and Boater students) will have similar needs would be helpful, and data really help here. Having, using, and sharing data will allow us to draw a bigger picture and better signpost where interventions are most effectively deployed, so that the particular groups of students who need support achieve the right outcomes.

    Technology is at hand to help: Open Banking (for example) is an incredible tool that can not only transform how financial support is delivered but also help to build an understanding of student behaviour.
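    To make that concrete, here is a minimal, purely illustrative sketch of the idea: consented transaction data (the kind Open Banking access can surface) grouped into spending categories and checked against a simple affordability rule. The categories, field names and threshold are assumptions for illustration, not any real Open Banking API or institutional policy.

    ```python
    # Illustrative sketch only: a toy view of how consented transaction data
    # (of the kind Open Banking access can provide) might be summarised to
    # flag students who may need support. Field names and the threshold are
    # hypothetical, not taken from any real Open Banking API or university system.
    from collections import defaultdict

    ESSENTIALS = {"rent", "groceries", "utilities", "transport"}

    def summarise(transactions):
        """Sum spending per category from a list of (category, amount) pairs."""
        totals = defaultdict(float)
        for category, amount in transactions:
            totals[category] += amount
        return totals

    def needs_review(transactions, monthly_income, threshold=0.8):
        """Flag a student if essential spending exceeds a share of their income."""
        totals = summarise(transactions)
        essential_spend = sum(v for k, v in totals.items() if k in ESSENTIALS)
        return essential_spend > threshold * monthly_income

    # Example: a student spending £560 of a £600 monthly income on essentials
    student_txns = [("rent", 400.0), ("groceries", 110.0), ("transport", 50.0), ("coffee", 12.0)]
    print(needs_review(student_txns, monthly_income=600.0))  # True -> worth a conversation
    ```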

    Lifting the bonnet and understanding behaviour poses additional questions, such as: When is the right time to give that support? And what form should that support take?

    I am a big proponent of providing financial support as soon as a student starts. When I talk to universities, however, it is clear that the data needed to identify particular groups of students are not readily available at the point of entry, so students’ needs go unmet. Giving a student financial support in December, when they needed it in September, is not delivering at the point of student need; it is delivering at the point where the university can identify the student. I think there is a growing body of evidence that suggests the large drop-off in students between September and December is, in part, because of this.

    Some universities in the sector give a small amount of support to all students at the start of the year, knowing that by doing so they will ensure that they can meet the immediate needs of some students. But clearly, some money must also go to those who do not necessarily need it.

    However, and this is where the maths comes in, if that investment keeps more students in need at university, then I would argue the return justifies the outlay. And the maths is simple: it really doesn’t take many additional students to stay to have a profoundly positive impact on university finances, so it is certainly worthy of consideration.
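    A rough, back-of-the-envelope sketch of that maths, using assumed figures (the cohort size, award, fee and duration below are illustrative, not JS Group or sector numbers):

    ```python
    # Back-of-the-envelope sketch of the retention argument, with illustrative
    # numbers only (tuition fee, cohort size and award are assumptions).
    cohort_size = 5000          # first-year students
    universal_award = 100       # £ given to every starter in September
    annual_fee = 9250           # £ tuition income per student per year
    remaining_years = 2.5       # average fee income still to come if a student stays

    outlay = cohort_size * universal_award                      # £500,000
    break_even_students = outlay / (annual_fee * remaining_years)
    print(f"Outlay: £{outlay:,}")
    print(f"Extra students who need to stay to break even: {break_even_students:.1f}")
    # ~21.6 students out of 5,000 - a retention improvement of well under 1%
    ```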

    To me, this is about using financial support to drive the ultimate goal of improving student outcomes, especially the retention of students between September and December, which is when the first return is made, where the largest withdrawal is seen and where the least amount of financial support is given.

    As to the nature or format of support: of course, in most cases, it is easier to provide cash. However, again, this is about your investment in your student, and, for example, if you have students on a course with higher material and resource costs, or students who are commuting, then there is an argument to consider more in-kind support and using data to support that decision.

    Again, I am not a proponent of a ‘one size fits all’ approach. Understanding student need is complex, but solutions are out there. It is important to work together to identify patterns of real student need and understand the benefits of doing so.

    My knowledge draws on JS Group’s data, based on the direct use of £40 million of specialist student financial support to more than 160,000 students across 30 UK universities in the last full academic cycle.

    I have also looked at the student views on such funding and there is an emerging picture that connects student financial support with continuation, participation and progress. A summary of student feedback is here: https://jsgroup.co.uk/news-and-views/news/student-feedback-report-january-2025/

    The real positive of this is that everyone wants the same goal: for fewer students to withdraw from their courses and for those students to thrive at university and be successful. We need to widen the debate on how financial support is delivered, when, and in what format to draw together a better collective understanding of student need and behaviour to achieve that goal.

    Source link

  • A dismal report card in math and reading

    The kids are not bouncing back. 

    The results of a major national test released Wednesday showed that in 2024, reading and math skills of fourth and eighth grade students were still significantly below those of students in 2019, the last administration of the test before the pandemic. In reading, students slid below the devastatingly low achievement levels of 2022, which many educators had hoped would be a nadir. 

    The test, the National Assessment of Educational Progress (NAEP), is often called the nation’s report card. Administered by the federal government, it tracks student performance in fourth and eighth grades and serves as a national yardstick of achievement. Scores for the nation’s lowest-performing students were worse in both reading and math than those of students two years ago. The only bright spot was progress by higher-achieving children in math. 

    The NAEP report offers no explanation for why students are faltering, and the results were especially disappointing after the federal government gave schools $190 billion to aid in pandemic recovery. 

    “These 2024 results clearly show that students are not where they need to be or where we want them to be,” said Peggy Carr, the commissioner of the National Center for Education Statistics, in a briefing with journalists. 

    More than 450,000 fourth and eighth graders, selected to be representative of the U.S. population, took the biennial reading and math tests between January and March of 2024. 

    Depressed student achievement was pervasive across the country, regardless of state policies or instructional mandates. Student performance in every state remained below what it was in 2019 on at least one of the four reading or math tests. In addition to state and national results, the NAEP report also lists the academic performance for 26 large cities that volunteer for extra testing.

    An ever-widening gap

    The results also highlighted the sharp divergence between higher- and lower-achieving students. The modest progress in fourth grade math was entirely driven by high-achieving students. And the deterioration in both fourth and eighth grade reading was driven by declines among low-achieving students. 

    “Certainly the most striking thing in the results is the increase in inequality,” said Martin West, a professor of education at Harvard University and vice chair of the National Assessment Governing Board, which oversees the NAEP test. “That’s a big deal. It’s something that we hadn’t paid a lot of attention to traditionally.”

    The starkest example of growing inequality is in eighth grade math, where the achievement gap grew to the largest in the history of the test.

    Source: NAEP 2024

    The chart above shows that the math scores of all eighth graders fell between 2019 and 2022. Afterward, high-achieving students in the top 10 percent and 25 percent of the nation (labeled as the 90th and 75th percentiles above) began to improve, recovering about a quarter of the setbacks for high achievers during the pandemic. That’s still far behind high-performing eighth graders in 2019, but at least it’s a positive trend. 

    The more disturbing result is the continuing deterioration of scores by low-performing students in the bottom 10 percent and 25 percent. The huge pandemic learning losses for students in the bottom 10 percent grew 70 percent larger between 2022 and 2024. Learning losses for students in the bottom 25 percent grew 25 percent larger.

    “The rich get richer and the poor are getting shafted,” said Scott Marion, who serves on the NAEP’s governing board and is the executive director of the National Center for the Improvement of Educational Assessment, a nonprofit consultancy. “It’s almost criminal.”

    More than two-thirds of students in the bottom 25 percent are economically disadvantaged. A quarter of these low performers are white and another quarter are Black. More than 40 percent are Hispanic. A third of these students have a disability and a quarter are classified as English learners. 

    By contrast, fewer than a quarter of the students in the top 25 percent are economically disadvantaged. They are disproportionately white (61 percent) and Asian American (14 percent). Only 5 percent are Black and 15 percent are Hispanic. Three percent or fewer of students at the top have a disability or are classified as English learners.  

    Related: Six puzzling questions from the disastrous [2022] NAEP results

    Although average math scores among all eighth grade students were unchanged between 2022 and 2024, that average masks the improvements at the top and the deterioration at the bottom. They offset each other. 

    The NAEP test does not track individual students. The eighth graders who took the exam in 2024 were a different group of students than the eighth graders who took the exam in 2022 and who are now older. Individual students have certainly learned new skills since 2019. When NAEP scores drop, it’s not that students have regressed and cannot do things they used to be able to do. It means that they’re learning less each year. Kids today aren’t able to read or solve math problems as well as kids their same age in the past.

    Students who were in eighth grade in early 2024, when this exam was administered, were in fourth grade when the pandemic first shuttered schools in March 2020. Their fifth grade year, when students should have learned how to add fractions and round decimals, was profoundly disrupted. School days began returning to normal during their sixth and seventh grade years. 

    Harvard’s West explained that it was incorrect to assume that children could bounce back academically. That would require students to learn more in a year than they historically have, even during the best of times.

    “There’s nothing in the science of learning and development that would lead us to expect students to learn at a faster rate after they’ve experienced disruption and setbacks,” West said. “Absent a massive effort society-wide to address the challenge, and I just haven’t seen an effort on the scale that I think would be needed, we shouldn’t expect more positive results.”

    Learning loss is like a retirement savings shortfall

    Learning isn’t like physical exercise, West said. When our conditioning deteriorates after an injury, the first workouts might be a grind but we can get back to our pre-injury fitness level relatively quickly. 

    “The better metaphor is saving for retirement,” said West. “If you miss a deposit into your account because of a short-term emergency, you have to find a way to make up that shortfall, and you have to make it up with interest.”
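    A quick illustration of why the metaphor bites, using assumed numbers (the deposit, return rate and time horizon below are purely illustrative):

    ```python
    # Illustrative arithmetic for the "missed deposit" metaphor (numbers assumed).
    # A missed contribution does not just leave a fixed gap: the shortfall
    # compounds, so catching up later costs more than the original deposit.
    missed_deposit = 1000      # contribution skipped during the emergency year
    annual_return = 0.05       # assumed growth rate
    years_to_retirement = 20

    shortfall_at_retirement = missed_deposit * (1 + annual_return) ** years_to_retirement
    print(f"${shortfall_at_retirement:,.0f}")  # ~$2,653 - the gap has more than doubled
    ```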

    What we may be seeing now are the enduring consequences of gaps in basic skills. As the gaps accumulate, it becomes harder and harder for students to keep up with grade-level content. 

    Another factor weighing down student achievement is rampant absenteeism. In survey questions that accompany the test, students reported attending school slightly more often than they had in 2022, but still far below their 2019 attendance rates. Eleven percent of eighth graders said that they had missed five or more school days in the past month, down from 16 percent in 2022, but still far more than the 7 percent of students who missed that much school in 2019. 

    “We also see that lower-performing readers aren’t coming to school,” said NCES Commissioner Carr. “There’s a strong relationship between absenteeism and performance in these data that we’re looking at today.”

    Eighth graders by the number of days they said they were absent from school in the previous month 

    Source: NAEP 2024

    Fourth grade math results were more hopeful. Top-performing children fully recovered back to 2019 achievement levels and can do math about as well as their previous peers. However, lower-performing children in the bottom 10 percent and 25 percent did not rebound at all. Their scores were unchanged between 2022 and 2024. These students were in kindergarten when the pandemic first hit in 2020 and missed basic instruction in counting and arithmetic.

    Reading scores showed a similar divergence between high- and low-achievers.

    Source: NAEP 2024

    The chart above shows that the highest-performing eighth graders failed to catch up to what high-achieving eighth graders used to be able to do on reading comprehension tests. But it’s not a giant difference. What’s startling is the steep decline in reading scores for low-achieving students. The pandemic drops have now doubled in size. Reading comprehension is much, much worse for many middle schoolers. 

    It’s difficult to say how much of this deterioration is pandemic related. Reading comprehension scores for middle schoolers had been declining for a decade since 2013. Separate surveys show that students are reading less for pleasure, and many educators speculate that cellphone use has replaced reading time.

    Related: Why reading comprehension is deteriorating

    The biggest surprise was fourth grade reading. Over the past decade, a majority of states have passed new “science of reading” laws or implemented policies that emphasize phonics in classrooms. There have been reports of improved reading performance in Mississippi, Florida, Tennessee and elsewhere. But scores for most fourth graders, from the highest to the lowest achievers, have deteriorated since 2022. 

    One possibility, said Harvard’s West, is that it’s “premature” to see the benefits of improved instruction, which could take years.  Another possibility, according to assessment expert Marion, is that being able to read words is important, but it’s not enough to do well on the NAEP, which is a test of comprehension. More elementary school students may be better at decoding words, but they have to make sense of those words to do well on the NAEP. 

    Carr cited the example of Louisiana as proof that it is possible to turn things around. The state exceeded its 2019 achievement levels in fourth grade reading. “They did focus heavily on the science of reading but they didn’t start yesterday,” said Carr. “I wouldn’t say that hope is lost.”

    More students fall below the lowest “basic” level 

    The results show that many more children lack even the most basic skills. In math, 24 percent of fourth graders and 39 percent of eighth graders cannot reach the lowest of three achievement levels, called “basic.” (The others are “proficient” and “advanced.”) These are fourth graders who cannot locate whole numbers on a number line or eighth graders who cannot understand scientific notation. 

    The share of students reading below basic was the highest it’s ever been for eighth graders, and the highest in 20 years for fourth graders. Forty percent of fourth graders cannot put events from a story into sequential order, and one third of eighth graders cannot determine the meaning of a word in the context of a reading passage. 

    “To me, this is the most pressing challenge facing American education,” said West.

    Contact staff writer Jill Barshay at 212-678-3595 or barshay@hechingerreport.org.

    This story about the 2024 NAEP test was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for  Hechinger newsletters.


    Source link

  • Five-Minute Starts: Fifteen Ideas to Ignite Your Class – Faculty Focus

    Source link

  • HEDx Podcast: Professor Genevieve Bell on AI – Episode 151

    Professor Genevieve Bell is vice-chancellor and president of the Australian National University.

    In this episode, she reflects on her journey as a scientist, engineer and humanist in the United States and Australia. The professor shares lessons learned in Silicon Valley and leading Australia’s national university.

    Professor Bell also identifies short-term challenges and the long-term trajectory of higher education, specifically in relation to technology and AI.


    Source link

  • Monash underpays $7.6m as ‘expert council’ on uni governance members announced

    CEDA CEO Melinda Cilento interviewing Prime Minister Anthony Albanese in August last year. Picture: Irene Dowdy

    The members who will sit on the council overseeing university governance and advising government on “universities being good employers” have been announced.


    Source link

  • Ministerial Direction 111: What you need to know

    Jason Clare implemented the direction after his Bill was voted down by the Coalition and Greens. Picture: Brett Hartwig

    Ministerial Direction 111 (MD111) is the new way of processing international student visa applications and has replaced Ministerial Direction 107. It came into effect on December 19, 2024.


    Source link

  • HESA Spring 2025: staff | Wonkhe

    HESA Spring 2025 kicks off in earnest with a full release of the staff data for 2023-24.

    Unlike in previous years, there’s been no early release of the headlines – the statistics release (which provides an overview at sector level) and the full data release (which offers detail at provider level) have both turned up on the same day.

    Staff data has, in previous years, generally been less volatile than student data. Whereas recruitment can and does lurch around alarmingly based on strategic priorities, government vacillation about student visas, and the vagaries of the student market, staff employment tends to have a merciful degree of permanency. Even if it isn’t the same staff working under the same terms and conditions, the sector does tend to need broadly the same number of people.

    With the increasing financial pressures felt by universities you would expect 2023-24 to be a deviation from this norm.

    Starters and leavers

    We’ll start by looking at the numbers of starters and leavers from each provider. This chart shows the change in academic staff numbers year on year between your chosen year and the year before (as the thick bars) and the total number of full and part time staff in the year of your choice (as the thin bars). Over on the other side of the visualisation under the controls you can see total staff numbers, broken down into full and part time as a time series – mouse over a provider on the main chart to change the provider focus here. You can filter by year, and (for the main chart) mode of employment.

    [Full screen]
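    For a sense of the calculation behind the chart, here is a minimal sketch of the year-on-year change computation using made-up figures – the column names and numbers are placeholders, not the actual HESA open data fields.

    ```python
    # Rough sketch of the calculation behind the chart: year-on-year change in
    # academic staff per provider. Column names and values are placeholders,
    # not the real HESA open data fields or figures.
    import pandas as pd

    staff = pd.DataFrame({
        "provider":      ["A", "A", "B", "B"],
        "academic_year": ["2022/23", "2023/24", "2022/23", "2023/24"],
        "academic_fte":  [1200, 1150, 800, 865],
    })

    pivot = staff.pivot(index="provider", columns="academic_year", values="academic_fte")
    pivot["change"] = pivot["2023/24"] - pivot["2022/23"]
    print(pivot)
    # Provider A is down 50, provider B is up 65 - the "thick bars" in the chart.
    ```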

    What’s apparent is that across quite a lot of the sector academic staff numbers didn’t change that much. There were some outliers at both ends – Coventry University had 585 fewer academic staff in 2023-24 than in 2022-23, while Cardiff University had 565 more (yes, the same Cardiff University that confirmed plans for 400 full time redundancies yesterday).

    If you’ve been following sector news this may surprise you – last year saw many providers announce voluntary or compulsory redundancies. The Queen Mary University of London UCU branch has been tracking these announcements over time.

    Schemes like this take time for a university to run – there is a mandatory consultation period, followed (hopefully) by some finessing of the scheme and then negotiations with individual staff members. It is not a way to make a quick, in-year saving. Oftentimes the original announcement cites a far higher number of staff redundancies than actually ends up happening.

    Subject level

    If you work in a university or other higher education provider, you’ll know that stuff like this very often happens across particular departments and faculties rather than the whole university. I can’t offer you faculty-level detail from public data, but there is data available by cost centre.

    [Full screen]

    Cost centres are usually used in financial data, and do not cleanly map to visible structures within universities. Here you can select a provider and choose between cost centre groups and cost centres as two levels of detail. I’ve added an option to select contract type – in the main I suggest you leave this as academic (excluding atypical).

    Zero hours

    I’m sure I say this every year, but not all providers return data for non-academic staff (in England they are not required to), and an “atypical” contract usually refers to a very short period of work (a single guest lecture or suchlike). There is a pervasive myth that these are “zero hours” contracts – even though HESA publishes data on these separately:

    Here’s a chart showing the terms of employment and pay arrangements related to zero hours contracts for 2023-24. You can see the majority of these are academic in nature, with a roughly even split between fixed term and open-ended terms. The majority (around 4,075) are paid by the hour.

    [Full screen]

    This represents a small year-on-year growth in the use of this kind of contract – in 2022-23, there were 3,915 academic staff on a zero hours contract, making this a rise of around four per cent.

    Subject, age, and pay

    I often wonder about the conditions of academic staff across subject areas, and how this pertains to the age of the academics involved and how much they are paid. This visualisation allows us to view age against salary (relating to groups of spine points on the standard New JNCHES pay scale used in most larger providers).

    [Full screen]

    As you’d expect, overall there is a positive correlation between age and salary – if you are an older academic you are likely to be paid more. This is particularly pronounced in design, creative, and performing arts, where staff are likely to be older and better paid on average. Compare the physical sciences, where more staff are younger and spine points are lower.

    This chart allows you to select a cost centre (either a group or individual cost centre), and filter by academic employment function (teaching, research, both…) and contract level (senior academics and professors, others…). There’s a range of years on offer as well.

    Ethnicity

    The main news stories that tend to come out of this release relate to academic staff characteristics, and specifically the low number of Black professors. There is some positive movement on that front this year, though the sector at that level is in no way representative of staff as a whole, the student body, or wider society.

    [Full screen]

    Source link

  • U of Idaho President Seems To Temper His Cheerleading for U of Phoenix Purchase (David Halperin)

    In testimony Monday before a joint committee of the Idaho legislature, University of Idaho president C. Scott Green seemed a little less committed to the deal he has relentlessly touted for more than a year and a half — for his school to buy, for $685 million, the huge for-profit University of Phoenix from private equity giant Apollo Global Management.

    According to Idaho Education News, Green said the next move was Apollo’s. “We’re waiting to hear what they would like to do,” Green said.

    Green’s plan has been thwarted again and again, with negative votes in the Idaho legislature, a successful court challenge by the state’s attorney general, criticism from the state treasurer, and sharp scrutiny from news outlets in the state.

    The Green school deal has assumed that operation of Phoenix would bring millions in new revenue to fund his university. But it ignores that running a for-profit college, one that has repeatedly gotten in trouble with law enforcement, would be a tremendous challenge: If Green pushed to end Phoenix’s predatory practices and improve student outcomes, it probably would start losing money, because predatory practices, coupled with high prices and low spending on education, have made up the school’s secret sauce. But if Green allowed the deceptive conduct to persist, the school could face more legal peril. And, whatever route he took, Green’s school might end up assuming massive liability for student loan debt the government has cancelled based on past abuses at Phoenix.

    At its peak, Phoenix was the largest for-profit college in the country and got upwards of $2 billion a year in federal student aid, while boasting dismal graduation rates and high levels of loan defaults.

    Last summer, the University of Idaho and Apollo agreed to a one-year extension of their purchase deal. That arrangement expires June 10. Meanwhile Apollo has the right to talk with other potential buyers.

    Apollo already has sent Idaho $5 million to cover the school’s high-priced legal and consulting fees in connection with the deal, and it has agreed to pay up to $20 million to Idaho if the deal falls through.

    Green told the legislature that $20 million would cover his school’s costs with perhaps $2 million to $3 million to spare. “I think we’re well-protected,” he boasted.

    Kind of. Green, whose background is in corporate management and finance, could potentially walk away without losing money for the school. But he has tied up state university, executive, legislative, and judicial resources for many hundreds of hours jousting over an effort that would keep alive a predatory school that has buried thousands of graduates in debt they can’t afford to repay, while wasting billions in federal taxpayer dollars, when that time could have been focused on the real challenges of state higher education.

    If Idaho can’t work out a deal, Apollo may run out of options to dump the school, and this taxpayer-funded multi-billion dollar disgrace may at last be put down.

    [Editor’s note: This article originally appeared on Republic Report.] 

    Source link

  • Data futures, reviewed | Wonkhe

    As a sector, we should really have a handle on how many students we have and what they are like.

    Data Futures – the multi-year programme that was designed to modernise the collection of student data – has become, among higher education data professionals, a byword for delays, stress, and mixed messages.

    It was designed to deliver in-year data (so 2024-25 data arriving within the 2024-25 academic year) three times a year, drive efficiency in data collection (by allowing for process streamlining and automation), and remove “data duplication” (becoming a single collection that could be used for multiple purposes by statutory customers and others). To date it has achieved none of these benefits, and has instead (for 2022-23 data) driven one of the sector’s most fundamental pieces of data infrastructure into such chaos that all forward uses of data require heavy caveats.

    The problem with the future

    In short – after seven years of work (at the point the review was first mooted), and substantial investment, we are left with more problems than we started with. Most commentary has focused on four key difficulties:

    • The development of the data collection platform, starting with Civica in 2016 and later taken over by Jisc, has been fraught with difficulties, frequently delayed, and experienced numerous changes in scope
    • The documentation and user experience of the data collection platform has been lacking. Rapid changes to the platform have not been matched by updated documentation for those who use it within providers, or for those who support those providers (the HESA Liaison team). The error handling and automated quality rules have caused particular issues – indeed the current iteration of the platform still struggles with fields that require responses involving decimal fractions.
    • The behaviour of some statutory customers – frequently modifying requirements, changing deadlines, and putting unhelpful regulatory pressure on providers – has not helped matters.
    • The preparedness of the sector has been inconsistent between providers and between software vendors. This level of preparedness has not been fully understood – in part because of a nervousness among providers around regulatory consequences for late submissions.

    These four interlinked strands have been exacerbated by an underlying fifth issue:

    • The quality of programme management, programme delivery, and programme documentation has not been of the standards required for a major infrastructure project. Parts of this have been due to problems in staffing, and problems in programme governance – but there are also reasonable questions to be asked about the underlying programme management process.

    Decisions to be made

    An independent review was originally announced in November 2023, overlapping a parallel internal Jisc investigation. The results we have may not be timely – the review didn’t even appear to start until early 2024 – but even the final report merely represents a starting point for some of the fundamental discussions that need to happen about sector data.

    I say a “starting point” because many of the issues raised by the review concern decisions about the projected benefits of doing data futures. As none of the original benefits of the programme have been realised in any meaningful way, the future of the programme (if it has one) needs to be focused on what people actually want to see happen.

    The headline is in-year data collection. To the external observer, it is embarrassing that other parts of the education sector can return data on a near-real-time basis while higher education cannot – universities update the records they hold on students on a regular basis, so it should not be impossible to update external data too. It should not come as a surprise, then, that the review poses the question:

    As a priority, following completion of the 2023-24 data collection, the Statutory Customers (with the help of Jisc) should revisit the initial statement of benefits… in order to ascertain whether a move to in-year data collection is a critical dependent in order to deliver on the benefits of the data futures programme.

    This isn’t just an opportunity for regulators to consider their shopping list – a decision to continue needs to be swiftly followed by a cost-benefit analysis, reassessing the value of in-year collection and determining whether or when to pursue in-year collection. And the decision is that there will, one day, be in-year student data. In a joint statement the four statutory customers said:

    After careful consideration, we intend to take forward the collection of in-year student data

    highlighting the need for data to contribute to “robust and timely regulation”, and reminding institutions that they will need “adequate systems in place to record and submit student data on time”.

    The bit that interests me here is the implications for programme management.

    Managing successful programmes

    If you look at the government’s recent record in delivering large and complex programmes you may be surprised to learn of the existence of a Government Functional Standard covering portfolio, programme, and project management. What’s a programme? Well:

    A programme is a unique, temporary, flexible organisation created to co-ordinate, direct and oversee the implementation of a set of projects and other related work components to deliver outcomes and benefits related to a set of strategic objectives

    Language like this, and the concepts underpinning it, come from what remains the gold standard programme management methodology, Managing Successful Programmes (MSP). If you are more familiar with the world of project management (project: “a unique temporary management environment, undertaken in stages, created for the purpose of delivering one or more business products or outcomes”) it bears a familial resemblance to PRINCE2.

    If you do manage projects for a living, you might be wondering where I have been for the last decade or so. The cool kids these days are into a suite of methodologies that come under the general description of “agile” – PRINCE2 is now seen primarily as a cautionary tale: a “waterfall” (top down, documentation centred, deadline focused) management practice rather than an “iterative” (emergent, development centred, short term) one.

    Each approach has strengths and weaknesses. Waterfall methods are great if you want to develop something that meets a clearly defined need against clear milestones and a well understood specification. Agile methods are a nice way to avoid writing reports and updating documentation.

    Data futures as a case study

    In the real world, the distinction is less clear cut. Most large programmes in the public sector use elements of waterfall methods (regular project reports, milestones, risk and benefits management, senior responsible owners, formal governance) as a scaffold in which sit agile elements at a more junior level (short development cycle, regular “releases” of “product” prioritised above documentation). While this can be done well it is very easy for the two ideologically separate approaches to drift apart – and it doesn’t take much to read this into what the independent review of data futures reveals.

    Recommendation B1 calls, essentially, for clarity:

    • Clarity of roles and responsibilities
    • Clarity of purpose for the programme
    • Clarity on the timetable, and on how and when the scope of the programme can be changed

    This is amplified by recommendation C1, which looks for specific clarifications around “benefits realisation” – which itself underpins the central recommendation relating to in-year data.

    In classic programme management (like MSP) the business case will include a map of programme benefits: that is, all of the good things that will come about as a result of the hard work of the programme. Like the business case’s risk register (a list of all the bad things that might happen and what can be done if they did) it is supposed to be regularly updated and signed off by the Programme Board – which is made up of the most senior staff responsible for the work of the programme (the Senior Responsible Owners, in the lingo).

    The statement of benefits languished for some time without a full update (there was an incomplete attempt in February 2023, and a promise to make another one after the completed 2022-23 collection – we are not told whether that second update happened). In proper, grown-up, programme management this is supposed to be done in a systematic way: at every programme board meeting you review the benefits and the risk register. It’s dull (most of the time!) but it is important. The board needs an eye on whether the programme still offers value overall (based on an analysis of projected benefits). And if the scope needed to change, the board would have final say on that.

    The issue with Data Futures was clarity over whether this level of governance actually had the power to do these things, and – if not – who was actually doing them. The Office for Students latterly put together quite a complex and unwieldy governance structure, with a quarterly review board having oversight of the main programme board. This QRB was made up of very senior staff at the statutory customers (OfS, HEFCW, SFC, DoE(NI)), Jisc, and HESA (plus one Margaret Monckton – now chair of this independent review! – as an external voice).

    The QRB oversaw the work of the programme board – meaning that decisions made by the senior staff nominally responsible for the direction of the programme were often second guessed by their direct line managers. The programme board was supposed to have its own assurance function and an independent observer – it did not (despite the budget being there for it).

    Stop and go

    Another role of the board is to make what are more generally called “stop-go” decisions, here described as “approval to proceed”. This is an important way of making sure the programme is still on track – you’d set (in advance) the criteria that needed to be fulfilled in terms of delivery (was the platform ready, had the testing been done) before you moved on to the next work package. Below this, incremental approvals are made by line managers or senior staff as required, but reported upwards to the board.

    What seems to have happened a lot in the Data Futures programme is the use of what are called conditional approvals – where some of these conditions were waived based on assurances that the remaining required work would be completed. This is fine as far as it goes (not everything lines up all the time) but as the report notes:

    While the conditions of the approvals were tracked in subsequent increment approval documents, they were not given a deadline, assignee or accountable owner for the conditions. Furthermore, there were cases where conditions were not met by the time of the subsequent approval

    Why would you do that? Well, you’d be tempted if you had another board above you – comprising very senior staff and key statutory customers – concerned about the very public problems with Data Futures and looking for progress. The Quarterly Review Board (QRB), as it turned out, only actually ended up making five decisions (and in three of these cases it just punted the issue back down to the programme board – the other two, for completists, were to delay plans for in-year collection).

    What it was meant to be doing was “providing assurance on progress”, “acting as an escalation point” and “approving external assurance activities”. As we’ve already seen, it didn’t really bother with external assurance. And on the other points the review is damning:

    From the minutes provided, the extent to which the members of the QRG actively challenged the programme’s progress and performance in the forum appears to be limited. There was not a clear delegation of responsibilities between the QRG, Programme Board and other stakeholders. In practice, there was a lack of clarity also on the role of the Data Futures governance structure and the role of the Statutory Customers separately to the Data Futures governance structure; some decisions around the data specification were taken outside of the governance structure.

    Little wonder that the section concludes:

    Overall, the Programme Board and QRG were unable to gain an independent, unbiased view on the progress and success of the project. If independent project assurance had been in place throughout the Data Futures project, this would have supported members of the Programme Board in oversight of progress and issues may have been raised and resolved sooner

    Resourcing issues

    Jisc, as developer, took on responsibility for technical delivery in late 2019. Incredibly, Jisc was not provided with funding to do this work until March 2020.

    As luck would have it, March 2020 saw the onset of a series of lockdowns and a huge upswing in demand for the kind of technical and data skills needed to deliver a programme like data futures. Jisc struggled to fill key posts, most notably running for a substantive period of time without a testing lead in post.

    If you think back to the 2022-23 collection, the accepted explanation around the sector for what – at heart – had gone wrong was a failure to test “edge cases”. Students, it turns out, are complex and unpredictable things – with combinations of characteristics and registrations that you might not expect to find. A properly managed programme of testing would have focused on these edge cases – there would have been fewer issues when the collection went live.
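    As a purely illustrative sketch of what that kind of testing looks like in practice – the record fields and the validate() routine below are hypothetical, not the actual Data Futures quality rules:

    ```python
    # A sketch (not the actual Data Futures quality rules) of edge-case testing
    # for a student data validation routine. Record shapes and validate() are
    # hypothetical, for illustration only.
    import pytest

    def validate(record):
        """Toy validator: the end date (if present) must follow the start date,
        and the fee must be a non-negative number."""
        errors = []
        if record["end_date"] is not None and record["end_date"] < record["start_date"]:
            errors.append("end before start")
        if record["fee"] is None or record["fee"] < 0:
            errors.append("bad fee")
        return errors

    EDGE_CASES = [
        # dormant student, no end date yet
        ({"start_date": "2022-09-19", "end_date": None, "fee": 9250.0}, []),
        # transferred mid-year with a pro-rata (fractional) fee
        ({"start_date": "2022-09-19", "end_date": "2023-01-13", "fee": 3083.33}, []),
        # data entry error: withdrawal recorded before enrolment
        ({"start_date": "2022-09-19", "end_date": "2022-09-01", "fee": 9250.0}, ["end before start"]),
    ]

    @pytest.mark.parametrize("record,expected", EDGE_CASES)
    def test_edge_cases(record, expected):
        assert validate(record) == expected
    ```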

    Underresourcing and understaffing are problems in their own right, but these were exacerbated by rapidly changing data model requirements, largely coming from statutory customers.

    To quote the detail from the report:

    The expected model for data collection under the Data Futures Programme has changed repeatedly and extensively, with ongoing changes over several years on the detail of the data model as well as the nature of collection and the planned number of in-year collections. Prior to 2020, these changes were driven by challenges with the initial implementation. The initial data model developed was changed substantially due to technical challenges after a number of institutions had expended significant time and resource working to develop and implement it. Since 2020, these changes were made to reflect evolving requirements of the return from Statutory Customers, ongoing enhancements to the data model and data specification and significantly, the ongoing development of quality rules and necessary technical changes determined as a result of bugs identified after the return had ‘gone live’. These changes have caused substantial challenges to delivery of the Data Futures Programme – specifically reducing sector confidence and engagement as well as resulting in a compressed timeline for software development.

    Sector readiness

    It’s not enough to conjure up a new data specification and platform – it is hugely important to be sure that your key people (“operational contacts”) within the universities and colleges that would be submitting data are ready.

    On a high level, this did happen – there were numerous surveys of provider readiness, and the programme also worked with the small number of software vendors that supply student information systems to the sector. This formal programme communication came alongside the more established links between the sector and the HESA Liaison team.

    However, such was the level of mistrust between universities and the Office for Students (who could technically have found struggling providers in breach of condition of registration F4), that it is widely understood that answers to these surveys were less than honest. As the report says:

    Institutions did not feel like they could answer the surveys honestly, especially in instances where the institution was not on track to submit data in line with the reporting requirements, due to the outputs of the surveys being accessible to regulators/funders and concerns about additional regulatory burden as a result.

    The decision to scrap a planned mandatory trial of the platform, made in March 2022 by the Quarterly Review Group, was ostensibly made to reduce burden – but, coupled with the unreliable survey responses, this meant that HESA was unable to identify cases where support was needed.

    This is precisely the kind of risk that should have been escalated to programme board level – a lack of transparency between Jisc and the board about readiness made it harder to take strategic actions on the basis of evidence about where the sector really was. And the issue continued into live collection – because Liaison were not made aware of common problems (“known issues”, in fact) the team often struggled with out-of-date documentation: meaning that providers got conflicting messages from different parts of Jisc.

    Liaison, for their part, dealt with more than 39,000 messages between October and December 2023 (during the peak of issues raised during the collection process) – even given the problems noted above they resolved 61 per cent of queries on the first try. Given the level of stress in the sector (queries came in at all hours of the day) and the longstanding and special relationship that data professionals have with HESA Liaison, you could hardly criticise that team for making the best of a near-impossible situation.

    I am glad to see that the review notes:

    The need for additional staff, late working hours, and the pressure of user acceptance testing highlights the hidden costs and stress associated with the programme, both at institutions and at Jisc. Several institutions talked about teams not being able to take holidays over the summer period due to the volume of work to be delivered. Many of the institutions we spoke to indicated that members of their team had chosen to move into other roles at the institution, leave the sector altogether, experienced long term sickness absence or retired early as a result of their experiences, and whilst difficult to quantify, this will have a long-term impact on the sector’s capabilities in this complex and fairly niche area.

    Anyone who was even tangentially involved in the 2022-23 collection, or attended the “Data Futures Redux” session at the Festival of Higher Education last year, will find those words familiar.

    Moving forward

    The decision on in-year data has been made – it will not happen before the 2026-27 academic year, but it will happen. The programme delivery and governance will need to improve, and there are numerous detailed recommendations to that end: we should expect more detail and the timeline to follow.

    It does look as though there will be more changes to the data model to come – though the recommendation is that this should be frozen 18 months before the start of data collection, which by my reckoning would mean a confirmed data model printed out and on the walls of SROC members in the spring of 2026. A subset of institutions would make an early in-year submission, which may not be published to “allow for lower than ideal data quality”.

    On arrangements for collections for 2024-25 and 2025-26 there are no firm recommendations – it is hoped that data model changes will be minimal and that the time will be used to ensure that the sector and Jisc are genuinely ready for the advent of the data future.

    Source link