  • DOGE Education Cuts Hit Students with Disabilities, Literacy Research – The 74



    When teens and young adults with disabilities in California’s Poway Unified School District heard about a new opportunity to get extra help planning for life after high school, nearly every eligible student signed up.

    The program, known as Charting My Path for Future Success, aimed to fill a major gap in education research about what kinds of support give students nearing graduation the best shot at living independently, finding work, or continuing their studies.

    Students with disabilities finish college at much lower rates than their non-disabled peers, and often struggle to tap into state employment programs for adults with disabilities, said Stacey McCrath-Smith, a director of special education at Poway Unified, which had 135 students participating in the program. So the extra help, which included learning how to track goals on a tool designed for high schoolers with disabilities, was much needed.

    Charting My Path launched earlier this school year in Poway Unified and 12 other school districts. The salaries of 61 school staff nationwide, and the training they received to work with nearly 1,100 high schoolers with disabilities for a year and a half, were paid for by the U.S. Department of Education.

    Jessie Damroth’s 17-year-old son Logan, who has autism, attention deficit hyperactivity disorder, and other medical needs, had attended classes and met with his mentor through the program at Newton Public Schools in Massachusetts for a month. For the first time, he was talking excitedly about career options in science and what he might study at college.

    “He was starting to talk about what his path would look like,” Damroth said. “It was exciting to hear him get really excited about these opportunities. … He needed that extra support to really reinforce that he could do this.”

    Then the Trump administration pulled the plug.

    Charting My Path was among more than 200 Education Department contracts and grants terminated over the last two weeks by the Trump administration’s U.S. DOGE Service. DOGE has slashed spending it deemed to be wasteful, fraudulent, or in service of diversity, equity, inclusion, and accessibility goals that President Donald Trump has sought to ban. But in several instances, the decision to cancel contracts affected more than researchers analyzing data in their offices — it affected students.

    Many projects, like Charting My Path, involved training teachers in new methods, testing learning materials in actual classrooms, and helping school systems use data more effectively.

    “Students were going to learn really how to set goals and track progress themselves, rather than having it be done for them,” McCrath-Smith said. “That is the skill that they will need post-high school when there’s not a teacher around.”

    All of that work was abruptly halted — in some cases with nearly finished results that now cannot be distributed.

    Every administration is entitled to set its own priorities, and contracts can be canceled or changed, said Steven Fleischman, an education consultant who for many years ran one of the regional research programs that was terminated. He compared it to a homeowner deciding they no longer want a deck as part of their remodel.

    But the current approach reminds him more of construction projects started and then abandoned during the Great Recession, in some cases leaving giant holes that sat for years.

    “You can walk around and say, ‘Oh, that was a building we never finished because the funds got cut off,’” he said.

    DOGE drives cuts to education research contracts, grants

    The Education Department has been a prime target of DOGE, the chaotic cost-cutting initiative led by billionaire Elon Musk, now a senior adviser to Trump.

    So far, DOGE has halted 89 education projects, many of which were under the purview of the Institute of Education Sciences, the ostensibly independent research arm of the Education Department. The administration said those cuts, which included multi-year contracts, totaled $881 million. In recent years, the federal government has spent just over $800 million on the entire IES budget.

    DOGE has also shut down 10 regional labs that conduct research for states and local schools and shuttered four equity assistance centers that help with teacher training. The Trump administration also cut off funding for nearly 100 teacher training grants and 18 grants for centers that often work to improve instruction for struggling students.

    The total savings is up for debate. The Trump administration said the terminated Education Department contracts and grants were worth $2 billion. But some were near completion with most of the money already spent.

    An NPR analysis of all of DOGE’s reported savings found that it likely was around $2 billion for the entire federal government — though the Education Department is a top contributor.

    On Friday, a federal judge issued an injunction that temporarily blocks the Trump administration from canceling additional contracts and grants that might violate the anti-DEIA executive order. It’s not clear whether the injunction would prevent more contracts from being canceled “for convenience.”

    Mark Schneider, the former IES director, said the sweeping cuts represent an opportunity to overhaul a bloated education research establishment. But even many conservative critics have expressed alarm at how wide-ranging and indiscriminate the cuts have been. Congress mandated many of the terminated programs, which also indirectly support state and privately funded research.

    The canceled projects include contracts that support maintenance of the Common Core of Data, a major database used by policymakers, researchers, and journalists, as well as work that supports updates to the What Works Clearinghouse, a huge repository of evidence-based practices available to educators for free.

    And after promising not to make any cuts to the National Assessment of Educational Progress, known as the nation’s report card, the department canceled an upcoming test for 17-year-olds that helps researchers understand long-term trends. On Monday, Peggy Carr, the head of the National Center for Education Statistics, which oversees NAEP, was placed on leave.

    The Education Department did not respond to questions about who decided which programs to cut and what criteria were used. Nor did the department respond to a specific question about why Charting My Path was eliminated. DOGE records estimate the administration saved $22 million by terminating the program early, less than half the $54 million in the original contract.

    The decision has caused mid-year disruptions and uncertainty.

    In Utah, the Canyons School District is trying to reassign the school counselor and three teachers whose salaries were covered by the Charting My Path contract.

    The district, which had 88 high schoolers participating in the program, is hoping to keep using the curriculum to boost its usual services, said Kirsten Stewart, a district spokesperson.

    Officials in Poway Unified, too, hope schools can use the curriculum and tools to keep up a version of the program. But that will take time and work because the program’s four teachers had to be reassigned to other jobs.

    “They dedicated that time and got really important training,” McCrath-Smith said. “We don’t want to see that squandered.”

    For Damroth, the loss of parent support meetings through Charting My Path was especially devastating. Logan has a rare genetic mutation that causes him to fall asleep easily during the day, so Damroth wanted help navigating which colleges might be able to offer extra scheduling support.

    “I have a million questions about this. Instead of just hearing ‘I don’t know’ I was really looking forward to working with Joe and the program,” she said, referring to Logan’s former mentor. “It’s just heartbreaking. I feel like this wasn’t well thought out. … My child wants to do things in life, but he needs to be given the tools to achieve those goals and those dreams that he has.”

    DOGE cuts labs that helped ‘Mississippi Miracle’ in reading

    The dramatic improvement in reading proficiency that Carey Wright oversaw as state superintendent in one of the nation’s poorest states became known as the “Mississippi Miracle.”

    Regional Educational Laboratory Southeast, based out of the Florida Center for Reading Research at Florida State University, was a key partner in that work, Wright said.

    When Wright wondered if state-funded instructional coaches were really making a difference, REL Southeast dispatched a team to observe, videotape, and analyze the instruction delivered by hundreds of elementary teachers across the state. Researchers reported that teachers’ instructional practices aligned well with the science of reading and that teachers themselves said they felt far more knowledgeable about teaching reading.

    “That solidified for me that the money that we were putting into professional learning was working,” Wright said.

    The study, she noted, arose from a casual conversation with researchers at REL Southeast: “That’s the kind of give and take that the RELs had with the states.”

    Wright, now Maryland state superintendent, said she was looking forward to partnering with REL Mid-Atlantic on a math initiative and on an overhaul of the school accountability system.

    But this month, termination letters went out to the universities and research organizations that run the 10 Regional Educational Laboratories, which were established by Congress in 1965 to serve states and school districts. The letters said the contracts were being terminated “for convenience.”

    The press release that went to news organizations cited “wasteful and ideologically driven spending” and named a single project in Ohio that involved equity audits as a part of an effort to reduce suspensions. Most of the REL projects on the IES website involve reading, math, career connections, and teacher retention.

    Jannelle Kubinec, CEO of WestEd, an education research organization that held the contracts for REL West and REL Northwest, said she never received a complaint or a request to review the contracts before receiving termination letters. Her team had to abruptly cancel meetings to go over results with school districts. In other cases, reports are nearly finished but cannot be distributed because they haven’t gone through the review process.

    REL West was also working with the Utah State Board of Education to figure out if the legislature’s investment in programs to keep early career teachers from leaving the classroom was making a difference, among several other projects.

    “This is good work and we are trying to think through our options,” she said. “But the cancellation does limit our ability to finish the work.”

    Given enough time, Utah should be able to find a staffer to analyze the data collected by REL West, said Sharon Turner, a spokesperson for the Utah State Board of Education. But the findings are much less likely to be shared with other states.

    The most recent contracts started in 2022 and were set to run through 2027.

    The Trump administration said it planned to enter into new contracts for the RELs to satisfy “statutory requirements” and better serve schools and states, though it’s unclear what that will entail.

    “The states drive the research agendas of the RELs,” said Sara Schapiro, the executive director of the Alliance for Learning Innovation, a coalition that advocates for more effective education research. If the federal government dictates what RELs can do, “it runs counter to the whole argument that they want the states to be leading the way on education.”

    Some terminated federal education research was nearly complete

    Some research efforts were nearly complete when they got shut down, raising questions about how efficient these cuts were.

    The American Institutes for Research, for example, was almost done evaluating the impact of the Comprehensive Literacy State Development program, which aims to improve literacy instruction through investments like new curriculum and teacher training.

    AIR’s research spanned 114 elementary schools across 11 states and involved more than 23,000 third, fourth, and fifth graders and their nearly 900 reading teachers.

    Researchers had collected and analyzed a massive trove of data from the randomized trial and presented their findings to federal education officials just three days before the study was terminated.

    “It was a very exciting meeting,” said Mike Garet, a vice president and institute fellow at AIR who oversaw the study. “People were very enthusiastic about the report.”

    Another AIR study that was nearing completion looked at the use of multi-tiered systems of support for reading among first and second graders. It’s a strategy that helps schools identify and provide support to struggling readers, with the most intensive help going to kids with the highest needs. It’s widely used by schools, but its effectiveness hasn’t been tested on a larger scale.

    The research took place in 106 schools and involved over 1,200 educators and 5,700 children who started first grade in 2021 and 2022. Much of the funding for the study went toward paying for teacher training and coaching to roll out the program over three years. All of the data was collected and nearly done being analyzed when DOGE made its cuts.

    Garet doesn’t think he and his team should simply walk away from unfinished work.

    “If we can’t report results, that would violate our covenant with the districts, the teachers, the parents, and the students who devoted a lot of time in the hope of generating knowledge about what works,” Garet said. “Now that we have the data and have the results, I think we’re duty-bound to report them.”

    This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters at ckbe.at/newsletters.




  • AI Support for Teachers

    Collaborative Classroom, a leading nonprofit publisher of K–12 instructional materials, announces the publication of SIPPS, a systematic decoding program. Now in a new fifth edition, this research-based program accelerates mastery of vital foundational reading skills for both new and striving readers.

    Twenty-Five Years of Transforming Literacy Outcomes

    “As educators, we know the ability to read proficiently is one of the strongest predictors of academic and life success,” said Kelly Stuart, President and CEO of Collaborative Classroom. “Third-party studies have proven the power of SIPPS. This program has a 25-year track record of transforming literacy outcomes for students of all ages, whether they are kindergarteners learning to read or high schoolers struggling with persistent gaps in their foundational skills.

    “By accelerating students’ mastery of foundational skills and empowering teachers with the tools and learning to deliver effective, evidence-aligned instruction, SIPPS makes a lasting impact.”

    What Makes SIPPS Effective?

    Aligned with the science of reading, SIPPS provides explicit, systematic instruction in phonological awareness, spelling-sound correspondences, and high-frequency words. 

    Through differentiated small-group instruction tailored to students’ specific needs, SIPPS ensures every student receives the necessary targeted support—making the most of every instructional minute—to achieve grade-level reading success.

    “SIPPS is uniquely effective because it accelerates foundational skills through its mastery-based and small-group targeted instructional design,” said Linda Diamond, author of the Teaching Reading Sourcebook. “Grounded in the research on explicit instruction, SIPPS provides ample practice, active engagement, and frequent response opportunities, all validated as essential for initial learning and retention of learning.”

    Personalized, AI-Powered Teacher Support

    Educators using SIPPS Fifth Edition have access to a brand-new feature: immediate, personalized responses to their implementation questions with CC AI Assistant, a generative AI-powered chatbot.

    Exclusively trained on Collaborative Classroom’s intellectual content and proprietary program data, CC AI Assistant provides accurate, reliable information for educators.

    Other Key Features of SIPPS, Fifth Edition

    • Tailored Placement and Progress Assessments: A quick, 3–8 minute placement assessment ensures each student starts exactly at their point of instructional need. Ongoing assessments help monitor progress, adjust pacing, and support grouping decisions.
    • Differentiated Small-Group Instruction: SIPPS maximizes instructional time by focusing on small groups of students with similar needs, ensuring targeted, effective teaching.
    • Supportive of Multilingual Learners: Best practices in multilingual learner (ML) instruction and English language development strategies are integrated into the design of SIPPS.
    • Engaging and Effective for Older Readers: SIPPS Plus and SIPPS Challenge Level are specifically designed for students in grades 4–12, offering age-appropriate texts and instruction to close lingering foundational skill gaps.
    • Multimodal Supports: Integrated visual, auditory, and kinesthetic-tactile strategies help all learners, including multilingual students.
    • Flexible, Adaptable, and Easy to Teach: Highly supportive for teachers, tutors, and other adults working in classrooms and expanded learning settings, SIPPS is easy to implement well. A wraparound system of professional learning support ensures success for every implementer.

    Accelerating Reading Success for Students of All Ages

    In small-group settings, students actively engage in routines that reinforce phonics and decoding strategies, practice with aligned texts, and receive immediate feedback—all of which contribute to measurable gains.

    “With SIPPS, students get the tools needed to read, write, and understand text that’s tailored to their specific abilities,” said Desiree Torres, ENL teacher and 6th Grade Team Lead at Dr. Richard Izquierdo Health and Science Charter School in New York. “The boost to their self-esteem when we conference about their exam results is priceless. Each and every student improves with the SIPPS program.” 

    Kevin Hogan


  • The National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to allocate research funding — here’s what they should do instead

    In December, The Wall Street Journal reported:

    [President-elect Donald Trump’s nominee to lead the National Institutes of Health] Dr. Jay Bhattacharya […] is considering a plan to link a university’s likelihood of receiving research grants to some ranking or measure of academic freedom on campus, people familiar with his thinking said. […] He isn’t yet sure how to measure academic freedom, but he has looked at how a nonprofit called Foundation for Individual Rights in Education scores universities in its freedom-of-speech rankings, a person familiar with his thinking said.

    We believe in and stand by the importance of the College Free Speech Rankings. More attention to the deleterious effect that restrictions on free speech and academic freedom have on research at our universities is desperately needed, so hearing that the rankings are being considered as a guidepost for NIH grantmaking is heartening. Dr. Bhattacharya’s own right to academic freedom was challenged by his Stanford University colleagues, so his concern about its effect on NIH’s grants is understandable.

    However, our College Free Speech Rankings are not the right tool for this particular job. They were designed with a specific purpose in mind — to help students and parents find campuses where students are both free and comfortable expressing themselves. They were not intended to evaluate the climate for conducting academic research on individual campuses and are a bad fit for that purpose. 

    While the rankings assess speech codes that apply to students, the rankings do not currently assess policies pertaining to the academic freedom rights and research conduct of professors, who are the primary recipients of NIH grants. Nor do the rankings assess faculty sentiment about their campus climates. It would be a mistake to use the rankings beyond their intended purpose — and, if the rankings were used to deny funding for important research that would in fact be properly conducted, that mistake would be extremely costly.

    FIRE instead proposes three more appropriate ways for NIH to use its considerable power to improve academic freedom on campus and to ensure research is conducted in an environment conducive to accurate results.

    1. Use grant agreements to safeguard academic freedom as a strong contractual right. 
    2. Encourage open data practices to promote research integrity.
    3. Incentivize universities to study their campus climates for academic freedom.

    Why should the National Institutes of Health care about academic freedom at all?

    The pursuit of truth demands that researchers be able to follow the science wherever it leads, without fear, favor, or external interference. To ensure that is the case, NIH has a strong interest in ensuring academic freedom rights are inviolable. 

    As a steward of considerable taxpayer money, NIH has an obligation to ensure it spends its funds on high-quality research free from censorship or other interference from politicians or college and university administrators.

    Why the National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to decide where to send funds

    FIRE’s College Free Speech Rankings (CFSR) were never intended for use in determining research spending. As such, they have a number of design features that make them ill-suited to that purpose, either in their totality or through their constituent parts.

    Firstly, like the U.S. News & World Report college rankings, a key reason for the creation of the CFSR was to provide information to prospective undergraduate students and their parents. As such, the rankings heavily emphasize students’ perceptions of the campus climate over the perceptions of faculty or researchers. In line with that student focus, our attitude and climate components are based on a survey of undergraduates. Additionally, the speech policies that we evaluate and incorporate into the rankings are those that affect students. We do not evaluate policies that affect faculty and researchers, which are often different and would be of greater relevance to deciding research funding. While it makes sense that there may be some correlation, we have no way of knowing whether, or to what degree, that is true.

    Secondly, for the component that most directly implicates the academic freedom of faculty, we penalize schools for attempts to sanction scholars for their protected speech, as tracked in our Scholars Under Fire database. While our Scholars Under Fire database provides excellent datapoints for understanding the climate at a university, it does not function as a systematic proxy for assessing academic freedom on a given campus as a whole. As one example, a university with relatively strong protection for academic freedom may have vocal professors with unpopular viewpoints that draw condemnation and calls for sanction that could hurt its ranking, while a climate where professors feel too afraid to voice controversial opinions could draw relatively few calls for sanction and thus enjoy a higher ranking. This shortcoming is mitigated when considered alongside the rest of our rankings components, but as discussed above, those other components mostly concern students rather than faculty.

    Thirdly, using CFSR to determine NIH funding could — counterintuitively — be abused by vigilante censors. Because we penalize schools for attempted and successful shoutdowns, the possibility of a loss of NIH funding could incentivize activists who want leverage over a university to disrupt as many events as possible in order to negatively influence its ranking, and thus its funding prospects. Even the threat of disruption could thus give censors undue power over a university administration that fears loss of funding.

    Finally, due to resource limitations, we do not rank all research universities. It would not be fair to deny funding to an unranked university or to fund an unranked university with a poor speech climate over a low-ranked university.

    Legal boundaries for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    While NIH has considerable latitude to determine how it spends taxpayer money, as an arm of the government, the First Amendment places restrictions on how NIH may use that power. Notably, any solution must not penalize institutions for protected speech or scholarship by students or faculty unrelated to NIH-granted projects. NIH could not, for example, require that a university quash protected protests as a criterion for eligibility, or deny a university eligibility because of controversial research undertaken by a scholar who does not work on NIH-funded research.

    While NIH can (and effectively must) consider the content of applications in determining what to fund, eligibility must be open to all regardless of viewpoint. Even were this not the case as a constitutional matter (and it is, very much so), it is important as a prudential matter. People would be understandably skeptical of, and might outright disbelieve, scientific results obtained through a grant process with an obvious ideological filter. Indeed, that is the root of much of the current skepticism over federally funded science, and the exact situation academic freedom is intended to avoid.

    Additionally, NIH cannot impose a political litmus test on an individual or an institution, or compel an institution or individual to take a position on political or scientific issues as a condition of grant funding.

    In other words, any solution to improve academic freedom:

    • Must be viewpoint neutral;
    • Must not impose an ideological or political litmus test; and
    • Must not penalize an institution for protected speech or scholarship by its scholars or students.

    Guidelines for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    NIH should carefully tailor any solution to directly enhance academic freedom and to further NIH’s goal “to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science.” Going beyond that purpose to touch on issues and policies that don’t directly affect the conduct of NIH grant-funded research may leave such a policy vulnerable to legal challenge.

    Any solution should, similarly, avoid using vague or politicized terms such as “wokeness” or “diversity, equity, and inclusion.” Doing so creates needless skepticism of the process and — as FIRE knows all too well — introduces uncertainty as professors and institutions parse what is and isn’t allowed.

    Enforcement mechanisms should be a function of contractual promises of academic freedom, rather than left to apathetic accreditors or the unbounded whims of bureaucrats on campus or officials in government, for several reasons. 

    Regarding accreditors, FIRE over the years has reported many violations of academic freedom to accreditors who require institutions to uphold academic freedom as a precondition for their accreditation. Up to now, the accreditors FIRE has contacted have shown themselves wholly uninterested in enforcing their academic freedom requirements.

    When it comes to administrators, FIRE has documented countless examples of campus administrators violating academic freedom, either due to politics, or because they put the rights of the professor second to the perceived interests of their institution.

    As for government actors, we have seen priorities and politics shift dramatically from one administration to the next. It would be best for everyone involved if NIH funding did not ping-pong between ideological poles as a function of each presidential election, as the Title IX regulations now do. Dramatic changes to how NIH conceives of academic freedom with every new political administration would only create uncertainty that is sure to further chill speech and research.

    While the courts have been decidedly imperfect protectors of academic freedom, they have a better record than accreditors, administrators, or partisan government officials in parsing protected conduct from unprotected conduct. And that will likely be even more true with a strong, unambiguous contractual promise of academic freedom. Speaking of which…

    The National Institutes of Health should condition grants of research funds on recipient institutions adopting a strong contractual promise of academic freedom for their faculty and researchers

    The most impactful change NIH could enact would be to require as a condition of eligibility that institutions adopt strong academic freedom commitments, such as the 1940 Statement of Principles on Academic Freedom and Tenure or similar, and make those commitments explicitly enforceable as a contractual right for their faculty members and researchers.

    The status quo for academic freedom is one where nearly every institution of higher education makes promises of academic freedom and freedom of expression to its students and faculty. Yet only at public universities, where the First Amendment applies, are these promises construed with any consistency as an enforceable legal right. 

    Private universities, when sued for violating their promises of free speech and academic freedom, frequently argue that those promises are purely aspirational and that they are not bound by them (often at the same time that they argue faculty and students are bound by the policies). 

    Too often, courts accept this and universities prevail despite the obvious hypocrisy. NIH could stop private universities’ attempts to have their cake and eat it too by requiring them to legally stand by the promises of academic freedom that they so readily abandon when it suits them.

    NIH could additionally require that this contractual promise come with standard due process protections for those filing grievances at their institution, including:

    • The right to bring an academic freedom grievance before an objective panel;
    • The right to present evidence;
    • The right to speedy resolution;
    • The right to written explanation of findings including facts and reasons; and
    • The right to appeal.

    If the professor exhausts these options, they may sue for breach of contract. To reduce the burden of litigation, NIH could require that, if a faculty member prevails in a lawsuit over a violation of academic freedom, the violating institution would not be eligible for future NIH funding until it pays the legal fees of the aggrieved faculty member.

    NIH could also study violations of academic freedom by creating a system for those connected to NIH-funded research to report violations of academic freedom or scientific integrity.

    It would further be proper for NIH to require institutions to eliminate any political litmus tests, such as mandatory DEI statements, as a condition of grant eligibility.

    The National Institutes of Health can implement strong measures to protect transparency and integrity in science

    NIH could encourage open science and transparency principles by heavily favoring studies that are pre-registered. Additionally, to obviate concerns that scientific results may be suppressed or buried because they are unpopular or politically inconvenient, NIH could require its grant-funded researchers to make their data available (with proper privacy safeguards) following the completion of the project.

    To help deal with the perverse incentives that have created the replication crisis and undermined public trust in science, NIH could create impactful incentives for work on replications and the publication of null results.

    Finally, NIH could help prevent the abuse of Institutional Review Boards. When IRB review is appropriate for an NIH-funded project, NIH could require that review be limited to the standards laid out in the gold-standard Belmont Report. Additionally, it could create a reporting system for abuses of IRB processes that suppress ethical research, delay it beyond reasonable timeframes, or violate academic freedom.

    The National Institutes of Health can incentivize study into campus climates for academic freedom

    As noted before, FIRE’s College Free Speech Rankings focus on students. Due to the logistical and resource difficulties of surveying faculty, our 2024 Faculty Report looking into many of the same issues took much longer and had to be limited in scope to 55 campuses, compared to the 250+ in the CFSR. This is to say that there is a strong need for research to understand faculty views and experiences on academic freedom. After all, we cannot solve a problem until we understand it. To that effect, NIH should incentivize further study into faculty’s academic freedom.

    It is important to note that these studies should be informational and not used in a punitive manner, or to decide on NIH funding eligibility. This is because tying something as important as NIH funding to the results of the survey would create so significant an incentive to influence the results that the data would be impossible to trust. Even putting aside malicious interference by administrators and other faculty members, few faculty would be likely to give honest answers that imperiled institutional funding, knowing the resulting loss in funding might threaten their own jobs.

    Efforts to do these kinds of surveys in Wisconsin and Florida proved politically controversial, and at least initially, led to boycotts, which threatened to compromise the quality and reliability of the data. As such, it’s critical that any such survey be carried out in a way that maximizes trust, under the following principles:

    • Ideally, the administration of these surveys should be done by an unbiased third party — not the schools themselves, or NIH. This third party should include respected researchers from across the political spectrum and have no partisan slant.
    • The survey sample must be randomized and not opt-in.
    • The questionnaire must be made public beforehand, and every effort should be made for the questions to be worded without any overt partisanship or ideology that would reduce trust.

    Conclusion: With great power…

    FIRE has for the last two decades been America’s premier defender of free speech and academic freedom on campus. Following Frederick Douglass’s wise dictum, “I would unite with anybody to do right and with nobody to do wrong,” we’ve worked with Democrats, Republicans, and everyone in between (and beyond) to advance free speech and open inquiry, and we’ve criticized them in turn whenever they’ve threatened these values.

    With that sense of both opportunity and caution, we would be heartened if NIH used its considerable power wisely in an effort to improve scientific integrity and academic freedom. But if wielded recklessly, that same considerable power threatens to do immense damage to science in the process. 

    We stand ready to advise if called upon, but integrity demands that we correct the record if we believe our data is being used for a purpose to which it isn’t suited.

    Source link

  • OpenAI invests $50M in higher ed research

    OpenAI invests $50M in higher ed research

    OpenAI announced Tuesday that it’s investing $50 million to start up NextGenAI, a new research consortium of 15 institutions that will be “dedicated to using AI to accelerate research breakthroughs and transform education.”

    The consortium, which includes 13 universities, is designed to “catalyze progress at a rate faster than any one institution would alone,” the company said in a news release.

    “The field of AI wouldn’t be where it is today without decades of work in the academic community. Continued collaboration is essential to build AI that benefits everyone,” Brad Lightcap, chief operating officer of OpenAI, said in the news release. “NextGenAI will accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI.”

    The company, which launched ChatGPT in late 2022, will give each of the consortium’s 15 institutions—including Boston Children’s Hospital and the Boston Public Library—millions in funding for research and access to computational resources as part of an effort “to support students, educators, and researchers advancing the frontiers of knowledge.” 

    Institutional initiatives supported by NextGenAI vary widely but will include projects focused on AI literacy, advancing medical research, expanding access to scholarly resources and enhancing teaching and learning. 

    The universities in the NextGenAI consortium are: 

    • California Institute of Technology
    • California State University system
    • Duke University
    • University of Georgia
    • Harvard University
    • Howard University
    • Massachusetts Institute of Technology
    • University of Michigan
    • University of Mississippi
    • Ohio State University
    • University of Oxford (U.K.)
    • Sciences Po (France)
    • Texas A&M University


  • Building inclusive research cultures – How can we rise above EDI cynicism?

    Building inclusive research cultures – How can we rise above EDI cynicism?

    • Dr Elizabeth Morrow is Research Consultant, Senior Research Fellow Royal College of Surgeons in Ireland, & Public Contributor to the Shared Commitment to Public Involvement on behalf of National Institute for Health and Care Research.
    • Professor Tushna Vandrevala is Professor of Health Psychology, Kingston University.
    • Professor Fiona Ross CBE is Professor Emerita Health and Social Care Kingston University, Deputy Chair Westminster University Court of Governors & Trustee Great Ormond Street Hospital Charity.

    Commitment and Motivation for Inclusive Research

    The commitment to inclusivity in UK research cultures and practices will endure despite political shifts abroad and continue to thrive. Rooted in ethical and moral imperatives, inclusivity is fundamentally the right approach. Moreover, extensive evidence from sources such as The Lancet, UNESCO and WHO highlights the far-reaching benefits of inclusive research practices across sectors like healthcare and global development. These findings demonstrate that inclusivity not only enhances research quality but also fosters more equitable outcomes.

    We define ‘inclusive research’ as the intentional engagement of diverse voices, communities, perspectives, and experiences throughout the research process. This encompasses not only who conducts the research but also how it is governed, funded, and integrated into broader systems, such as policy and practice.

    Beyond higher education, corporate leaders have increasingly embraced inclusivity. Research by McKinsey & Company shows that companies in the top quartile for gender diversity are 25% more likely to outperform their peers in profitability, while those leading in ethnic diversity are 36% more likely to do so. This clear link between inclusivity, innovation, and financial success reinforces the value of diverse teams in driving competitive advantage. Similarly, Egon Zehnder’s Global Board Diversity Tracker highlights how diverse leadership enhances corporate governance and decision-making, leading to superior financial performance and fostering innovation.

    Inclusion in research is a global priority as research systems worldwide have taken a ‘participative turn’ to address uncertainty and seek solutions to complex challenges such as the Sustainable Development Goals. From climate change to the ethical and societal implications of Artificial Intelligence (AI), inclusive research helps ensure that diverse perspectives shape solutions that are effective, fair and socially responsible.

    Take the example of AI and gender bias – evidence shows that women are frequently not included in technology research and are underrepresented in data sets. This creates algorithms that are biased and can have negative consequences for the sensitivity, authenticity, and uptake of AI-enabled interventions by women. Similar biases in AI have been found for other groups who are often overlooked because of their age, gender, sexuality, disability, or ethnicity.

    Accelerating Inclusion in UK Research

    A recent horizon scan of concepts related to the UK research inclusion landscape indicates domains in which inclusive research is being developed and implemented, illustrated by Figure 1.

    Inclusion is being accelerated by the Research Excellence Framework (REF) 2029, with a stronger focus on assessing People, Culture, and Environment (PCE). REF 2029 emphasises the integration of EDI considerations across research institutions, with a focus on creating equitable and supportive cultures for researchers, participants and communities. The indicators and measures of inclusion that will be developed and used are important because they can bring diverse perspectives, knowledge, skills and worldviews into research processes and institutions, thereby increasing relevance and improving outcomes. All units of assessment and panels involved in the REF process will have guidance from the People and Diversity Advisory Panel and the Research Diversity Advisory Panel. This means that inclusion will develop in both the culture of research institutions and the practices that shape research assessment.

    The National Institute for Health and Care Research (NIHR), the largest funder of health and social care research, has pioneered inclusion for over 30 years and prioritises it in its operating principles (see the NIHR Research Inclusion Strategy 2022-2027). NIHR’s new requirements for Research Inclusion (RI) will be a powerful lever to address inequalities in health and care. NIHR now requires all its domestically commissioned research to address RI at the proposal stage, actively involve appropriate publics, learn from them, and use this learning to inform impact strategies and practices.

    Given the learning across various domains, we ask: How can the broader UK system share knowledge and learn from the setbacks and successes in inclusion, rather than continually reinventing the wheel? By creating space in the system between research funders and institutions to share best practices, such as the Research Culture Enablers Network, we can accelerate progress and contribute to scaling up inclusive research across professional groups and disciplines. There are numerous examples of inclusive innovation, engaged research, and inclusive impact across disciplines and fields that could be shared to accelerate inclusion.

    Developing Shared Language and Inclusive Approaches

    Approaches to building inclusive cultures in research often come with passion and commitment from opinion leaders and change agents. As often happens when levering change, a technical language evolves that can become complex and, therefore, inaccessible to others. For example, acronyms like RI can apply to research inclusion, research integrity and responsible innovation. Furthermore, community-driven research, public and community engagement, and Patient and Public Involvement (PPI) have become synonymous with inclusive research, and such participation is an important driver of inclusion.

    The language and practices associated with inclusive research vary by discipline to reflect different contexts and goals. This can confuse rather than clarify, creating barriers to trust and to more effective inclusion strategies and practices. We ask: How can we establish shared understanding, methods of participation, and accountability pathways and mechanisms that will promote inclusion in the different and dynamic contexts of UK research?

    With over 20 years of experience in the fields of inclusion and equity, like other researchers, we have found that interdisciplinary collaboration, participatory methods, co-production, and co-design offer valuable insights by listening to and engaging with publics and communities on their own terms and territory. An inclusive approach has deepened our understanding and provided new perspectives on framing, methodological development, and the critical interpretation of research.

    Final reflection

    Key questions to overcome EDI cynicism are: How can we deepen our understanding and integration of intersectionality, inclusive methods, open research, cultural competency, power dynamics, and equity considerations throughout research processes, institutions, and systems? There is always more to learn and this can be facilitated by inclusive research cultures.

    Figure 1. Inclusive Research Dimensions


  • How will cutting NAEP for 17-year-olds impact postsecondary readiness research?

    How will cutting NAEP for 17-year-olds impact postsecondary readiness research?

    With the U.S. Department of Education’s cancellation of the National Assessment of Educational Progress for 17-year-olds, education researchers are losing one resource for evaluating post-high school readiness — though some say the test was already a missed opportunity since it hadn’t been administered since 2012.

    The department cited funding issues in its cancellation of the exam, which had been scheduled to take place this March through May.

    Since the 1970s, NAEP has monitored student performance in reading and math for students ages 9, 13 and 17. These assessments — long heralded as The Nation’s Report Card — measure students’ educational progress over long periods to identify and monitor trends in academic performance.

    The cancellation of the NAEP Long-Term Trend assessment for 17-year-olds came just days before the Trump administration abruptly placed Peggy Carr, commissioner of the National Center for Education Statistics and, as such, the public voice of NAEP, on paid leave.

    Carr has worked for the Education Department and NCES for over 30 years through both Republican and Democratic administrations. President Joe Biden appointed her NCES commissioner in 2021, with a term to end in 2027.

    The decision to drop the 2025 NAEP for 17-year-olds also follows another abrupt decision by the Education Department and the Department of Government Efficiency, or DOGE, to cut about $881 million in multi-year education research contracts earlier this month. The Education Department had previously said NAEP would be excluded from those cuts.

    Compounding gaps in data

    “The cancellation of the Long-Term Trend assessment of 17-year-olds is not unprecedented,” said Madi Biedermann, deputy assistant secretary for communications for the Education Department, in an email.

    The assessment was supposed to be administered during the 2019-20 academic year, but COVID-19 canceled those plans.

    Some experts questioned the value of another assessment for 17-year-olds since the last one was so long ago.

    While longitudinal studies are an important tool for tracking inequity and potential disparities in students, the NAEP Long-Term Trend Age 17 assessment wasn’t able to do so because data hadn’t been collected as planned for more than a decade, according to Leigh McCallen, deputy executive director of research and evaluation at New York University Metropolitan Center for Research on Equity and the Transformation of Schools.

    “There weren’t any [recent] data points before this 2024 point, so in some ways it had already lost some of its value, because it hadn’t been administered,” McCallen said.

    McCallen added that she is more concerned about maintaining the two-year NAEP assessments for 9- and 13-year-olds, because their consistency over the years provides a random-sample temperature check.

    According to the Education Department’s Biedermann, these other longitudinal assessments are continuing as normal.

    Cheri Fancsali, executive director at the Research Alliance for New York City Schools, said data from this year’s 17-year-olds would have provided a look at how students are rebounding from the pandemic. Now is a critical time to get the latest update on that level of information, she said.

    Fancsali pointed out that the assessment is a vital tool for evaluating the effectiveness of educational policies and that dismantling these practices is a disservice to students and the public. She said she is concerned about the impact on vulnerable students, particularly those from low-income backgrounds and underresourced communities.

    “Without an assessment like NAEP, inequities become effectively invisible in our education system and, therefore, impossible to address,” Fancsali said. 

    While tests like the ACT or SAT are other indicators of post-high-school readiness at the national level, Fancsali said they offer a “skewed perspective,” because not every student takes them.

    “The NAEP is the only standard assessment across states and districts, so it gives the ability to compare over time in a way that you can’t with any other assessment at the local level,” Fancsali said.

    Fancsali emphasized the importance of parents, educators and policymakers advocating for an assessment like NAEP, for both accountability and transparency.

    Likewise, McCallen said that despite the lack of continuity in the assessment for 17-year-olds, its cancellation gives cause for concern.

    “It represents the seriousness of what’s going on,” McCallen said. “When you cancel these contracts, you really do lose a whole set of information and potential knowledge about students throughout this particular point of time.”


  • How cuts at U.S. aid agency hinder university research

    How cuts at U.S. aid agency hinder university research

    Peter Goldsmith knows there’s a lot to love about soybeans. Although the crop is perhaps best known in America for its part in the stereotypically bougie soy milk latte, it plays an entirely different role on the global stage. Inexpensive to grow and chock-full of nutrients, it’s considered a potential solution to hunger and malnutrition.

    For the past 12 years, Goldsmith has worked toward that end. In 2013, he founded the Soybean Innovation Lab at the University of Illinois at Urbana-Champaign, and every day since then, the lab’s scientists have worked to help farmers and businesses solve problems related to soybeans, from how to speed up threshing—the arduous process of separating the bean from the pod—to addressing a lack of available soybean seeds and varieties.

    The SIL, which now encompasses a network of 17 laboratories, has completed work across 31 countries, mostly in sub-Saharan Africa. But now, all that work is on hold, and Goldsmith is preparing to shut down the Soybean Innovation Lab in April, thanks to massive cuts to the federal foreign aid funds that support the labs.

    A week into the current presidential administration, Goldsmith received notice that the Soybean Innovation Lab, which is headquartered at the University of Illinois, had to pause operations, cease external communications and minimize costs, pending a federal government review.

    Goldsmith told his team—about 30 individuals on UIUC’s campus that he described as being like family to one another—that, though they were ordered to stop work, they could continue working on internal projects, like refining their software. But days later, he learned the university could no longer access the lab’s funds in Washington, meaning there was no way to continue paying employees.

    After talking with university administrators, he set a date for the Illinois lab to close: April 15, unless the freeze ended after the government review. But no review materialized; on Feb. 26, the SIL received notice its grant had been terminated, along with about 90 percent of the U.S. Agency for International Development’s programs.

    “The University of Illinois is a very kind, caring sort of culture; [they] wanted to give employees—because it was completely an act of God, out of the blue—give them time to find jobs,” he said. “I mean, up until [Jan. 27], we were full throttle, we were very successful, phones ringing off the hook.”

    The other 16 labs will likely also close, though some are currently scrambling to try to secure other funding.

    Federal funding made up 99 percent of the Illinois lab’s funding, according to Goldsmith. In 2022, the lab received a $10 million grant intended to last through 2027.

    Dismantling an Agency

    The SIL is among the numerous university laboratories impacted by the federal freeze on U.S. Agency for International Development funds—an initial step in what’s become President Donald Trump’s crusade to curtail supposedly wasteful government spending—and the subsequent termination of thousands of grants.

    Trump and Elon Musk, the richest man on Earth and a senior aide to the president, have baselessly claimed that USAID is run by left-wing extremists and say they hope to shutter the agency entirely. USAID’s advocates, meanwhile, have countered that the agency instead is responsible for vital, lifesaving work abroad and that the funding freeze is sure to lead to disease, famine and death.

    A federal judge, Amir H. Ali, seemed to agree, ruling earlier this month that the funding freeze is doing irreparable harm to humanitarian organizations that have had to cut staff and halt projects, NPR and other outlets reported. On Tuesday, Ali reiterated his order that the administration resume funding USAID, giving them until the end of the day Wednesday to do so.

    But the administration appealed the ruling, and the Supreme Court subsequently paused the deadline until the justices can weigh in. Now, officials appear to be moving forward with plans to fire all but a small number of the agency’s employees, directing employees to empty their offices and giving them only 15 minutes each to gather their things.

    About $350 million of the agency’s funds were appropriated to universities, according to the Association of Public and Land-grant Universities, including $72 million for the Feed the Future Innovation Labs, which are aimed at researching solutions to end hunger and food insecurity worldwide. (The SIL is funded primarily by Feed the Future.)

    It’s a small amount compared to the funding universities receive from other agencies, like the National Institutes of Health, also the subject of deep cuts by Trump and Musk. But USAID-funded research is a long-standing and important part of the nation’s foreign policy, as well as a resource for the international community, advocates say. The work also has broad, bipartisan support; in fiscal year 2024, Congress increased funding for the Feed the Future Initiative labs by 16 percent, according to Craig Lindwarm, senior vice president for government affairs at the APLU, even in what he characterized as an extremely challenging budgetary environment.

    Potential Long-Term Harms

    Universities “have long been a partner with USAID … to help accomplish foreign policy and diplomatic goals of the United States,” said Lindwarm. “This can often but not exclusively come in the form of extending assistance as it relates to our agricultural institutions, and land-grant institutions have a long history of advancing science in agriculture that boosts yields and productivity in the United States and also partner countries, and we’ve found that this is a great benefit not just to our country, but also partner nations. Stable food systems lead to stable regions and greater market access for producers in the United States and furthers diplomatic objectives in establishing stronger connections with partner countries.”

    Stopping that research has negatively impacted “critical relationships and productivity,” with the potential for long-term harms, Lindwarm said.

    At the SIL, numerous projects have now been canceled, including a planned trip to Africa to beta test a pull-behind combine, a technology that is not commonly used anymore in the U.S.—most combines are now self-propelled rather than pulled by tractor—but that would be useful to farmers in Africa. A U.S. company was slated to license the technology to farmers in Africa, Goldsmith said, but now, “that’s dead. The agribusiness firm, the U.S. firm, won’t be licensing in Africa,” he said. “A good example of market entry just completely shut off.”

    He also noted that the lab closures won’t just impact clients abroad and U.S. companies; they will also be detrimental to UIUC, which did not respond to a request for comment.

    “In our space, we’re well-known. We’re really relevant. It makes the university extremely relevant,” he said. “We’re not an ivory tower. We’re in the dirt, literally, with our partners, with our clients, making a difference, and [that] makes the university an active contributor to solving real problems.”


  • New research questions DOGE claims about ED cut savings

    New research questions DOGE claims about ED cut savings

    New research suggests that the Department of Government Efficiency has been making inaccurate claims about the extent of its savings from cuts to the Department of Education.

    DOGE previously posted on X that it ended 89 contracts from the Education Department’s research arm, the Institute of Education Sciences, worth $881 million. But an analysis released Wednesday by the left-wing think tank New America found that these contracts were worth about $676 million—roughly $200 million less than DOGE claimed. DOGE’s “Wall of Receipts” website, where it tracks its cuts, later suggested the savings from 104 Education Department contracts came out to a more modest $500 million.

    New America also asserted that DOGE is losing money, given that the government had already spent almost $400 million on the now-terminated Institute of Education Sciences contracts, meaning those funds have gone to waste.

    “Research cannot be undone, and statistics cannot be uncollected. Instead, they will likely sit on a computer somewhere untouched,” New America researchers wrote in a blog post about their findings.

    In a separate analysis shared last week, the American Enterprise Institute, a right-leaning think tank, also called into question DOGE’s claims about its Education Department cuts.

    Nat Malkus, senior fellow and deputy director of education policy studies at AEI, compared DOGE’s contract values with the department’s listed values and found they “seldom matched” and DOGE’s values were “always higher,” among other problems with DOGE’s data.

    “DOGE has an unprecedented opportunity to cut waste and bloat,” Malkus said in a post about his research. “However, the sloppy work shown so far should give pause to even its most sympathetic defenders.”


  • Engaging Students in Collaborative Research and Writing Through Positive Psychology, Student Wellness, and Generative AI Integration – Faculty Focus

    Engaging Students in Collaborative Research and Writing Through Positive Psychology, Student Wellness, and Generative AI Integration – Faculty Focus


  • Research flexibility doesn’t have to mean researcher precarity

    Research flexibility doesn’t have to mean researcher precarity

    If we think about research as a means of driving innovation and by extension economic growth, there is a need to consider the lives of the people who are doing the research.

    The UK has a significant strength in the quality and diversity of its higher education system – which trains a large proportion of the staff who end up working in universities (and elsewhere) performing research. We should, in other words, be better than we are at sustaining and shaping research capacity through supporting the people who contribute research throughout their careers.

    Certainly, that’s the case that the University and College Union makes in last week’s research staff manifesto – noting that nearly two thirds of research staff are on fixed-term contracts, following research funding and strategic decisions around the country, to the significant detriment of their personal lives and professional development.

    How does the system work?

    Precarity is not an accident of the system – it is the entire design of the system. Becoming a postdoc is not the final stage of the undergraduate to postgraduate to researcher pipeline: it is a step into a new system where the trial of job-hopping, house-moving, city-shifting work may one day lead to a full-time post.

    The first step after undertaking a doctorate is, unsurprisingly, post-doctoral work – the postdoc. The term can mislead, as it seems to mean simply the job someone does after being awarded a PhD; over time, it has taken on a more specific meaning, becoming synonymous with precarious employment tied to grant funding. As an example, Imperial College London describes its postdocs as follows:

    • a member of staff who will have a PhD, and be employed to undertake research
    • commonly on an externally funded grant secured by their principal investigator (PI) e.g. Research Council standard grant
    • responsible for their own career development but entitled to the support of their PI and the PFDC
    • entitled to 10 days development per year
    • entitled to 25 days leave plus bank holidays and college closure dates (if full time, pro-rata for part time)
    • entitled to regular one-to-one meetings with their line manager
    • entitled to a mid and final probation review
    • entitled to a Personal Review and Development Plan (PRDP) meeting once per year

    Crucially, in the section which describes what a postdoc is not, it includes being “a permanent member of academic staff.”

    This is often the case because postdocs are tied to grant funding, and grant funding is limited to a certain period of time to cover a specific project. UKRI, for example, does not fund postdocs directly but funds research organisations through a mix of focused studentships and capacity funding; research organisations then fund postdocs.

    This means that flexible deployment of resources is built into the very start of the system. It is not an accident or a quirk: the UK’s research system is designed to incentivise human capital to move to the organisations and places that most closely align with their research skills. The upside is that, in theory, resources should be efficiently deployed to the people and places that can use them most productively. In reality, it means that instability and structural barriers to progressing to permanent research contracts are the norm.

    It’s not that UKRI is unaware of this problem. In a 2023 blog on team research, Nik Ogryzko, Talent Programme Manager at UKRI, wrote:

    We’ve built a system where research groups sometimes act as their own small business inside an institution. And this leads to a very particular set of weaknesses.

    Employment contracts have become linked to individual research grants, with research staff often highly dependent on their principal investigator for career progression, or even their continued employment.

    Group leaders are often not equipped to support their staff into anything other than an academic career, and we know most research staff do not end up there.

    We also know such precarious employment and power imbalances can in some cases lead to bullying, harassment and discrimination. Such structural factors further compromise the integrity of our research, despite the strong intrinsic motivation of our researchers and innovators.

    A number of institutions are signatories to The Concordat to Support the Career Development of Researchers. When it comes to the use of fixed-term contracts the concordat states that

    […] some of the areas of most concern to researchers, such as the prevalence of fixed-term contracts and enforced mobility, will require long term systemic changes, which can only be realised through collective action across stakeholders.

    Again, should a researcher be lucky enough to pass through their postdoc, a permanent role is not guaranteed, or even the norm. Reading through university websites, the reasons given for fixed term contracts are various: to align with grant funding, to cover peak demand, to meet uncertain demand, to cover staff absence, to cover time-limited projects, secondments, training, and to bring in specialist skills.

    It is not that universities don’t recognise the issue of fixed term contracts – institutions like the University of Exeter have a whole framework on the appropriate use of these contracts – it’s that in a funding system which places a premium on project working it is necessary to have a highly flexible workforce.

    However, this does not mean that this system is inevitable or that the number of fixed term contracts is desirable.

    What is going on?

    According to HESA data, that number is slowly falling – both numerically and proportionally – for research only academic staff. As of the 2023-24 academic year, 63.9 per cent of “research only” academic staff (64,265 people) were on a fixed term contract. This sounds like a lot, but it is down slightly from a peak of 68 per cent (70,050 people) in 2019-20.

    [Chart: research only academic staff on fixed term contracts, by year and salary source]

    The proportion of fixed term contracts for teaching only academics (another prominent early career route, often coupled with weekends at the kitchen table writing literature reviews for publication in an attempt to bolster credentials for a research job in an underfunded field) is also on a downward trajectory. Some 44.3 per cent of teaching only contracts (equating to 64,300 people) were fixed term in 2019-20 – by 2023-24 the numbers were 35.7 per cent and 63,425.
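    Those percentage-and-headcount pairs also imply how the total pool of teaching only staff has moved. A minimal Python sketch of the arithmetic – using only the figures quoted above; the implied totals are back-calculated, not HESA numbers:

    ```python
    # Back-calculate the implied total teaching only headcount from each
    # quoted pair: fixed-term share (per cent) and fixed-term headcount.
    # The input figures are those quoted above; the totals are inferred.
    quoted = {
        "2019-20": {"fixed_term_pct": 44.3, "fixed_term_count": 64_300},
        "2023-24": {"fixed_term_pct": 35.7, "fixed_term_count": 63_425},
    }

    implied_totals = {
        year: round(d["fixed_term_count"] / (d["fixed_term_pct"] / 100))
        for year, d in quoted.items()
    }

    for year, total in implied_totals.items():
        print(f"{year}: implied teaching only total ≈ {total:,}")
    ```

    The implied totals (roughly 145,000 in 2019-20 and 178,000 in 2023-24) suggest the fixed term share fell mainly because the overall pool grew, not because fixed term posts disappeared.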

    If we take this to provider level we can see that a significant research focus is no predictor of a reliance on fixed term contracts. This chart shows the proportion of all academic staff on research only contracts on the y axis, with the proportion of all academic staff on fixed term contracts on the x axis.

    [Chart: proportion of academic staff on research only contracts vs proportion on fixed term contracts, by provider]

    What this chart shows is that a strong focus on research (with many research only academic contracts) does not predict a reliance on fixed term contracts – indeed, there are many providers with a significant proportion of fixed term contracts that have no research only academic staff at all. While a fixed term contract is a poor basis on which to plan long term as an individual, for many higher education institutions it is a useful answer to wildly varying income and recruitment. Whereas for more traditional institutions it makes sense to maintain capacity even as prevailing conditions worsen, in smaller and more precarious providers unutilised capacity is a luxury that is no longer as affordable.
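    The two axes of that chart reduce to two ratios over the same denominator for each provider. A minimal sketch of the calculation, with invented providers and figures (none of these numbers are HESA data):

    ```python
    # Invented illustration: each provider's position on the chart is just
    # two ratios over the same denominator (all academic staff).
    providers = [
        # name, total academic staff, research only staff, fixed term staff
        ("Research-intensive university", 5_000, 1_500, 1_200),
        ("Small specialist provider",       300,     0,   180),
    ]

    for name, total, research_only, fixed_term in providers:
        y = research_only / total  # y axis: research only share
        x = fixed_term / total     # x axis: fixed term share
        print(f"{name}: x = {x:.0%}, y = {y:.0%}")
    ```

    The second (invented) provider sits on the x axis: a high fixed term share with no research only staff at all, which is exactly the pattern the chart highlights.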

    If you look back to the first chart, you may notice a “salary source” filter. One of the prevailing narratives around fixed term contracts is that they necessarily link to the “fixed term” nature of funded research projects – the argument being that once the money runs out, the staff need to find new jobs. In fact, this is less of a factor than you might imagine: the proportion of research only academic staff on fixed term contracts is higher for those funded externally than for those funded internally, but the difference isn’t huge.

    Plotting the same data another way shows us that around a quarter of research only salaries are funded entirely by the higher education provider, with a further five per cent or so partially supported by the host institution – these figures are slightly lower for fixed-term research only staff, but only very slightly.

    [Chart: salary source for research only academic staff]

    So we can be clear that fixed term salaries are (broadly) a research thing, but there’s not really evidence to suggest that short term external funding is the whole reason for this.

    As a quick reminder, the research councils represent about a quarter of all external research funding, with the UK government (in various forms) and the NHS representing about another (swiftly growing) fifth. That’s a hefty chunk of research income from sources that the government has some degree of control over – and some of the language used by Labour before the election about making this more reliable (the ten year settlements of legend) was seen as a recognition that funding could be reprofiled to allow for more “livable” research careers and an expansion of research capacity.

    [Chart: sources of external research funding, by provider and subject area]

    This chart also allows you to examine the way these proportions land differently by provider and subject area (expressed here as HESA cost code). The volatility is higher at smaller providers, as you might expect – while research in the arts and humanities is more likely to be funded by research councils than in STEM or social sciences. But it is really the volume, rather than the source, of research funding that determines how researcher salaries are paid.

    Although the established pathway from research postgraduate to researcher is by no means the only one available (many postgraduate research students do not become academics), it is an established maxim – dating back to the post-war Percy and Barlow reviews – that producing the researchers we need requires training in the form of postgraduate research provision.

    Although it’s not really the purpose of this article, it is worth considering the subject and provider level distribution of postgraduate research students in the light of how funding and capacity for research is distributed. As the early research career is often dominated by the need to move to take on a fixed term contract, one way to address this might be to have research career opportunities and research students in the same place from the start.

    [Chart: postgraduate research students by provider and subject]

    What can we learn from this?

    Research capacity – and, for that matter, research training capacity – can’t be turned off and on at a whim. Departments and research centres need more than one short-term funded project to begin delivering for the UK at their full potential, because developing capacity and expertise takes time and experience. That’s part of the reason we have non-ringfenced funding streams like those associated with QR in England: to keep research viable between projects, and to nurture developing expertise so it can contribute meaningfully to national, regional, and industrial research priorities. It’s funds like these that support researcher training and supervision, and the infrastructure and support staff that make research possible.

    But what the data suggests is that while the short-term nature of project funding does have an impact, especially at smaller providers and emerging research centres, there are many universities that are able to sustain research employment between projects. Part of this is bound to be sheer scale, but it doesn’t happen at all large research performing organisations by any stretch of the imagination. Part of the answer, then, must be the strategic decisions and staffing priorities that make sustaining researcher employment possible.

    That’s not to let the funding side of the equation off the hook either. There is a sense that the Labour party was moving in the right direction in considering longer term research funding settlements – but we have yet to learn how this will work in practice. By its very nature, research is discovery and opportunity led: a few years ago artificial intelligence research was a minor academic curiosity; currently it is big money – but will it be a priority in 2035? Could there be some areas – medical and healthcare research, large scale physics, engineering – where we can be more sure than others?

    You’ll note we didn’t mention the arts, humanities, and social sciences in that list – but these may be some of the most valuable areas of human activity, and government-supported research plays a more prominent role in sustaining not just discovery and innovation but the actual practice of such activity. Such is the paucity of money available in the arts that many practitioners subsidise their practice with research and teaching – and it feels like arts funding more generally needs consideration.

    Sure, the UK punches above its weight in the sciences and in health care – but in arts, heritage, and social policy the work of the UK is genuinely world leading. It has a significant economic impact (second only to financial services) too. Research funding is a part of the picture here, but a long term commitment to these industries would be one of the most valuable decisions a government can make.

    What are the other choices?

    The fundamental challenge is maintaining a system which is dynamic, where the dynamism is not solely reliant on a highly transient workforce. A simple, albeit extremely limited, conclusion from the data would be that there is too great a supply of researchers to meet the demand for their skills.

    The more important question is what the value of such a highly educated workforce is, and how society can make the most of their talents. This is not to say the UK should operate a supply led model. A world where funding is allocated based purely on the academic interests of researchers might be good for placing emphasis on intellectual curiosity, but it would not allow funders to match social and economic priorities with researchers’ work. Put another way, it isn’t sufficient to tackle climate change by hoping enough researchers are interested in doing so. It would also not necessarily create more permanent jobs – just different ones.

    Conversely, a system which is largely demand led loses talent in other ways. The sheer exhaustion of moving between jobs and tacking research skills onto different projects in the same field means stamina, not just research ability, is a key criterion for success. This means researchers whose abilities are needed are not deployed, because their personal incentive for a more stable life trumps their career aspirations.

    The current system does penalise those who cannot work flexibly for extended periods of time, but more fundamentally the incentives in the system are misaligned with what it hopes to achieve. There can be no dynamism without some flexibility, but flexibility should be demonstrably necessary, not permanently designed in. Flexibility of employment should be used to achieve a research benefit, not only an administrative one.

    This is not wholly in the gift of universities. A careful consideration by government, funders, institutions, and researchers of how flexibility should be used is the key to balance in the system. There are times where the research system requires stability: the repeated use of fixed term contracts on the same topic, for example, is a clear market signal for more stable employment. Furthermore, it is undesirable to have a forever-changing workforce in areas where governments have singularly failed to make progress for decades. Nobody is arguing that if only research into productivity was a bit more transient, the UK’s economic woes could be fixed.

    What is needed is coordinated action. And unlike in Australia, there is no single review of what the research ecosystem is for. Until there is one, priorities will keep changing, funders will keep working on short time horizons, and institutions will keep responding to ever changing incentives – and the downstream effect is a workforce that is treated as entirely changeable too.
