Tag: research

  • Research funding won’t redistribute itself

    Research funding won’t redistribute itself

    On the whole, research funding is not configured to be sensitive to place.

    Redistribution

    Research funding does good things in regions, but this is different from funding being configured to do so. For example, universities in the North East performed strongly in the REF and as a consequence received an uplift in QR funding. This will allow them to invest in their research capacity, bring agglomeration benefits to the North East, and go some small way toward rebalancing the UK’s research ecosystem away from London.

    REF isn’t designed to do that. It has absolutely no interest in where research takes place, just that the research that takes place is excellent. But the UK isn’t a very big place and it has a large number of universities: if you fund enough things in enough places, you will eventually help support regional clusters of excellence.

    There are of course some specific place-based funds, but being regionally focussed does not make them redistributive. The Higher Education Innovation Fund (HEIF) is focussed on regional capacity, but it is £260m of a total annual Research England funding distribution of £2.8bn. HEIF is calculated from providers’ knowledge exchange work with businesses, their public and third sector engagement, and their engagement with the wider public; a large portion of the data is gathered through the HE-BCI Survey.

    The result is that there is place-based funding, but inevitably institutions with larger research capacities receive larger amounts of it. Of the providers that received the maximum HEIF funding in 2024/25, five were within the golden triangle, one was in the West Midlands, one was in the East Midlands, two were in Yorkshire and the Humber, one was in the North West, and one was in the South East but outside the golden triangle. It is regional but it is not redistributive.
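
    To make that mechanism concrete, here is a toy sketch in Python of a metric-driven allocation. The £260m pot is the figure cited above; the providers, metric values, and simple proportional formula are hypothetical stand-ins, not the actual Research England methodology.

    ```python
    # Toy illustration only: hypothetical providers and metrics, not the
    # real Research England formula. The point is structural: an allocation
    # driven by activity metrics tracks existing capacity rather than
    # redistributing away from it.
    POT = 260e6  # HEIF pot cited in the article (£260m)

    # Hypothetical knowledge-exchange activity per provider (£), standing
    # in for the HE-BCI Survey data the formula actually draws on.
    metrics = {
        "large_golden_triangle": 900e6,
        "mid_sized_city": 300e6,
        "small_regional": 60e6,
    }

    total_activity = sum(metrics.values())
    allocations = {name: POT * value / total_activity for name, value in metrics.items()}

    for name, grant in allocations.items():
        print(f"{name:22s} £{grant / 1e6:5.1f}m")
    # large_golden_triangle £185.7m
    # mid_sized_city        £ 61.9m
    # small_regional        £ 12.4m
    ```

    The output is regional, in that every provider receives something, but the provider with the largest existing activity takes the largest share.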

    Strength of feeling/strength in places

    RAND Europe has released a process evaluation of wave two of the Strength in Places Fund (SIPF). As RAND Europe describe, the fund is

    The Strength in Places Fund (SIPF) is a £312.5 million competitive funding scheme that takes a place-based approach to research and innovation (R&I) funding. SIPF is a UK Research and Innovation (UKRI) strategic fund managed by the SIPF delivery team based at Innovate UK and Research England. The aim of the Fund is to help areas of the UK build on existing strengths in R&I to deliver benefits for their local economy

    This fund has been more successful in achieving a regionally distributed spread of funding. For example, it has delivered £47m to Wales compared to only £18m to South East England. Although quality was a key factor, and there are some challenges around how aligned projects are with wider regional priorities, it seems that a focus on a balanced portfolio made a difference. As RAND Europe note

    […] steps were taken to ensure a balanced portfolio in terms of geographical spread and sectors; however, quality was the primary factor influencing panel recommendations (INTXX). Panel members considered the projects that had been funded in Wave 1 and the bids submitted in Wave 2, and were keen on ensuring no one region was overrepresented. One interviewee mentioned that geographical variation of awards contributed to the credibility of a place-based funding system […].

    The Regional Innovation Fund, which aimed to support local innovation capacity, was allocated with a specific modifier to account for areas where there had historically been less research investment. SIPF has been a different approach to solving the same conundrum: how best to support research potential in every region of the UK.

    It’s within this context that it is interesting to arrive at UKRI’s most recent analysis of the geographical distribution of its funding in 2022/23 and 2023/24. There are two key messages. The first is that

    All regions and nations received an increase in UKRI investment between the financial years 2021 to 2022 and 2023 to 2024. The greatest absolute increases in investment were seen in the North West, West Midlands and East Midlands. The greatest proportional increases were seen in Northern Ireland, the East Midlands and North West.

    And the second is that

    The percentage of UKRI funding invested outside London, the South East and East of England, collectively known as the ‘Greater South East’, rose to 50% in 2023 to 2024. This is up from 49% in the 2022 to 2023 financial year and 47% in the 2021 to 2022 financial year. This represents a cumulative additional £1.4 billion invested outside the Greater South East since the 2021 to 2022 financial year.

    Waterloo sunset?

    In the most literal sense, funding between the Greater South East and the rest of the country could not be more finely balanced. In flat cash terms the rest of the UK has overtaken the Greater South East for the first time, while investment per capita in the Greater South East still outstrips the rest of the country by a significant amount.
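
    The arithmetic behind that distinction is worth spelling out: a 50/50 cash split can coexist with a wide per-capita gap because the Greater South East holds a minority of the UK’s population. The figures below are illustrative placeholders, not UKRI data:

    ```python
    # Illustrative numbers only, not UKRI data: shows how an even cash
    # split still leaves a large per-capita gap when one side has far
    # fewer people.
    total_spend = 8.0e9          # hypothetical annual UKRI investment (£)
    share_outside_gse = 0.50     # the 50% share reported for 2023 to 2024

    pop_gse = 18e6               # hypothetical Greater South East population
    pop_rest = 50e6              # hypothetical rest-of-UK population

    spend_gse = total_spend * (1 - share_outside_gse)
    spend_rest = total_spend * share_outside_gse

    print(f"Greater South East: £{spend_gse / pop_gse:,.0f} per head")   # ~£222
    print(f"Rest of the UK:     £{spend_rest / pop_rest:,.0f} per head") # ~£80
    ```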

    The reason for this shift is greater investment in the North West, West Midlands, and East Midlands, which cumulatively saw an increase of £550m in funding over the past three years. The regions with the highest absolute levels of funding saw some of the smallest proportional increases in investment.

    The evaluations and UKRI’s dataset present an interesting picture. There is nothing unusual about the way funding is distributed: it follows where the highest numbers of researchers and providers, and the greatest economic activity, are located. It would be an entirely arbitrary mechanism that penalised the South East for having research strengths.

    Simultaneously, with constrained resources there are lots of latent assets outside of the golden triangle that will not get funding. The UK is unusually reliant on its capital as an economic contributor and research funding follows this. The only way to rebalance this is to make deliberate efforts, like with SIPF, to lean toward a more balanced portfolio of funding.

    This isn’t a plea to rip up the rule book completely, and a plea for more money in an era of fiscal constraint will not be listened to. But it does bring a choice into sharp relief: either research policy is about bolstering the UK’s economic centre, or it is about strengthening the potential of research in places that receive less funding. There simply is not enough money to do both.


  • Publishers Adopt AI Tools to Bolster Research Integrity

    Publishers Adopt AI Tools to Bolster Research Integrity

    The perennial pressure to publish or perish is as intense as ever for faculty trying to advance their careers in an exceedingly tight academic job market. On top of their teaching loads, faculty are expected to publish—and peer review—research findings, often receiving little to no compensation beyond the prestige and recognition of publishing in top journals.

    Some researchers have argued that such an environment incentivizes scholars to submit questionable work to journals—many have well-documented peer-review backlogs and inadequate resources to detect faulty information and academic misconduct. In 2024, more than 4,600 academic papers were retracted or otherwise flagged for review, according to the Retraction Watch database; during a six-week span last fall, one scientific journal published by Springer Nature retracted more than 200 articles.

    But the $19 billion academic publishing industry is increasingly turning to artificial intelligence to speed up production and, advocates say, enhance research quality. Since the start of the year, Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI–powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.

    “These AI tools can help us improve research integrity, quality, accurate citation, our ability to find new insights and connect the dots between new ideas, and ultimately push the human enterprise forward,” Josh Jarrett, senior vice president of AI growth at Wiley, told Inside Higher Ed earlier this month. “AI tools can also be used to generate content and potentially increase research integrity risk. That’s why we’ve invested so much in using these tools to stay ahead of that curve, looking for patterns and identifying things a single reviewer may not catch.”

    However, most scholars aren’t yet using AI for such purposes. A recent survey by Wiley found that while the majority of researchers believe AI skills will be critical within two years, more than 60 percent said a lack of guidelines and training keeps them from using it in their work.

    In response, Wiley released new guidelines last week on “responsible and effective” uses of AI, aimed at deploying the technology to make the publishing process more efficient “while preserving the author’s authentic voice and expertise, maintaining reliable, trusted, and accurate content, safeguarding intellectual property and privacy, and meeting ethics and integrity best practices,” according to a news release.

    Last week, Elsevier also launched ScienceDirect AI, which extracts key findings from millions of peer-reviewed articles and books on ScienceDirect and generates “precise summaries” to alleviate researchers’ challenges of “information overload, a shortage of time and the need for more effective ways to enhance existing knowledge,” according to a news release.

    Both of those announcements followed Springer Nature’s January launch of an in-house AI-powered program designed to help editors and peer reviewers by automating editorial quality checks and alerting editors to potentially unsuitable manuscripts.

    “As the volume of research increases, we are excited to see how we can best use AI to support our authors, editors and peer reviewers, simplifying their ways of working whilst upholding quality,” Harsh Jegadeesan, Springer’s chief publishing officer, said in a news release. “By carefully introducing new ways of checking papers to enhance research integrity and support editorial decision-making we can help speed up everyday tasks for researchers, freeing them up to concentrate on what matters to them—conducting research.”

    ‘Obvious Financial Benefit’

    Academic publishing experts believe there are both advantages and downsides to involving AI in the notoriously slow peer-review process, which is plagued by a deficit of qualified reviewers willing and able to offer their unpaid labor to highly profitable publishers.

    If use of AI assistants becomes the norm for peer reviewers, “the volume problem would be immediately gone from the industry” while creating an “obvious financial benefit” for the publishing industry, said Sven Fund, managing director of the peer-review-expert network Reviewer Credits.

    But the implications AI has for research quality are more nuanced, especially as scientific research has become a target for conservative politicians and AI models could be—and may already be—used to target terms or research that lawmakers don’t like.

    “There are parts of peer review where a machine is definitely better than a human brain,” Fund said, pointing to low-intensity tasks such as translations, checking references and offering authors more thorough feedback as examples. “My concern would be that researchers writing and researching on whatever they want is getting limited by people reviewing material with the help of technical agents … That can become an element of censorship.”

    Aashi Chaturvedi, program officer for ethics and integrity at the American Society for Microbiology, said one of her biggest concerns about the introduction of AI into peer review and other aspects of the publishing process is maintaining human oversight.

    “Just as a machine might produce a perfectly uniform pie that lacks the soul of a handmade creation, AI reviews can appear wholesome but fail to capture the depth and novelty of the research,” she wrote in a recent article for ASM, which has developed its own generative AI guidelines for the numerous scientific journals it publishes. “In the end, while automation can enhance efficiency, it cannot replicate the artistry and intuition that come from years of dedicated practice.”

    But that doesn’t mean AI has no place in peer review, said Chaturvedi, who said in a recent interview that she “felt extra pressure to make sure that everything the author was reporting sounds doable” during her 17 years working as an academic peer reviewer in the pre-AI era. As the pace and complexity of scientific discovery keeps accelerating, she said AI can help alleviate some burden on both reviewers and the publishers “handling a large volume of submissions.”

    Chaturvedi cautioned, however, that introducing such technology across the academic publishing process should be transparent and come only after “rigorous” testing.

    “The large language models are only as good as the information you give them,” she said. “We are at a pivotal moment where AI can greatly enhance workflows, but you need careful and strategic planning … That’s the only way to get more successful and sustainable outcomes.”

    Not Equipped to Ensure Quality?

    Ivan Oransky, a medical researcher and co-founder of Retraction Watch, said, “Anything that can be done to filter out the junk that’s currently polluting the scientific literature is a good thing,” and “whether AI can do that effectively is a reasonable question.”

    But beyond that, the publishing industry’s embrace of AI in the name of improving research quality and clearing up peer-review backlogs belies a bigger problem predating the rise of powerful generative AI models.

    “The fact that publishers are now trumpeting the fact that they both are and need to be—according to them—using AI to fight paper mills and other bad actors is a bit of an admission they hadn’t been willing to make until recently: Their systems are not actually equipped to ensure quality,” Oransky said.

    “This is just more evidence that people are trying to shove far too much through the peer-review system,” he added. “That wouldn’t be a problem except for the fact that everybody’s either directly—or implicitly—encouraging terrible publish-or-perish incentives.”


  • Donors Support Grad Students Lacking Federal Research Funds

    Donors Support Grad Students Lacking Federal Research Funds

    Recent federal executive orders from President Donald Trump have put a halt to some university operations, including hiring and large swaths of academic research. The National Institutes of Health and the National Science Foundation, among others, have paused grant-review panels to comply with the orders and cut funding, leaving researchers in limbo.

    Graduate students often receive educational stipends from federal agencies for their research, putting their work—and their own degree attainment—at risk.

    To alleviate some hardships, the University of Hawaiʻi’s UH Foundation launched a Graduate Student Success Fund, which will provide direct relief for learners who have lost funding.

    Fewer than a dozen graduate students in the system have been impacted to various degrees to date, but “like most institutions, the extent of the possible impact is unknown,” a UH spokesperson said.

    On the ground: Michael Fernandez, a first-year UH Mānoa doctoral student in the botany program, is a participant in the National Science Foundation’s Graduate Research Fellowship Program, which supports learners pursuing research-based master’s or doctoral degrees in STEM education fields. The five-year fellowship includes three years of financial aid for tuition and fees and an annual stipend.

    “I and other fellows in the program feel uncertain about future funding from the fellowship,” Fernandez said in a press release. “This is especially concerning for me, as the NSF-GRFP is currently my primary and sole source of funding for my graduate studies.”

    University of Hawaiʻi president Wendy Hensel spurred the creation of the Graduate Student Success Fund for grad students at UH Mānoa and UH Hilo. The fund, supported by private donations, mirrors an undergraduate student success fund available to bachelor’s degree seekers who need help paying for tuition, books and fees.

    The UH Foundation will also support undergraduate researchers who may have had their work interrupted due to federal freezes.

    The Graduate Student Success Fund is designed to aid student retention and financial wellness and also support career development and future talent in Hawaiʻi.

    “It is critical that we do all we can to ensure that our university graduates, the next generation of talent, desperately needed for Hawaiʻi’s workforce,” Hensel said. “These graduate students are our scientists, doctors, nurses, psychologists, social workers, engineers, educators and leaders of tomorrow.”

    Details as to how funds will be distributed, including amounts and number of recipients, are still being determined, the spokesperson said.

    The bigger picture: Federally funded research projects that address diversity, equity, inclusion, gender, green energy or other alleged “far-left ideologies” have come under fire in recent weeks.

    In January Trump signed an executive order halting federal grant spending, which was later rescinded, but some organizations have halted funding regardless.

    Trump Administration Weaponizes Funding Against Institutions

    On March 7, the Trump administration announced it had canceled $400 million in federal grants and contracts to Columbia University for “the school’s continued inaction in the face of persistent harassment of Jewish students.” The federal government has also threatened to pull funding from any educational institution that invests in diversity, equity and inclusion programs.

    In February, the National Institutes of Health announced it would cut funding for indirect costs of conducting medical research, including hazardous waste disposal, utilities and patient safety. In 2024, the agency sent around $26 billion to over 500 grant recipients connected to institutions.

    Hensel published a memo in February opposing the cuts for reimbursement of facilities and administrative costs.

    “For UH, the impact of this decision cannot be overstated,” Hensel wrote. “The university is supported by 175 awards and subawards from the NIH with a current value of $211 million. NIH’s reduction of UH’s current negotiated [indirect compensation] rate of 56.5 percent at the JABSOM [UH Mānoa John A. Burns School of Medicine] and the [UH] Cancer Center alone will eliminate approximately $15 million in funding that UH uses to support its research programs, including ongoing clinical trials and debt service payments.”

    How is your college or university supporting students affected by federal action? Tell us more.


  • Trump: Aus research must disclose vaccine, transgender, DEI or China ties

    Trump: Aus research must disclose vaccine, transgender, DEI or China ties

    US President Donald Trump in the Oval Office of the White House. Picture: Mandel Ngan

    Australian researchers who receive United States funding have been asked to disclose links to China and whether they agree with US President Donald Trump’s “two sexes” executive order.



  • One Million Behind Bars Now Have Access to Academic Research Through JSTOR

    One Million Behind Bars Now Have Access to Academic Research Through JSTOR

    In a significant development for educational access in correctional facilities, the JSTOR Access in Prison (JAIP) program has reached a remarkable milestone, now serving over one million incarcerated learners across the United States. This achievement represents a doubling of the program’s reach in just over a year.

    The program, which provides incarcerated individuals with access to scholarly materials including academic journals, books, and research papers, crossed this threshold in December 2024. Two pivotal agreements helped fuel this expansion: a new partnership with the Federal Bureau of Prisons that introduced JSTOR to two federal facilities, and an expansion of an existing arrangement with the Arizona Department of Corrections, Rehabilitation, and Reentry (ADCRR).

    The ADCRR agreement is particularly noteworthy as it evolved from initially serving approximately 3,000 people enrolled in higher education programs to now reaching nearly 40,000 individuals in Arizona’s prison system, regardless of their educational enrollment status.

    “People in prisons use JSTOR the same way as people on the outside,” said Stacy Burnett, senior manager for the Access in Prison program. She explained that while many users pursue structured educational goals like degrees and certificates, others engage in self-directed learning, highlighting the diverse educational needs being met.

    The impact of this access extends far beyond traditional education. Users have reported that JSTOR has helped them build community connections, save money on research-related expenses, and gain new perspectives on their circumstances. In one remarkable case, research conducted through JSTOR led an incarcerated individual to request a health screening that ultimately saved that individual’s life.

    Some users have even leveraged their research to draft legislation supporting prison reentry programs, with one such proposal currently under consideration in North Carolina’s legislature.

    These success stories underscore the program’s value in developing academic research and analytical skills that can serve as important bridges to life after incarceration. “It’s a valuable reentry tool for civic engagement. It gets people to think more deeply,” Burnett explained.

    Since 2019, the program has seen dramatic growth, supported by grants from the Mellon Foundation and the Ascendium Education Group. Today, more than 95% of U.S. state and federal prison facilities provide access to JSTOR, with the program active in 24 countries worldwide.

    Building on this momentum, the JSTOR Access in Prison program has secured $800,000 in new funding commitments to support expansion into U.S. jails, which typically operate at local rather than state or federal levels.

    Despite the impressive one million user milestone, Burnett emphasizes that this represents just half of the incarcerated population in the United States and only 10% of those incarcerated globally. ITHAKA, JSTOR’s parent organization, has stated its ambition to eventually make educational resources available to all incarcerated individuals worldwide.

    As the program continues to grow, supporters add that it’s a powerful example of how access to educational resources can transform lives, even within the constraints of incarceration.


  • Layoffs Gut Federal Education Research Agency

    Layoffs Gut Federal Education Research Agency

    Five years after the COVID-19 pandemic first forced schools and colleges into remote learning, researchers, policymakers and higher education leaders may no longer have access to the federal data they need to gather a complete picture of how those disruptions have affected a generation of students long term—or hold states and colleges accountable for the interventions they deployed to address the fallout.

    That’s because the National Center for Education Statistics, the Education Department’s data-collection arm that’s administered surveys and studies about the state of K-12, higher education and the workforce since 1867, is suddenly a shell of itself.

    The NCES is now down to five employees, after the department fired nearly half its staff earlier this week. The broader Institute of Education Sciences, which houses NCES, also lost more than 100 employees as part of President Donald Trump’s campaign to eliminate alleged “waste, fraud and abuse” in federal funding.

    The mass firings come about a month after federal education data collection took another big blow: In February, the department cut nearly $900 million in contracts at IES, which ended what some experts say was critical research into schools and fueled layoffs at some of the research firms that held those contracts, including MDRC, Mathematica, NORC and Westat.

    Although Trump and his allies have long blamed COVID-related learning loss on President Joe Biden’s approval of prolonged remote learning, numerous experts told Inside Higher Ed that without some of the federal data the NCES was collecting, it will be hard to draw definitive conclusions about those or any other claims about national education trends.

    ‘Backbone of Accountability’

    “The backbone of accountability for our school systems begins with simply collecting data on how well they’re doing. The fact that our capacity to do that is being undermined is really indefensible,” said Thomas Dee, a professor at Stanford University’s Graduate School of Education and research associate at the National Bureau of Economic Research. “One could conceive this as part of an agenda to undermine the very idea of truth and evidence in public education.”

    But the Education Department says its decision to nearly eliminate the NCES and so many IES contracts is rooted in what it claims are the agency’s own failures.

    “Despite spending hundreds of millions in taxpayer funds annually, IES has failed to effectively fulfill its mandate to identify best practices and new approaches that improve educational outcomes and close achievement gaps for students,” Madi Biedermann, deputy assistant secretary for communications at the department, said in an email to Inside Higher Ed Thursday.

    Biedermann said the department plans to restructure IES in the coming months in order to provide “states with more useful data to improve student outcomes while maintaining rigorous scientific integrity and cost effectiveness.”

    But many education researchers disagree with that characterization of IES and instead view it as an unmatched resource for informing higher education policy decisions.

    “Some of these surveys allow us to know if people are being successful in college. It tells us where those students are enrolled in college and where they came from. For example, COVID impacted everyone, but it had a disproportionate impact on specific regions in the U.S. and specific social and socioeconomic groups in the U.S.,” said Taylor Odle, an assistant professor of educational policy studies at the University of Wisconsin at Madison.

    “Post-COVID, states and regions have implemented a lot of interventions to help mitigate learning loss and accelerate learning for specific individuals. We’ll be able to know by comparing region to region or school to school whether or not those gaps increased or reduced in certain areas.”

    Without uniform federal data to ground comparisons of pandemic-related and other student success interventions, it will be harder to hold education policymakers accountable, Odle and others told Inside Higher Ed this week. However, Odle believes that may be the point of the Trump administration’s assault on the Education Department’s research arm.

    “It’s in essence a tacit statement that what they are doing may potentially be harmful to students and schools, and they don’t want the American public or researchers to be able to clearly show that,” he said. “By eliminating these surveys and data collection, and reducing staff at the Department of Education who collect, synthesize and report the data, every decision-maker—regardless of where they fall on the political spectrum—is going to be limited in the data and information they have access to.”

    Scope of Data Loss Unclear

    It’s not clear how many of the department’s dozens of data-collection programs—including those related to early childhood education, college student outcomes and workforce readiness—will be downsized or ended as a result of the cuts. The department did not respond to Inside Higher Ed’s request for clarity on exactly which contracts were canceled. (It did confirm, however, that it still maintains contracts for the National Assessment of Educational Progress, the College Scorecard and the Integrated Postsecondary Education Data System.)

    A now-fired longtime NCES employee who asked to remain anonymous out of fear of retaliation said they and others who worked on those data-collection programs for years are still in the dark on the future of many of the other studies IES administers.

    “We’ve been out of the loop on all these conversations about the state of these studies. That’s been taking place at a higher level—or outside of NCES entirely,” said the terminated employee. “What these federal sources do is synthesize all the different other data sources that already exist to provide a more comprehensive national picture in a way that saves researchers a lot of the trouble of having to combine these different sources themselves and match them up. It provides consistent methodologies.”

    Even if some of the data-collection programs continue, there will be hardly any NCES staff to help researchers and policymakers accurately navigate new or existing data, which was the primary function of most workers there.

    “We are a nonpartisan agency, so we’ve always shied away from interpreting or making value judgments about what the data say,” the fired NCES worker said. “We are basically a help desk and support resource for people who are trying to use this data in their own studies and their own projects.”

    ‘Jeopardizing’ Strong Workforce

    One widely used data set with an uncertain future is the Beginning Postsecondary Students Longitudinal Study—a detailed survey that has followed cohorts of first-time college students over a period of six to eight years since 1989. The latest iteration of the BPS survey has been underway since 2019, and it included questions meant to illuminate the long-term effects of pandemic-related learning loss. But like many other NCES studies, data collection for BPS has been on pause since last month, when the department pulled the survey’s contract with the Research Triangle Institute.

    In a blog post the Institute for Higher Education Policy published Wednesday, the organization noted that BPS is intertwined with the National Postsecondary Student Aid Study, which is a comprehensive nationwide study designed to determine how students and their families pay for college and demographic characteristics of those enrolled.

    The two studies “are the only federal data sources that provide comprehensive insights into how students manage college affordability, stay enrolled and engaged with campus resources, persist to completion, and transition to the workforce,” Taylor Myers, assistant director of research and policy, wrote. “Losing these critical data hinders policy improvements and limits our understanding of the realities students face.”

    That post came one day after IHEP sent members of Congress a letter signed by a coalition of 87 higher education organizations and individual researchers urging lawmakers to demand transparency about why the department slashed funding for postsecondary data collection.

    “These actions weaken our capacity to assess and improve educational and economic outcomes for students—directly jeopardizing our ability to build a globally competitive workforce,” the letter said. “Without these insights, policymakers will soon be forced to make decisions in the dark, unable to steward taxpayer dollars efficiently.”

    Picking Up the Slack

    But not every education researcher believes federal data is as vital to shaping education policy and evaluating interventions as IHEP’s letter claims.

    “It’s unclear that researchers analyzing those data have done anything to alter outcomes for students,” said Jay Greene, a senior research fellow in the Center for Education Policy at the right-wing Heritage Foundation. “Me being able to publish articles is not the same thing as students benefiting. We have this assumption that research should prove things, but in the world of education, we have very little evidence of that.”

    Greene, who previously worked as a professor of education policy at the University of Arkansas, said he never used federal data in his assessments of educational interventions and instead used state-level data or collected his own. “Because states and localities actually run schools, they’re in a position to do things that might make it better or worse,” he said. “Federal data is just sampling … It’s not particularly useful for causal research designs to develop practices and interventions that improve education outcomes.”

    Other researchers have a more measured view of what needs to change in federal education data collection.

    Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, has previously called for reforms at IES, arguing that some of the studies are too expensive without enough focus on educators’ evolving priorities, which as of late include literacy, mathematics and how to handle the rise of artificial intelligence.

    But taking a sledgehammer to NCES isn’t the reform she had in mind. Moreover, she said blaming federal education data collections and researchers for poor education outcomes is “completely ridiculous.”

    “There’s a breakdown between knowledge and practice in the education world,” Lake said. “We don’t adopt things that work at the scale we need to, but that’s not on researchers or the quality of research that’s being produced.”

    But just because federal education data collection may not focus on specific interventions, “that doesn’t mean those data sets aren’t useful,” said Christina Whitfield, senior vice president and chief of staff for the State Higher Education Executive Officers Association.

    “A lot of states have really robust data systems, and in a lot of cases they provide more detail than the federal data systems do,” she said. “However, one of the things the federal data provides is a shared language and common set of definitions … If we move toward every state defining these key elements individually or separately, we lose a lot of comparability.”

    If many of the federal data collection projects aren’t revived, Whitfield said other entities, including nonprofits and corporations, will likely step in to fill the void. But that likely won’t be a seamless transition without consequence.

    “At least in the short term, there’s going to be a real issue of how to vet those different solutions and determine which is the highest-quality, efficient and most useful response to the information vacuum we’re going to experience,” Whitfield said. And even if there’s just a pause on some of the data collections and federal contracts are able to resume eventually, “there’s going to be a gap and a real loss in the continuity of that data and how well you can look back longitudinally.”


  • Three Ways Faculty Are Using AI to Lighten Their Professional Load

    Three Ways Faculty Are Using AI to Lighten Their Professional Load


    Our most recent research into the working lives of faculty gave us some interesting takeaways about higher education’s relationship with AI. While every faculty member’s thoughts about AI differ and no two experiences are the same, the general trend we’ve seen is that faculty have moved from fear to acceptance. Many faculty were initially concerned about AI’s arrival on campus, a concern amplified by a perceived rise in AI-enabled cheating and plagiarism among students. Despite that, many faculty have come to accept that AI is here to stay, and some have developed working strategies to ensure that they and their students know the boundaries of AI usage in the classroom.

    Early-adopting educators aren’t just navigating around AI. They have embraced it and integrated it into their working lives, learning to use AI tools to save time and make their work easier. In fact, over half of instructors reported that they wanted to use AI for administrative tasks and 10% were already doing so. (Find the highlights here.) As more faculty see the potential in AI, that number has likely risen. So, in what ways are faculty already using AI to lighten the load of professional life? Here are three use cases we learned about from education professionals:

    1. AI to jumpstart ideas and conversations

    “Give me a list of 10 German pop songs that contain irregular verbs.”

    “Summarize the five most contentious legal battles happening in U.S. media law today.”

    “Create a set of flashcards that review the diagnostic procedure and standard treatment protocol for asthma.”

    The possibilities (and the prompts!) are endless. AI is well placed to assist with idea generation, conversation starters and lesson materials for educators on any topic. It’s worth noting that AI tends to prove most helpful as a starting point for teaching and learning fodder, rather than as a source of fully baked responses and ideas. Those who expect the latter may be disappointed, as the quality of AI results can vary widely depending on the topic. Educators can and should, of course, always be the final reviewers of the accuracy of anything shared in class.

    2. AI to differentiate instruction

    Faculty have told us that they spend a hefty proportion (around 28%) of their time on course preparation. Differentiating instruction for the various learning styles and levels in any given class constitutes a big part of that prep work. A particular lesson may land well with a struggling student, but might feel monotonous for an advanced student who has already mastered the material. To that end, some faculty are using AI to readily differentiate lesson plans. For example, an English literature instructor might enter a prompt like, “I need two versions of a lesson plan about ‘The Canterbury Tales;’ one for fluent English speakers and one for emergent English speakers.” This simple step can save faculty hours of manual lesson plan differentiation.
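
    As a concrete, entirely illustrative example of scripting this kind of differentiation, the sketch below loops one topic over two audiences using the OpenAI Python client. The prompts, model name, and audience labels are assumptions for illustration; any chat-style LLM API would work the same way.

    ```python
    # A minimal sketch of generating differentiated lesson-plan drafts.
    # Assumes the OpenAI Python client (pip install openai) and an
    # OPENAI_API_KEY in the environment; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    AUDIENCES = ["fluent English speakers", "emergent English speakers"]

    def lesson_plan_draft(topic: str, audience: str) -> str:
        """Ask the model for one audience-specific lesson-plan draft."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You draft concise lesson plans for an instructor to review and edit."},
                {"role": "user",
                 "content": f"Write a one-page lesson plan about {topic} for {audience}."},
            ],
        )
        return response.choices[0].message.content

    for audience in AUDIENCES:
        print(f"--- Draft for {audience} ---")
        print(lesson_plan_draft("'The Canterbury Tales'", audience))
    ```

    As with any AI-generated material, the drafts are starting points for the instructor to review, not finished lessons.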

    An instructor in Kansas shared with Cengage their plans to let AI help in this area, “I plan to use AI to evaluate students’ knowledge levels and learning abilities and create personalized training content. For example, AI will assess all the students at the beginning of the semester and divide them into ‘math-strong’ and ‘math-weak’ groups based on their mathematical aptitude, and then automatically assign math-related materials, readings and lecture notes to help the ‘math-weak’ students.”

    When used in this way, AI can be a powerful tool that gives students of all backgrounds an equal edge in understanding and retaining difficult information.

    3. AI to provide feedback

    Reviewing the work of dozens or hundreds of students and finding common threads and weak spots is tedious work, and seems an obvious area for a little algorithmic assistance.

    Again, faculty should remain in control of the feedback they provide to students. After all, students fully expect faculty members to review and critique their work authentically. However, using AI to more deeply understand areas where a student’s logic may be consistently flawed, or types of work on which they repeatedly make mistakes, can be a game-changer, both for educators and students.

    An instructor in Iowa told Cengage, “I don’t want to automate my feedback completely, but having AI suggest areas of exigence in students’ work, or supply me with feedback options based on my own past feedback, could be useful.”
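
    A similar sketch shows what the workflow that instructor describes might look like: past feedback comments go in, recurring weak spots come out. The comments, prompt, and model name below are placeholders, not a recommended implementation.

    ```python
    # A minimal sketch of surfacing recurring issues from an instructor's
    # own past feedback. Assumes the OpenAI Python client; the comments
    # and model name are placeholders.
    from openai import OpenAI

    client = OpenAI()

    past_feedback = [
        "Thesis is clear, but the second body paragraph lacks evidence.",
        "Citations are missing page numbers again.",
        "Strong introduction; the conclusion restates rather than synthesizes.",
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You help an instructor spot recurring weaknesses in their own past feedback."},
            {"role": "user",
             "content": "List recurring issues across these comments:\n- " + "\n- ".join(past_feedback)},
        ],
    )
    print(response.choices[0].message.content)
    ```

    The instructor stays in the loop: the model only aggregates patterns in feedback the instructor already wrote.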

    Some faculty may even choose to have students ask AI for feedback themselves as part of a critical thinking or review exercise. Ethan and Lilach Mollick of the Wharton School of the University of Pennsylvania share in a Harvard Business Publishing Education article, “Though AI-generated feedback cannot replicate the grounded knowledge that teachers have about their students, it can be given quickly and at scale and it can help students consider their work from an outside perspective. Students can then evaluate the feedback, decide what they want to incorporate, and continue to iterate on their drafts.”

    AI is not a “fix-all” for the administrative side of higher education. However, many faculty members are gaining an advantage and getting some time back by using it as something of a virtual assistant.


    Are you using AI in the classroom?

    In a future piece, we’ll share three more ways in which faculty are using AI to make their working lives easier. In the meantime, you can fully explore our research here.

  • DOGE Education Cuts Hit Students with Disabilities, Literacy Research – The 74

    DOGE Education Cuts Hit Students with Disabilities, Literacy Research – The 74



    When teens and young adults with disabilities in California’s Poway Unified School District heard about a new opportunity to get extra help planning for life after high school, nearly every eligible student signed up.

    The program, known as Charting My Path for Future Success, aimed to fill a major gap in education research about what kinds of support give students nearing graduation the best shot at living independently, finding work, or continuing their studies.

    Students with disabilities finish college at much lower rates than their non-disabled peers, and often struggle to tap into state employment programs for adults with disabilities, said Stacey McCrath-Smith, a director of special education at Poway Unified, which had 135 students participating in the program. So the extra help, which included learning how to track goals on a tool designed for high schoolers with disabilities, was much needed.

    Charting My Path launched earlier this school year in Poway Unified and 12 other school districts. The salaries of 61 school staff nationwide, and the training they received to work with nearly 1,100 high schoolers with disabilities for a year and a half, was paid for by the U.S. Department of Education.

    Jessie Damroth’s 17-year-old son Logan, who has autism, attention deficit hyperactivity disorder, and other medical needs, had attended classes and met with his mentor through the program at Newton Public Schools in Massachusetts for a month. For the first time, he was talking excitedly about career options in science and what he might study at college.

    “He was starting to talk about what his path would look like,” Damroth said. “It was exciting to hear him get really excited about these opportunities. … He needed that extra support to really reinforce that he could do this.”

    Then the Trump administration pulled the plug.

    Charting My Path was among more than 200 Education Department contracts and grants terminated over the last two weeks by the Trump administration’s U.S. DOGE Service. DOGE has slashed spending it deemed to be wasteful, fraudulent, or in service of diversity, equity, inclusion, and accessibility goals that President Donald Trump has sought to ban. But in several instances, the decision to cancel contracts affected more than researchers analyzing data in their offices — it affected students.

    Many projects, like Charting My Path, involved training teachers in new methods, testing learning materials in actual classrooms, and helping school systems use data more effectively.

    “Students were going to learn really how to set goals and track progress themselves, rather than having it be done for them,” McCrath-Smith said. “That is the skill that they will need post-high school when there’s not a teacher around.”

    All of that work was abruptly halted — in some cases with nearly finished results that now cannot be distributed.

    Every administration is entitled to set its own priorities, and contracts can be canceled or changed, said Steven Fleischman, an education consultant who for many years ran one of the regional research programs that was terminated. He compared it to a homeowner deciding they no longer want a deck as part of their remodel.

    But the current approach reminds him more of construction projects started and then abandoned during the Great Recession, in some cases leaving giant holes that sat for years.

    “You can walk around and say, ‘Oh, that was a building we never finished because the funds got cut off,’” he said.

    DOGE drives cuts to education research contracts, grants

    The Education Department has been a prime target of DOGE, the chaotic cost-cutting initiative led by billionaire Elon Musk, now a senior adviser to Trump.

    So far, DOGE has halted 89 education projects, many of which were under the purview of the Institute of Education Sciences, the ostensibly independent research arm of the Education Department. The administration said those cuts, which included multi-year contracts, totaled $881 million. In recent years, the federal government has spent just over $800 million on the entire IES budget.

    DOGE has also shut down 10 regional labs that conduct research for states and local schools and shuttered four equity assistance centers that help with teacher training. The Trump administration also cut off funding for nearly 100 teacher training grants and 18 grants for centers that often work to improve instruction for struggling students.

    The total savings is up for debate. The Trump administration said the terminated Education Department contracts and grants were worth $2 billion. But some were near completion with most of the money already spent.

    An NPR analysis of all of DOGE’s reported savings found that it likely was around $2 billion for the entire federal government — though the Education Department is a top contributor.

    On Friday, a federal judge issued an injunction that temporarily blocks the Trump administration from canceling additional contracts and grants that might violate the anti-DEIA executive order. It’s not clear whether the injunction would prevent more contracts from being canceled “for convenience.”

    Mark Schneider, the former IES director, said the sweeping cuts represent an opportunity to overhaul a bloated education research establishment. But even many conservative critics have expressed alarm at how wide-ranging and indiscriminate the cuts have been. Congress mandated many of the terminated programs, which also indirectly support state and privately funded research.

    The canceled projects include contracts that support maintenance of the Common Core of Data, a major database used by policymakers, researchers, and journalists, as well as work that supports updates to the What Works Clearinghouse, a huge repository of evidence-based practices available to educators for free.

    And after promising not to make any cuts to the National Assessment of Educational Progress, known as the nation’s report card, the department canceled an upcoming test for 17-year-olds that helps researchers understand long-term trends. On Monday, Peggy Carr, the head of the National Center for Education Statistics, which oversees NAEP, was placed on leave.

    The Education Department did not respond to questions about who decided which programs to cut and what criteria were used. Nor did the department respond to a specific question about why Charting My Path was eliminated. DOGE records estimate the administration saved $22 million by terminating the program early, less than half the $54 million in the original contract.

    The decision has caused mid-year disruptions and uncertainty.

    In Utah, the Canyons School District is trying to reassign the school counselor and three teachers whose salaries were covered by the Charting My Path contract.

    The district, which had 88 high schoolers participating in the program, is hoping to keep using the curriculum to boost its usual services, said Kirsten Stewart, a district spokesperson.

    Officials in Poway Unified, too, hope schools can use the curriculum and tools to keep up a version of the program. But that will take time and work because the program’s four teachers had to be reassigned to other jobs.

    “They dedicated that time and got really important training,” McCrath-Smith said. “We don’t want to see that squandered.”

    For Damroth, the loss of parent support meetings through Charting My Path was especially devastating. Logan has a rare genetic mutation that causes him to fall asleep easily during the day, so Damroth wanted help navigating which colleges might be able to offer extra scheduling support.

    “I have a million questions about this. Instead of just hearing ‘I don’t know’ I was really looking forward to working with Joe and the program,” she said, referring to Logan’s former mentor. “It’s just heartbreaking. I feel like this wasn’t well thought out. … My child wants to do things in life, but he needs to be given the tools to achieve those goals and those dreams that he has.”

    DOGE cuts labs that helped ‘Mississippi Miracle’ in reading

    The dramatic improvement in reading proficiency that Carey Wright oversaw as state superintendent in one of the nation’s poorest states became known as the “Mississippi Miracle.”

    Regional Educational Laboratory Southeast, based out of the Florida Center for Reading Research at Florida State University, was a key partner in that work, Wright said.

    When Wright wondered if state-funded instructional coaches were really making a difference, REL Southeast dispatched a team to observe, videotape, and analyze the instruction delivered by hundreds of elementary teachers across the state. Researchers reported that teachers’ instructional practices aligned well with the science of reading and that teachers themselves said they felt far more knowledgeable about teaching reading.

    “That solidified for me that the money that we were putting into professional learning was working,” Wright said.

    The study, she noted, arose from a casual conversation with researchers at REL Southeast: “That’s the kind of give and take that the RELs had with the states.”

    Wright, now Maryland state superintendent, said she was looking forward to partnering with REL Mid-Atlantic on a math initiative and on an overhaul of the school accountability system.

    But this month, termination letters went out to the universities and research organizations that run the 10 Regional Educational Laboratories, which were established by Congress in 1965 to serve states and school districts. The letters said the contracts were being terminated “for convenience.”

    The press release that went to news organizations cited “wasteful and ideologically driven spending” and named a single project in Ohio that involved equity audits as a part of an effort to reduce suspensions. Most of the REL projects on the IES website involve reading, math, career connections, and teacher retention.

    Jannelle Kubinec, CEO of WestEd, an education research organization that held the contracts for REL West and REL Northwest, said she never received a complaint or a request to review the contracts before receiving termination letters. Her team had to abruptly cancel meetings to go over results with school districts. In other cases, reports are nearly finished but cannot be distributed because they haven’t gone through the review process.

    REL West was also working with the Utah State Board of Education to figure out if the legislature’s investment in programs to keep early career teachers from leaving the classroom was making a difference, among several other projects.

    “This is good work and we are trying to think through our options,” she said. “But the cancellation does limit our ability to finish the work.”

    Given enough time, Utah should be able to find a staffer to analyze the data collected by REL West, said Sharon Turner, a spokesperson for the Utah State Board of Education. But the findings are much less likely to be shared with other states.

    The most recent contracts started in 2022 and were set to run through 2027.

    The Trump administration said it planned to enter into new contracts for the RELs to satisfy “statutory requirements” and better serve schools and states, though it’s unclear what that will entail.

    “The states drive the research agendas of the RELs,” said Sara Schapiro, the executive director of the Alliance for Learning Innovation, a coalition that advocates for more effective education research. If the federal government dictates what RELs can do, “it runs counter to the whole argument that they want the states to be leading the way on education.”

    Some terminated federal education research was nearly complete

    Some research efforts were nearly complete when they got shut down, raising questions about how efficient these cuts were.

    The American Institutes for Research, for example, was almost done evaluating the impact of the Comprehensive Literacy State Development program, which aims to improve literacy instruction through investments like new curriculum and teacher training.

    AIR’s research spanned 114 elementary schools across 11 states and involved more than 23,000 third, fourth, and fifth graders and their nearly 900 reading teachers.

    Researchers had collected and analyzed a massive trove of data from the randomized trial and presented their findings to federal education officials just three days before the study was terminated.

    “It was a very exciting meeting,” said Mike Garet, a vice president and institute fellow at AIR who oversaw the study. “People were very enthusiastic about the report.”

    Another AIR study that was nearing completion looked at the use of multi-tiered systems of support for reading among first and second graders. It’s a strategy that helps schools identify and provide support to struggling readers, with the most intensive help going to kids with the highest needs. It’s widely used by schools, but its effectiveness hasn’t been tested on a larger scale.

    The research took place in 106 schools and involved over 1,200 educators and 5,700 children who started first grade in 2021 and 2022. Much of the funding for the study went toward paying for teacher training and coaching to roll out the program over three years. All of the data was collected and nearly done being analyzed when DOGE made its cuts.

    Garet doesn’t think he and his team should simply walk away from unfinished work.

    “If we can’t report results, that would violate our covenant with the districts, the teachers, the parents, and the students who devoted a lot of time in the hope of generating knowledge about what works,” Garet said. “Now that we have the data and have the results, I think we’re duty-bound to report them.”

    This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters at ckbe.at/newsletters.




  • AI Support for Teachers

    AI Support for Teachers

    Collaborative Classroom, a leading nonprofit publisher of K–12 instructional materials, announces the publication of SIPPS, a systematic decoding program. Now in its fifth edition, this research-based program accelerates mastery of vital foundational reading skills for both new and striving readers.

    Twenty-Five Years of Transforming Literacy Outcomes

    “As educators, we know the ability to read proficiently is one of the strongest predictors of academic and life success,” said Kelly Stuart, President and CEO of Collaborative Classroom. “Third-party studies have proven the power of SIPPS. This program has a 25-year track record of transforming literacy outcomes for students of all ages, whether they are kindergarteners learning to read or high schoolers struggling with persistent gaps in their foundational skills.

    “By accelerating students’ mastery of foundational skills and empowering teachers with the tools and learning to deliver effective, evidence-aligned instruction, SIPPS makes a lasting impact.”

    What Makes SIPPS Effective?

    Aligned with the science of reading, SIPPS provides explicit, systematic instruction in phonological awareness, spelling-sound correspondences, and high-frequency words. 

    Through differentiated small-group instruction tailored to students’ specific needs, SIPPS ensures every student receives the necessary targeted support—making the most of every instructional minute—to achieve grade-level reading success.

    “SIPPS is uniquely effective because it accelerates foundational skills through its mastery-based and small-group targeted instructional design,” said Linda Diamond, author of the Teaching Reading Sourcebook. “Grounded in the research on explicit instruction, SIPPS provides ample practice, active engagement, and frequent response opportunities, all validated as essential for initial learning and retention of learning.”

    Personalized, AI-Powered Teacher Support

    Educators using SIPPS Fifth Edition have access to a brand-new feature: immediate, personalized responses to their implementation questions with CC AI Assistant, a generative AI-powered chatbot.

    Exclusively trained on Collaborative Classroom’s intellectual content and proprietary program data, CC AI Assistant provides accurate, reliable information for educators.

    Other Key Features of SIPPS, Fifth Edition

    • Tailored Placement and Progress Assessments: A quick, 3–8 minute placement assessment ensures each student starts exactly at their point of instructional need. Ongoing assessments help monitor progress, adjust pacing, and support grouping decisions.
    • Differentiated Small-Group Instruction: SIPPS maximizes instructional time by focusing on small groups of students with similar needs, ensuring targeted, effective teaching.
    • Supportive of Multilingual Learners: Best practices in multilingual learner (ML) instruction and English language development strategies are integrated into the design of SIPPS.
    • Engaging and Effective for Older Readers: SIPPS Plus and SIPPS Challenge Level are specifically designed for students in grades 4–12, offering age-appropriate texts and instruction to close lingering foundational skill gaps.
    • Multimodal Supports: Integrated visual, auditory, and kinesthetic-tactile strategies help all learners, including multilingual students.
    • Flexible, Adaptable, and Easy to Teach: Highly supportive for teachers, tutors, and other adults working in classrooms and expanded learning settings, SIPPS is easy to implement well. A wraparound system of professional learning support ensures success for every implementer.

    Accelerating Reading Success for Students of All Ages

    In small-group settings, students actively engage in routines that reinforce phonics and decoding strategies, practice with aligned texts, and receive immediate feedback—all of which contribute to measurable gains.

    “With SIPPS, students get the tools needed to read, write, and understand text that’s tailored to their specific abilities,” said Desiree Torres, ENL teacher and 6th Grade Team Lead at Dr. Richard Izquierdo Health and Science Charter School in New York. “The boost to their self-esteem when we conference about their exam results is priceless. Each and every student improves with the SIPPS program.” 


  • The National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to allocate research funding — here’s what they should do instead

    The National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to allocate research funding — here’s what they should do instead

    In December, The Wall Street Journal reported:

    [President-elect Donald Trump’s nominee to lead the National Institutes of Health] Dr. Jay Bhattacharya […] is considering a plan to link a university’s likelihood of receiving research grants to some ranking or measure of academic freedom on campus, people familiar with his thinking said. […] He isn’t yet sure how to measure academic freedom, but he has looked at how a nonprofit called Foundation for Individual Rights in Education scores universities in its freedom-of-speech rankings, a person familiar with his thinking said.

    We believe in and stand by the importance of the College Free Speech Rankings. More attention to the deleterious effect that restrictions on free speech and academic freedom have on research at our universities is desperately needed, so hearing that the rankings are being considered as a guidepost for NIH grantmaking is heartening. Dr. Bhattacharya’s own right to academic freedom was challenged by his Stanford University colleagues, so his concerns about its effect on NIH grants are understandable.

    However, our College Free Speech Rankings are not the right tool for this particular job. They were designed with a specific purpose in mind — to help students and parents find campuses where students are both free and comfortable expressing themselves. They were not intended to evaluate the climate for conducting academic research on individual campuses and are a bad fit for that purpose. 

    While the rankings assess speech codes that apply to students, they do not currently assess policies pertaining to the academic freedom rights and research conduct of professors, who are the primary recipients of NIH grants. Nor do they assess faculty sentiment about their campus climates. It would be a mistake to use the rankings beyond their intended purpose, and if they were used to deny funding for important research that would in fact be properly conducted, that mistake would be extremely costly.

    FIRE instead proposes three more appropriate ways for NIH to use its considerable power to improve academic freedom on campus and to ensure research is conducted in an environment conducive to producing accurate results.

    1. Use grant agreements to safeguard academic freedom as a strong contractual right. 
    2. Encourage open data practices to promote research integrity.
    3. Incentivize universities to study their campus climates for academic freedom.

    Why should the National Institutes of Health care about academic freedom at all?

    The pursuit of truth demands that researchers be able to follow the science wherever it leads, without fear, favor, or external interference. NIH therefore has a strong interest in ensuring that academic freedom rights are inviolable.

    As a steward of considerable taxpayer money, NIH has an obligation to ensure it spends its funds on high-quality research free from censorship or other interference from politicians or college and university administrators.

    Why the National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to decide where to send funds

    FIRE’s College Free Speech Rankings (CFSR) were never intended for use in determining research spending. As such, they have a number of design features that make them ill-suited to that purpose, either in their totality or through their constituent parts.

    Firstly, like the U.S. News & World Report college rankings, a key reason for the creation of the CFSR was to provide information to prospective undergraduate students and their parents. As such, the rankings heavily emphasize students’ perceptions of the campus climate over the perceptions of faculty or researchers. In line with that student focus, our attitude and climate components are based on a survey of undergraduates. Additionally, the speech policies that we evaluate and incorporate into the rankings are those that affect students. We do not evaluate policies that affect faculty and researchers, which are often different and would be of greater relevance to decisions about research funding. While it makes sense that there may be some correlation, we have no way of knowing whether, or to what degree, that is true.

    Secondly, for the component that most directly implicates the academic freedom of faculty, we penalize schools for attempts to sanction scholars for their protected speech, as tracked in our Scholars Under Fire database. While our Scholars Under Fire database provides excellent datapoints for understanding the climate at a university, it does not function as a systematic proxy for assessing academic freedom on a given campus as a whole. As one example, a university with relatively strong protection for academic freedom may have vocal professors with unpopular viewpoints that draw condemnation and calls for sanction that could hurt its ranking, while a climate where professors feel too afraid to voice controversial opinions could draw relatively few calls for sanction and thus enjoy a higher ranking. This shortcoming is mitigated when considered alongside the rest of our rankings components, but as discussed above, those other components mostly concern students rather than faculty.

    Thirdly, using the CFSR to determine NIH funding could, counterintuitively, be exploited by vigilante censors. Because we penalize schools for attempted and successful shoutdowns, the possibility of a loss of NIH funding could incentivize activists who want leverage over a university to disrupt as many events as possible in order to drag down its ranking, and with it the university’s funding prospects. Even the threat of disruption could thus give censors undue power over a university administration that fears loss of funding.

    Finally, due to resource limitations, we do not rank all research universities. It would not be fair to deny funding to an unranked university or to fund an unranked university with a poor speech climate over a low-ranked university.

    Legal boundaries for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    While NIH has considerable latitude to determine how it spends taxpayer money, it is an arm of the government, and the First Amendment therefore places restrictions on how it may use that power. Notably, any solution must not penalize institutions for protected speech or scholarship by students or faculty unrelated to NIH-funded projects. NIH could not, for example, require that a university quash protected protests as a criterion of eligibility, or deny a university eligibility because of controversial research undertaken by a scholar who does not work on NIH-funded research.

    While NIH can (and effectively must) consider the content of applications in determining what to fund, eligibility must be open to all regardless of viewpoint. Even were this not the case as a constitutional matter (and it very much is), it would be important as a prudential matter. People would understandably distrust, if not outright disbelieve, scientific results obtained through a grant process with an obvious ideological filter. Indeed, that is the root of much of the current skepticism over federally funded science, and exactly the situation academic freedom is intended to avoid.

    Additionally, NIH cannot impose a political litmus test on an individual or an institution, or compel an institution or individual to take a position on political or scientific issues as a condition of grant funding.

    In other words, any solution to improve academic freedom:

    • Must be viewpoint neutral;
    • Must not impose an ideological or political litmus test; and
    • Must not penalize an institution for protected speech or scholarship by its scholars or students.

    Guidelines for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    NIH should carefully tailor any solution to directly enhance academic freedom and to further NIH’s goal “to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science.” Going beyond that purpose to touch on issues and policies that don’t directly affect the conduct of NIH grant-funded research may leave such a policy vulnerable to legal challenge.

    Any solution should, similarly, avoid using vague or politicized terms such as “wokeness” or “diversity, equity, and inclusion.” Doing so creates needless skepticism of the process and — as FIRE knows all too well — introduces uncertainty as professors and institutions parse what is and isn’t allowed.

    Enforcement mechanisms should be a function of contractual promises of academic freedom, rather than left to apathetic accreditors or the unbounded whims of bureaucrats on campus or officials in government, for several reasons. 

    Regarding accreditors, FIRE over the years has reported many violations of academic freedom to accreditors who require institutions to uphold academic freedom as a precondition for their accreditation. Up to now, the accreditors FIRE has contacted have shown themselves wholly uninterested in enforcing their academic freedom requirements.

    When it comes to administrators, FIRE has documented countless examples of campus administrators violating academic freedom, either due to politics, or because they put the rights of the professor second to the perceived interests of their institution.

    As for government actors, we have seen priorities and politics shift dramatically from one administration to the next. It would be best for everyone involved if NIH funding did not ping-pong between ideological poles with each presidential election, as the Title IX regulations now do. Dramatic changes to how NIH conceives of academic freedom under every new political administration would only create uncertainty that is sure to further chill speech and research.

    While the courts have been decidedly imperfect protectors of academic freedom, they have a better record than accreditors, administrators, or partisan government officials in parsing protected conduct from unprotected conduct. And that will likely be even more true with a strong, unambiguous contractual promise of academic freedom. Speaking of which…

    The National Institutes of Health should condition grants of research funds on recipient institutions adopting a strong contractual promise of academic freedom for their faculty and researchers

    The most impactful change NIH could enact would be to require, as a condition of eligibility, that institutions adopt strong academic freedom commitments, such as the 1940 Statement of Principles on Academic Freedom and Tenure or a similar statement, and make those commitments explicitly enforceable as a contractual right for their faculty members and researchers.

    The status quo for academic freedom is one where nearly every institution of higher education makes promises of academic freedom and freedom of expression to its students and faculty. Yet only at public universities, where the First Amendment applies, are these promises construed with any consistency as an enforceable legal right. 

    Private universities, when sued for violating their promises of free speech and academic freedom, frequently argue that those promises are purely aspirational and that they are not bound by them (often at the same time that they argue faculty and students are bound by the policies). 

    Too often, courts accept this and universities prevail despite the obvious hypocrisy. NIH could stop private universities’ attempts to have their cake and eat it too by requiring them to legally stand by the promises of academic freedom that they so readily abandon when it suits them.

    NIH could additionally require that this contractual promise come with standard due process protections for those filing grievances at their institution, including:

    • The right to bring an academic freedom grievance before an objective panel;
    • The right to present evidence;
    • The right to speedy resolution;
    • The right to written explanation of findings including facts and reasons; and
    • The right to appeal.

    If the professor exhausts these options, they may sue for breach of contract. To reduce the burden of litigation, NIH could require that, if a faculty member prevails in a lawsuit over a violation of academic freedom, the violating institution be ineligible for future NIH funding until it pays the legal fees of the aggrieved faculty member.

    NIH could also study the problem by creating a system for those connected to NIH-funded research to report violations of academic freedom or scientific integrity.

    It would further be proper for NIH to require institutions to eliminate any political litmus tests, such as mandatory DEI statements, as a condition of grant eligibility.

    The National Institutes of Health can implement strong measures to protect transparency and integrity in science

    NIH could encourage open science and transparency by heavily favoring studies that are pre-registered. Additionally, to obviate concerns that scientific results may be suppressed or buried because they are unpopular or politically inconvenient, NIH could require grant-funded projects to make their data available (with proper privacy safeguards) upon completion.

    To help deal with the perverse incentives that have created the replication crisis and undermined public trust in science, NIH could create impactful incentives for work on replications and the publication of null results.

    Finally, NIH could help prevent the abuse of Institutional Review Boards. When IRB review is appropriate for an NIH-funded project, NIH could require that review be limited to the standards laid out in the gold-standard Belmont Report. Additionally, it could create a system for reporting abuses of IRB processes that suppress ethical research, delay it beyond reasonable timeframes, or violate academic freedom.

    The National Institutes of Health can incentivize study into campus climates for academic freedom

    As noted before, FIRE’s College Free Speech Rankings focus on students. Due to the logistical and resource difficulties of surveying faculty, our 2024 Faculty Report, which looked into many of the same issues, took much longer and had to be limited in scope to 55 campuses, compared to the 250+ in the CFSR. This is all to say that there is a strong need for research to understand faculty views and experiences of academic freedom. After all, we cannot solve a problem until we understand it. To that end, NIH should incentivize further study of faculty’s academic freedom.

    It is important to note that these studies should be informational and not used in a punitive manner, or to decide on NIH funding eligibility. This is because tying something as important as NIH funding to the results of the survey would create so significant an incentive to influence the results that the data would be impossible to trust. Even putting aside malicious interference by administrators and other faculty members, few faculty would be likely to give honest answers that imperiled institutional funding, knowing the resulting loss in funding might threaten their own jobs.

    Efforts to conduct these kinds of surveys in Wisconsin and Florida proved politically controversial and, at least initially, led to boycotts, which threatened to compromise the quality and reliability of the data. As such, it’s critical that any such survey be carried out in a way that maximizes trust, under the following principles:

    • Ideally, these surveys should be administered by an unbiased third party, not by the schools themselves or by NIH. This third party should include respected researchers from across the political spectrum and should have no partisan slant.
    • The survey sample must be randomized and not opt-in.
    • The questionnaire must be made public beforehand, and every effort should be made for the questions to be worded without any overt partisanship or ideology that would reduce trust.

    Conclusion: With great power…

    FIRE has for the last two decades been America’s premier defender of free speech and academic freedom on campus. Following Frederick Douglass’s wise dictum, “I would unite with anybody to do right and with nobody to do wrong,” we’ve worked with Democrats, Republicans, and everyone in between (and beyond) to advance free speech and open inquiry, and we’ve criticized them in turn whenever they’ve threatened these values.

    With that sense of both opportunity and caution, we would be heartened if NIH used its considerable power wisely in an effort to improve scientific integrity and academic freedom. But if wielded recklessly, that same power threatens to do immense damage to science.

    We stand ready to advise if called upon, but integrity demands that we correct the record if we believe our data is being used for a purpose to which it isn’t suited.
