Tag: research

  • Layoffs Gut Federal Education Research Agency

    Layoffs Gut Federal Education Research Agency

    Five years after the COVID-19 pandemic first forced schools and colleges into remote learning, researchers, policymakers and higher education leaders may no longer have access to the federal data they need to gather a complete picture of how those disruptions have affected a generation of students long term—or hold states and colleges accountable for the interventions they deployed to address the fallout.

    That’s because the National Center for Education Statistics, the Education Department’s data-collection arm that’s administered surveys and studies about the state of K-12, higher education and the workforce since 1867, is suddenly a shell of itself.

    The NCES is now down to five employees after the department fired nearly half its staff earlier this week. The broader Institute of Education Sciences, which houses NCES, also lost more than 100 employees as part of President Donald Trump’s campaign to eliminate alleged “waste, fraud and abuse” in federal funding.

    The mass firings come about a month after federal education data collection took another big blow: In February, the department cut nearly $900 million in contracts at IES, which ended what some experts say was critical research into schools and fueled layoffs at some of the research firms that held those contracts, including MDRC, Mathematica, NORC and Westat.

    Although Trump and his allies have long blamed COVID-related learning loss on President Joe Biden’s approval of prolonged remote learning, numerous experts told Inside Higher Ed that without some of the federal data the NCES was collecting, it will be hard to draw definitive conclusions about those or any other claims about national education trends.

    ‘Backbone of Accountability’

    “The backbone of accountability for our school systems begins with simply collecting data on how well they’re doing. The fact that our capacity to do that is being undermined is really indefensible,” said Thomas Dee, a professor at Stanford University’s Graduate School of Education and research associate at the National Bureau of Economic Research. “One could conceive this as part of an agenda to undermine the very idea of truth and evidence in public education.”

    But the Education Department says its decision to nearly eliminate the NCES and so many IES contracts is rooted in what it claims are the agency’s own failures.

    “Despite spending hundreds of millions in taxpayer funds annually, IES has failed to effectively fulfill its mandate to identify best practices and new approaches that improve educational outcomes and close achievement gaps for students,” Madi Biedermann, deputy assistant secretary for communications at the department, said in an email to Inside Higher Ed Thursday.

    Biedermann said the department plans to restructure IES in the coming months in order to provide “states with more useful data to improve student outcomes while maintaining rigorous scientific integrity and cost effectiveness.”

    But many education researchers disagree with that characterization of IES and instead view it as an unmatched resource for informing higher education policy decisions.

    “Some of these surveys allow us to know if people are being successful in college. It tells us where those students are enrolled in college and where they came from. For example, COVID impacted everyone, but it had a disproportionate impact on specific regions in the U.S. and specific social and socioeconomic groups in the U.S.,” said Taylor Odle, an assistant professor of educational policy studies at the University of Wisconsin at Madison.

    “Post-COVID, states and regions have implemented a lot of interventions to help mitigate learning loss and accelerate learning for specific individuals. We’ll be able to know by comparing region to region or school to school whether or not those gaps increased or reduced in certain areas.”

    Without uniform federal data to ground comparisons of pandemic-related and other student success interventions, it will be harder to hold education policymakers accountable, Odle and others told Inside Higher Ed this week. However, Odle believes that may be the point of the Trump administration’s assault on the Education Department’s research arm.

    “It’s in essence a tacit statement that what they are doing may potentially be harmful to students and schools, and they don’t want the American public or researchers to be able to clearly show that,” he said. “By eliminating these surveys and data collection, and reducing staff at the Department of Education who collect, synthesize and report the data, every decision-maker—regardless of where they fall on the political spectrum—is going to be limited in the data and information they have access to.”

    Scope of Data Loss Unclear

    It’s not clear how many of the department’s dozens of data-collection programs—including those related to early childhood education, college student outcomes and workforce readiness—will be downsized or ended as a result of the cuts. The department did not respond to Inside Higher Ed’s request for clarity on exactly which contracts were canceled. (It did confirm, however, that it still maintains contracts for the National Assessment of Educational Progress, the College Scorecard and the Integrated Postsecondary Education Data System.)

    A now-fired longtime NCES employee who asked to remain anonymous out of fear of retaliation said they and others who worked on those data-collection programs for years are still in the dark on the future of many of the other studies IES administers.

    “We’ve been out of the loop on all these conversations about the state of these studies. That’s been taking place at a higher level—or outside of NCES entirely,” said the terminated employee. “What these federal sources do is synthesize all the different other data sources that already exist to provide a more comprehensive national picture in a way that saves researchers a lot of the trouble of having to combine these different sources themselves and match them up. It provides consistent methodologies.”

    Even if some of the data-collection programs continue, there will be hardly any NCES staff to help researchers and policymakers accurately navigate new or existing data, which was the primary function of most workers there.

    “We are a nonpartisan agency, so we’ve always shied away from interpreting or making value judgments about what the data say,” the fired NCES worker said. “We are basically a help desk and support resource for people who are trying to use this data in their own studies and their own projects.”

    ‘Jeopardizing’ Strong Workforce

    One widely used data set with an uncertain future is the Beginning Postsecondary Students Longitudinal Study—a detailed survey that has followed cohorts of first-time college students over a period of six to eight years since 1989. The latest iteration of the BPS survey has been underway since 2019, and it included questions meant to illuminate the long-term effects of pandemic-related learning loss. But like many other NCES studies, data collection for BPS has been on pause since last month, when the department pulled the survey’s contract with the Research Triangle Institute.

    In a blog post the Institute for Higher Education Policy published Wednesday, the organization noted that BPS is intertwined with the National Postsecondary Student Aid Study, a comprehensive nationwide study designed to determine how students and their families pay for college and to capture the demographic characteristics of those enrolled.

    The two studies “are the only federal data sources that provide comprehensive insights into how students manage college affordability, stay enrolled and engaged with campus resources, persist to completion, and transition to the workforce,” Taylor Myers, assistant director of research and policy, wrote. “Losing these critical data hinders policy improvements and limits our understanding of the realities students face.”

    That post came one day after IHEP sent members of Congress a letter signed by a coalition of 87 higher education organizations and individual researchers urging lawmakers to demand transparency about why the department slashed funding for postsecondary data collection.

    “These actions weaken our capacity to assess and improve educational and economic outcomes for students—directly jeopardizing our ability to build a globally competitive workforce,” the letter said. “Without these insights, policymakers will soon be forced to make decisions in the dark, unable to steward taxpayer dollars efficiently.”

    Picking Up the Slack

    But not every education researcher believes federal data is as vital to shaping education policy and evaluating interventions as IHEP’s letter claims.

    “It’s unclear that researchers analyzing those data have done anything to alter outcomes for students,” said Jay Greene, a senior research fellow in the Center for Education Policy at the right-wing Heritage Foundation. “Me being able to publish articles is not the same thing as students benefiting. We have this assumption that research should prove things, but in the world of education, we have very little evidence of that.”

    Greene, who previously worked as a professor of education policy at the University of Arkansas, said he never used federal data in his assessments of educational interventions and instead used state-level data or collected his own. “Because states and localities actually run schools, they’re in a position to do things that might make it better or worse,” he said. “Federal data is just sampling … It’s not particularly useful for causal research designs to develop practices and interventions that improve education outcomes.”

    Other researchers have a more measured view of what needs to change in federal education data collection.

    Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, has previously called for reforms at IES, arguing that some of the studies are too expensive without enough focus on educators’ evolving priorities, which as of late include literacy, mathematics and how to handle the rise of artificial intelligence.

    But taking a sledgehammer to NCES isn’t the reform she had in mind. Moreover, she said blaming federal education data collections and researchers for poor education outcomes is “completely ridiculous.”

    “There’s a breakdown between knowledge and practice in the education world,” Lake said. “We don’t adopt things that work at the scale we need to, but that’s not on researchers or the quality of research that’s being produced.”

    But just because federal education data collection may not focus on specific interventions, “that doesn’t mean those data sets aren’t useful,” said Christina Whitfield, senior vice president and chief of staff for the State Higher Education Executive Officers Association.

    “A lot of states have really robust data systems, and in a lot of cases they provide more detail than the federal data systems do,” she said. “However, one of the things the federal data provides is a shared language and common set of definitions … If we move toward every state defining these key elements individually or separately, we lose a lot of comparability.”

    If many of the federal data collection projects aren’t revived, Whitfield said, other entities, including nonprofits and corporations, will likely step in to fill the void. But that transition likely won’t be seamless or without consequence.

    “At least in the short term, there’s going to be a real issue of how to vet those different solutions and determine which is the highest-quality, efficient and most useful response to the information vacuum we’re going to experience,” Whitfield said. And even if there’s just a pause on some of the data collections and federal contracts are able to resume eventually, “there’s going to be a gap and a real loss in the continuity of that data and how well you can look back longitudinally.”

    Source link

  • Three Ways Faculty Are Using AI to Lighten Their Professional Load

    Three Ways Faculty Are Using AI to Lighten Their Professional Load

    Our most recent research into the working lives of faculty gave us some interesting takeaways about higher education’s relationship with AI. While every faculty member’s thoughts about AI differ and no two experiences are the same, the general trend we’ve seen is that faculty have moved from fear to acceptance. Many faculty members were initially concerned about AI’s arrival on campus, a concern amplified by a perceived rise in AI-enabled cheating and plagiarism among students. Despite that, most have come to accept that AI is here to stay. Some have developed working strategies to ensure that they and their students know the boundaries of AI usage in the classroom.

    Early-adopting educators aren’t just navigating around AI. They have embraced and integrated it into their working lives. Some have learned to use AI tools to save time and make their working lives easier. In fact, over half of instructors reported that they wanted to use AI for administrative tasks, and 10% were already doing so. (Find the highlights here.) As more faculty see the potential in AI, that number has likely risen. So, in what ways are faculty already using AI to lighten the load of professional life? Here are three use cases we learned about from education professionals:

    1. AI to jumpstart ideas and conversations

    “Give me a list of 10 German pop songs that contain irregular verbs.”

    “Summarize the five most contentious legal battles happening in U.S. media law today.”

    “Create a set of flashcards that review the diagnostic procedure and standard treatment protocol for asthma.”

    The possibilities (and the prompts!) are endless. AI is well placed to assist with idea generation, conversation starters and lesson materials for educators on any topic. It’s worth noting that AI tends to prove most helpful as a starting point for teaching and learning fodder rather than as a source of fully baked responses and ideas. Those who expect the latter may be disappointed, as the quality of AI results can vary widely depending on the topic. Educators can and should, of course, remain the final reviewers of the accuracy of anything shared in class.
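
    For instructors who want to go a step beyond the chat window, the same kind of prompt can also be scripted. The sketch below is a minimal illustration using the OpenAI Python SDK; the model name and the prompt wording are assumptions for the example, not recommendations from the research:

    ```python
    # Minimal sketch: generate discussion starters with an LLM.
    # Assumes the `openai` package is installed and the OPENAI_API_KEY
    # environment variable is set; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    def idea_starters(topic: str, count: int = 10) -> str:
        """Ask the model for a numbered list of discussion starters."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": f"Give me a numbered list of {count} discussion "
                           f"starters about {topic} for an undergraduate class.",
            }],
        )
        return response.choices[0].message.content

    print(idea_starters("U.S. media law"))
    ```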

    2. AI to differentiate instruction

    Faculty have told us that they spend a hefty proportion (around 28%) of their time on course preparation. Differentiating instruction for the various learning styles and levels in any given class constitutes a big part of that prep work. A particular lesson may land well with a struggling student but feel monotonous to an advanced student who has already mastered the material. To that end, some faculty are using AI to readily differentiate lesson plans. For example, an English literature instructor might enter a prompt like, “I need two versions of a lesson plan about ‘The Canterbury Tales’: one for fluent English speakers and one for emergent English speakers.” This simple step can save faculty hours of manual lesson plan differentiation.

    An instructor in Kansas shared with Cengage their plans to let AI help in this area, “I plan to use AI to evaluate students’ knowledge levels and learning abilities and create personalized training content. For example, AI will assess all the students at the beginning of the semester and divide them into ‘math-strong’ and ‘math-weak’ groups based on their mathematical aptitude, and then automatically assign math-related materials, readings and lecture notes to help the ‘math-weak’ students.”

    When used in this way, AI can be a powerful tool that gives students of all backgrounds an equal edge in understanding and retaining difficult information.
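
    As a rough sketch of how that differentiation prompt might be automated, the example below requests one draft per audience. It again assumes the OpenAI Python SDK; the model name, prompt and audiences are illustrative only:

    ```python
    # Minimal sketch: request differentiated drafts of one lesson plan.
    # Assumes the `openai` package and an OPENAI_API_KEY; the model
    # name, prompt, and audiences are illustrative.
    from openai import OpenAI

    client = OpenAI()

    def differentiated_plans(lesson: str, audiences: list[str]) -> dict[str, str]:
        """Return one lesson-plan draft per audience, keyed by audience."""
        plans = {}
        for audience in audiences:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{
                    "role": "user",
                    "content": f"Draft a lesson plan about {lesson} for {audience}.",
                }],
            )
            plans[audience] = response.choices[0].message.content
        return plans

    drafts = differentiated_plans(
        "'The Canterbury Tales'",
        ["fluent English speakers", "emergent English speakers"],
    )
    ```

    Whatever the model returns is a draft; as with any AI-generated material, it still needs the instructor’s review before it reaches students.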

    3. AI to provide feedback

    Reviewing the work of dozens or hundreds of students and finding common threads and weak spots is tedious work, and seems an obvious area for a little algorithmic assistance.

    Again, faculty should remain in control of the feedback they provide to students. After all, students fully expect faculty members to review and critique their work authentically. However, using AI to more deeply understand areas where a student’s logic may be consistently flawed, or types of work on which they repeatedly make mistakes, can be a game-changer, both for educators and students.

    An instructor in Iowa told Cengage, “I don’t want to automate my feedback completely, but having AI suggest areas of exigence in students’ work, or supply me with feedback options based on my own past feedback, could be useful.”
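
    A minimal sketch of that workflow: hand the model a student’s drafts plus the instructor’s past comments, and ask only for recurring weak spots rather than finished feedback. As above, the SDK usage, model name and prompt are assumptions for illustration:

    ```python
    # Minimal sketch: surface recurring weak spots across a student's
    # drafts, leaving the actual feedback to the instructor.
    # Assumes the `openai` package and an OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()

    def suggest_focus_areas(drafts: list[str], past_feedback: list[str]) -> str:
        """Return suggested focus areas, not student-facing feedback."""
        prompt = (
            "Here are several drafts from one student:\n"
            + "\n---\n".join(drafts)
            + "\n\nHere is feedback I have given this student before:\n"
            + "\n".join(past_feedback)
            + "\n\nList recurring weaknesses I should address next. "
              "Do not write the feedback itself; I will write that myself."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    ```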

    Some faculty may even choose to have students ask AI for feedback themselves as part of a critical thinking or review exercise. Ethan and Lilach Mollick of the Wharton School of the University of Pennsylvania share in a Harvard Business Publishing Education article, “Though AI-generated feedback cannot replicate the grounded knowledge that teachers have about their students, it can be given quickly and at scale and it can help students consider their work from an outside perspective. Students can then evaluate the feedback, decide what they want to incorporate, and continue to iterate on their drafts.”

    AI is not a “fix-all” for the administrative side of higher education. However, many faculty members are gaining an advantage and getting some time back by using it as something of a virtual assistant.

    Are you using AI in the classroom?

    In a future piece, we’ll share three more ways in which faculty are using AI to make their working lives easier. In the meantime, you can fully explore our research here.

    Source link

  • DOGE Education Cuts Hit Students with Disabilities, Literacy Research – The 74

    DOGE Education Cuts Hit Students with Disabilities, Literacy Research – The 74


    When teens and young adults with disabilities in California’s Poway Unified School District heard about a new opportunity to get extra help planning for life after high school, nearly every eligible student signed up.

    The program, known as Charting My Path for Future Success, aimed to fill a major gap in education research about what kinds of support give students nearing graduation the best shot at living independently, finding work, or continuing their studies.

    Students with disabilities finish college at much lower rates than their non-disabled peers, and often struggle to tap into state employment programs for adults with disabilities, said Stacey McCrath-Smith, a director of special education at Poway Unified, which had 135 students participating in the program. So the extra help, which included learning how to track goals on a tool designed for high schoolers with disabilities, was much needed.

    Charting My Path launched earlier this school year in Poway Unified and 12 other school districts. The salaries of 61 school staff nationwide, and the training they received to work with nearly 1,100 high schoolers with disabilities for a year and a half, were paid for by the U.S. Department of Education.

    Jessie Damroth’s 17-year-old son Logan, who has autism, attention deficit hyperactivity disorder, and other medical needs, had attended classes and met with his mentor through the program at Newton Public Schools in Massachusetts for a month. For the first time, he was talking excitedly about career options in science and what he might study at college.

    “He was starting to talk about what his path would look like,” Damroth said. “It was exciting to hear him get really excited about these opportunities. … He needed that extra support to really reinforce that he could do this.”

    Then the Trump administration pulled the plug.

    Charting My Path was among more than 200 Education Department contracts and grants terminated over the last two weeks by the Trump administration’s U.S. DOGE Service. DOGE has slashed spending it deemed to be wasteful, fraudulent, or in service of diversity, equity, inclusion, and accessibility goals that President Donald Trump has sought to ban. But in several instances, the decision to cancel contracts affected more than researchers analyzing data in their offices — it affected students.

    Many projects, like Charting My Path, involved training teachers in new methods, testing learning materials in actual classrooms, and helping school systems use data more effectively.

    “Students were going to learn really how to set goals and track progress themselves, rather than having it be done for them,” McCrath-Smith said. “That is the skill that they will need post-high school when there’s not a teacher around.”

    All of that work was abruptly halted — in some cases with nearly finished results that now cannot be distributed.

    Every administration is entitled to set its own priorities, and contracts can be canceled or changed, said Steven Fleischman, an education consultant who for many years ran one of the regional research programs that was terminated. He compared it to a homeowner deciding they no longer want a deck as part of their remodel.

    But the current approach reminds him more of construction projects started and then abandoned during the Great Recession, in some cases leaving giant holes that sat for years.

    “You can walk around and say, ‘Oh, that was a building we never finished because the funds got cut off,’” he said.

    DOGE drives cuts to education research contracts, grants

    The Education Department has been a prime target of DOGE, the chaotic cost-cutting initiative led by billionaire Elon Musk, now a senior adviser to Trump.

    So far, DOGE has halted 89 education projects, many of which were under the purview of the Institute of Education Sciences, the ostensibly independent research arm of the Education Department. The administration said those cuts, which included multi-year contracts, totaled $881 million. In recent years, the federal government has spent just over $800 million on the entire IES budget.

    DOGE has also shut down 10 regional labs that conduct research for states and local schools and shuttered four equity assistance centers that help with teacher training. The Trump administration also cut off funding for nearly 100 teacher training grants and 18 grants for centers that often work to improve instruction for struggling students.

    The total savings is up for debate. The Trump administration said the terminated Education Department contracts and grants were worth $2 billion. But some were near completion with most of the money already spent.

    An NPR analysis of all of DOGE’s reported savings found that it likely was around $2 billion for the entire federal government — though the Education Department is a top contributor.

    On Friday, a federal judge issued an injunction that temporarily blocks the Trump administration from canceling additional contracts and grants that might violate the anti-DEIA executive order. It’s not clear whether the injunction would prevent more contracts from being canceled “for convenience.”

    Mark Schneider, the former director of IES, said the sweeping cuts represent an opportunity to overhaul a bloated education research establishment. But even many conservative critics have expressed alarm at how wide-ranging and indiscriminate the cuts have been. Congress mandated many of the terminated programs, which also indirectly support state and privately funded research.

    The canceled projects include contracts that support maintenance of the Common Core of Data, a major database used by policymakers, researchers, and journalists, as well as work that supports updates to the What Works Clearinghouse, a huge repository of evidence-based practices available to educators for free.

    And after promising not to make any cuts to the National Assessment of Educational Progress, known as the nation’s report card, the department canceled an upcoming test for 17-year-olds that helps researchers understand long-term trends. On Monday, Peggy Carr, the head of the National Center for Education Statistics, which oversees NAEP, was placed on leave.

    The Education Department did not respond to questions about who decided which programs to cut and what criteria were used. Nor did the department respond to a specific question about why Charting My Path was eliminated. DOGE records estimate the administration saved $22 million by terminating the program early, less than half the $54 million in the original contract.

    The decision has caused mid-year disruptions and uncertainty.

    In Utah, the Canyons School District is trying to reassign the school counselor and three teachers whose salaries were covered by the Charting My Path contract.

    The district, which had 88 high schoolers participating in the program, is hoping to keep using the curriculum to boost its usual services, said Kirsten Stewart, a district spokesperson.

    Officials in Poway Unified, too, hope schools can use the curriculum and tools to keep up a version of the program. But that will take time and work because the program’s four teachers had to be reassigned to other jobs.

    “They dedicated that time and got really important training,” McCrath-Smith said. “We don’t want to see that squandered.”

    For Damroth, the loss of parent support meetings through Charting My Path was especially devastating. Logan has a rare genetic mutation that causes him to fall asleep easily during the day, so Damroth wanted help navigating which colleges might be able to offer extra scheduling support.

    “I have a million questions about this. Instead of just hearing ‘I don’t know’ I was really looking forward to working with Joe and the program,” she said, referring to Logan’s former mentor. “It’s just heartbreaking. I feel like this wasn’t well thought out. … My child wants to do things in life, but he needs to be given the tools to achieve those goals and those dreams that he has.”

    DOGE cuts labs that helped ‘Mississippi Miracle’ in reading

    The dramatic improvement in reading proficiency that Carey Wright oversaw as state superintendent in one of the nation’s poorest states became known as the “Mississippi Miracle.”

    Regional Educational Laboratory Southeast, based out of the Florida Center for Reading Research at Florida State University, was a key partner in that work, Wright said.

    When Wright wondered if state-funded instructional coaches were really making a difference, REL Southeast dispatched a team to observe, videotape, and analyze the instruction delivered by hundreds of elementary teachers across the state. Researchers reported that teachers’ instructional practices aligned well with the science of reading and that teachers themselves said they felt far more knowledgeable about teaching reading.

    “That solidified for me that the money that we were putting into professional learning was working,” Wright said.

    The study, she noted, arose from a casual conversation with researchers at REL Southeast: “That’s the kind of give and take that the RELs had with the states.”

    Wright, now Maryland state superintendent, said she was looking forward to partnering with REL Mid-Atlantic on a math initiative and on an overhaul of the school accountability system.

    But this month, termination letters went out to the universities and research organizations that run the 10 Regional Educational Laboratories, which were established by Congress in 1965 to serve states and school districts. The letters said the contracts were being terminated “for convenience.”

    The press release that went to news organizations cited “wasteful and ideologically driven spending” and named a single project in Ohio that involved equity audits as a part of an effort to reduce suspensions. Most of the REL projects on the IES website involve reading, math, career connections, and teacher retention.

    Jannelle Kubinec, CEO of WestEd, an education research organization that held the contracts for REL West and REL Northwest, said she never received a complaint or a request to review the contracts before receiving termination letters. Her team had to abruptly cancel meetings to go over results with school districts. In other cases, reports are nearly finished but cannot be distributed because they haven’t gone through the review process.

    REL West was also working with the Utah State Board of Education to figure out if the legislature’s investment in programs to keep early career teachers from leaving the classroom was making a difference, among several other projects.

    “This is good work and we are trying to think through our options,” she said. “But the cancellation does limit our ability to finish the work.”

    Given enough time, Utah should be able to find a staffer to analyze the data collected by REL West, said Sharon Turner, a spokesperson for the Utah State Board of Education. But the findings are much less likely to be shared with other states.

    The most recent contracts started in 2022 and were set to run through 2027.

    The Trump administration said it planned to enter into new contracts for the RELs to satisfy “statutory requirements” and better serve schools and states, though it’s unclear what that will entail.

    “The states drive the research agendas of the RELs,” said Sara Schapiro, the executive director of the Alliance for Learning Innovation, a coalition that advocates for more effective education research. If the federal government dictates what RELs can do, “it runs counter to the whole argument that they want the states to be leading the way on education.”

    Some terminated federal education research was nearly complete

    Some research efforts were nearly complete when they got shut down, raising questions about how efficient these cuts were.

    The American Institutes for Research, for example, was almost done evaluating the impact of the Comprehensive Literacy State Development program, which aims to improve literacy instruction through investments like new curriculum and teacher training.

    AIR’s research spanned 114 elementary schools across 11 states and involved more than 23,000 third, fourth, and fifth graders and their nearly 900 reading teachers.

    Researchers had collected and analyzed a massive trove of data from the randomized trial and presented their findings to federal education officials just three days before the study was terminated.

    “It was a very exciting meeting,” said Mike Garet, a vice president and institute fellow at AIR who oversaw the study. “People were very enthusiastic about the report.”

    Another AIR study that was nearing completion looked at the use of multi-tiered systems of support for reading among first and second graders. It’s a strategy that helps schools identify and provide support to struggling readers, with the most intensive help going to kids with the highest needs. It’s widely used by schools, but its effectiveness hasn’t been tested on a larger scale.

    The research took place in 106 schools and involved over 1,200 educators and 5,700 children who started first grade in 2021 and 2022. Much of the funding for the study went toward paying for teacher training and coaching to roll out the program over three years. All of the data had been collected and was nearly analyzed when DOGE made its cuts.

    Garet doesn’t think he and his team should simply walk away from unfinished work.

    “If we can’t report results, that would violate our covenant with the districts, the teachers, the parents, and the students who devoted a lot of time in the hope of generating knowledge about what works,” Garet said. “Now that we have the data and have the results, I think we’re duty-bound to report them.”

    This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters at ckbe.at/newsletters.


    Source link

  • AI Support for Teachers

    AI Support for Teachers

    Collaborative Classroom, a leading nonprofit publisher of K–12 instructional materials, announces the publication of SIPPS, a systematic decoding program now in its fifth edition. This research-based program accelerates mastery of vital foundational reading skills for both new and striving readers.

    Twenty-Five Years of Transforming Literacy Outcomes

    “As educators, we know the ability to read proficiently is one of the strongest predictors of academic and life success,” said Kelly Stuart, President and CEO of Collaborative Classroom. “Third-party studies have proven the power of SIPPS. This program has a 25-year track record of transforming literacy outcomes for students of all ages, whether they are kindergarteners learning to read or high schoolers struggling with persistent gaps in their foundational skills.

    “By accelerating students’ mastery of foundational skills and empowering teachers with the tools and learning to deliver effective, evidence-aligned instruction, SIPPS makes a lasting impact.”

    What Makes SIPPS Effective?

    Aligned with the science of reading, SIPPS provides explicit, systematic instruction in phonological awareness, spelling-sound correspondences, and high-frequency words. 

    Through differentiated small-group instruction tailored to students’ specific needs, SIPPS ensures every student receives the necessary targeted support—making the most of every instructional minute—to achieve grade-level reading success.

    “SIPPS is uniquely effective because it accelerates foundational skills through its mastery-based and small-group targeted instructional design,” said Linda Diamond, author of the Teaching Reading Sourcebook. “Grounded in the research on explicit instruction, SIPPS provides ample practice, active engagement, and frequent response opportunities, all validated as essential for initial learning and retention of learning.”

    Personalized, AI-Powered Teacher Support

    Educators using SIPPS Fifth Edition have access to a brand-new feature: immediate, personalized responses to their implementation questions with CC AI Assistant, a generative AI-powered chatbot.

    Exclusively trained on Collaborative Classroom’s intellectual content and proprietary program data, CC AI Assistant provides accurate, reliable information for educators.

    Other Key Features of SIPPS, Fifth Edition

    • Tailored Placement and Progress Assessments: A quick, 3–8 minute placement assessment ensures each student starts exactly at their point of instructional need. Ongoing assessments help monitor progress, adjust pacing, and support grouping decisions.
    • Differentiated Small-Group Instruction: SIPPS maximizes instructional time by focusing on small groups of students with similar needs, ensuring targeted, effective teaching.
    • Supportive of Multilingual Learners: Best practices in multilingual learner (ML) instruction and English language development strategies are integrated into the design of SIPPS.
    • Engaging and Effective for Older Readers: SIPPS Plus and SIPPS Challenge Level are specifically designed for students in grades 4–12, offering age-appropriate texts and instruction to close lingering foundational skill gaps.
    • Multimodal Supports: Integrated visual, auditory, and kinesthetic-tactile strategies help all learners, including multilingual students.
    • Flexible, Adaptable, and Easy to Teach: Highly supportive for teachers, tutors, and other adults working in classrooms and expanded learning settings, SIPPS is easy to implement well. A wraparound system of professional learning support ensures success for every implementer.

    Accelerating Reading Success for Students of All Ages

    In small-group settings, students actively engage in routines that reinforce phonics and decoding strategies, practice with aligned texts, and receive immediate feedback—all of which contribute to measurable gains.

    “With SIPPS, students get the tools needed to read, write, and understand text that’s tailored to their specific abilities,” said Desiree Torres, ENL teacher and 6th Grade Team Lead at Dr. Richard Izquierdo Health and Science Charter School in New York. “The boost to their self-esteem when we conference about their exam results is priceless. Each and every student improves with the SIPPS program.” 

    Source link

  • The National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to allocate research funding — here’s what they should do instead

    The National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to allocate research funding — here’s what they should do instead

    In December, The Wall Street Journal reported:

    [President-elect Donald Trump’s nominee to lead the National Institutes of Health] Dr. Jay Bhattacharya […] is considering a plan to link a university’s likelihood of receiving research grants to some ranking or measure of academic freedom on campus, people familiar with his thinking said. […] He isn’t yet sure how to measure academic freedom, but he has looked at how a nonprofit called Foundation for Individual Rights in Education scores universities in its freedom-of-speech rankings, a person familiar with his thinking said.

    We believe in and stand by the importance of the College Free Speech Rankings. More attention to the deleterious effect that restrictions on free speech and academic freedom have on research at our universities is desperately needed, so hearing that the rankings are being considered as a guidepost for NIH grantmaking is heartening. Dr. Bhattacharya’s own right to academic freedom was challenged by his Stanford University colleagues, so his concern about the effect of academic freedom violations on NIH grants is understandable.

    However, our College Free Speech Rankings are not the right tool for this particular job. They were designed with a specific purpose in mind — to help students and parents find campuses where students are both free and comfortable expressing themselves. They were not intended to evaluate the climate for conducting academic research on individual campuses and are a bad fit for that purpose. 

    While the rankings assess speech codes that apply to students, they do not currently assess policies pertaining to the academic freedom rights and research conduct of professors, who are the primary recipients of NIH grants. Nor do the rankings assess faculty sentiment about their campus climates. It would be a mistake to use the rankings beyond their intended purpose — and, if the rankings were used to deny funding for important research that would in fact be properly conducted, that mistake would be extremely costly.

    FIRE instead proposes three ways that would be more appropriate for NIH to use its considerable power to improve academic freedom on campus and ensure research is conducted in an environment most conducive to finding the most accurate results.

    1. Use grant agreements to safeguard academic freedom as a strong contractual right. 
    2. Encourage open data practices to promote research integrity.
    3. Incentivize universities to study their campus climates for academic freedom.

    Why should the National Institutes of Health care about academic freedom at all?

    The pursuit of truth demands that researchers be able to follow the science wherever it leads, without fear, favor, or external interference. NIH therefore has a strong interest in ensuring that academic freedom rights are inviolable.

    As a steward of considerable taxpayer money, NIH has an obligation to ensure it spends its funds on high-quality research free from censorship or other interference from politicians or college and university administrators.

    Why the National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to decide where to send funds

    FIRE’s College Free Speech Rankings (CFSR) were never intended for use in determining research spending. As such, they have a number of design features that make them ill-suited to that purpose, either in their totality or through their constituent parts.

    Firstly, like the U.S. News & World Report college rankings, the CFSR was created primarily to provide information to prospective undergraduate students and their parents. As such, it heavily emphasizes students’ perceptions of the campus climate over the perceptions of faculty or researchers. In line with that student focus, our attitude and climate components are based on a survey of undergraduates. Additionally, the speech policies that we evaluate and incorporate into the rankings are those that affect students. We do not evaluate policies that affect faculty and researchers, which are often different and would be of greater relevance to deciding research funding. While it makes sense that there may be some correlation, we have no way of knowing whether, or to what degree, that might be true.

    Secondly, for the component that most directly implicates the academic freedom of faculty, we penalize schools for attempts to sanction scholars for their protected speech, as tracked in our Scholars Under Fire database. While our Scholars Under Fire database provides excellent datapoints for understanding the climate at a university, it does not function as a systematic proxy for assessing academic freedom on a given campus as a whole. As one example, a university with relatively strong protection for academic freedom may have vocal professors with unpopular viewpoints that draw condemnation and calls for sanction that could hurt its ranking, while a climate where professors feel too afraid to voice controversial opinions could draw relatively few calls for sanction and thus enjoy a higher ranking. This shortcoming is mitigated when considered alongside the rest of our rankings components, but as discussed above, those other components mostly concern students rather than faculty.

    Thirdly, using CFSR to determine NIH funding could — counterintuitively — be abused by vigilante censors. Because we penalize schools for attempted and successful shoutdowns, the possibility of a loss of NIH funding could incentivize activists who want leverage over a university to disrupt as many events as possible in order to negatively influence its ranking, and thus its funding prospects. Even the threat of disruption could thus give censors undue power over a university administration that fears loss of funding.

    Finally, due to resource limitations, we do not rank all research universities. It would not be fair to deny funding to an unranked university or to fund an unranked university with a poor speech climate over a low-ranked university.

    Legal boundaries for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    While NIH has considerable latitude to determine how it spends taxpayer money, as an arm of the government, the First Amendment places restrictions on how NIH may use that power. Notably, any solution must not penalize institutions for protected speech or scholarship by students or faculty unrelated to NIH-granted projects. NIH could not, for example, require that a university quash protected protests as a criterion for eligibility, or deny a university eligibility because of controversial research undertaken by a scholar who does not work on NIH-funded research.

    While NIH can (and effectively must) consider the content of applications in determining what to fund, eligibility must be open to all regardless of viewpoint. Even were this not the case as a constitutional matter (and it is, very much so), it is important as a prudential matter. People would be understandably skeptical of, if not downright disbelieve, scientific results obtained through a grant process with an obvious ideological filter. Indeed, that is the root of much of the current skepticism over federally funded science, and the exact situation academic freedom is intended to avoid.

    Additionally, NIH cannot impose a political litmus test on an individual or an institution, or compel an institution or individual to take a position on political or scientific issues as a condition of grant funding.

    In other words, any solution to improve academic freedom:

    • Must be viewpoint neutral;
    • Must not impose an ideological or political litmus test; and
    • Must not penalize an institution for protected speech or scholarship by its scholars or students.

    Guidelines for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    NIH should carefully tailor any solution to directly enhance academic freedom and to further NIH’s goal “to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science.” Going beyond that purpose to touch on issues and policies that don’t directly affect the conduct of NIH grant-funded research may leave such a policy vulnerable to legal challenge.

    Any solution should, similarly, avoid using vague or politicized terms such as “wokeness” or “diversity, equity, and inclusion.” Doing so creates needless skepticism of the process and — as FIRE knows all too well — introduces uncertainty as professors and institutions parse what is and isn’t allowed.

    Enforcement mechanisms should be a function of contractual promises of academic freedom, rather than left to apathetic accreditors or the unbounded whims of bureaucrats on campus or officials in government, for several reasons. 

    Regarding accreditors, FIRE over the years has reported many violations of academic freedom to accreditors who require institutions to uphold academic freedom as a precondition for their accreditation. Up to now, the accreditors FIRE has contacted have shown themselves wholly uninterested in enforcing their academic freedom requirements.

    When it comes to administrators, FIRE has documented countless examples of campus administrators violating academic freedom, either due to politics, or because they put the rights of the professor second to the perceived interests of their institution.

    As for government actors, we have seen priorities and politics shift dramatically from one administration to the next. It would be best for everyone involved if NIH funding did not ping-pong between ideological poles as a function of each presidential election, as the Title IX regulations now do. Dramatic changes to how NIH conceives of academic freedom with every new political administration would only create uncertainty that is sure to further chill speech and research.

    While the courts have been decidedly imperfect protectors of academic freedom, they have a better record than accreditors, administrators, or partisan government officials in parsing protected conduct from unprotected conduct. And that will likely be even more true with a strong, unambiguous contractual promise of academic freedom. Speaking of which…

    The National Institutes of Health should condition grants of research funds on recipient institutions adopting a strong contractual promise of academic freedom for their faculty and researchers

    The most impactful change NIH could enact would be to require as a condition of eligibility that institutions adopt strong academic freedom commitments, such as the 1940 Statement of Principles on Academic Freedom and Tenure or similar, and make those commitments explicitly enforceable as a contractual right for their faculty members and researchers.

    The status quo for academic freedom is one where nearly every institution of higher education makes promises of academic freedom and freedom of expression to its students and faculty. Yet only at public universities, where the First Amendment applies, are these promises construed with any consistency as an enforceable legal right. 

    Private universities, when sued for violating their promises of free speech and academic freedom, frequently argue that those promises are purely aspirational and that they are not bound by them (often at the same time that they argue faculty and students are bound by the policies). 

    Too often, courts accept this and universities prevail despite the obvious hypocrisy. NIH could stop private universities’ attempts to have their cake and eat it too by requiring them to legally stand by the promises of academic freedom that they so readily abandon when it suits them.

    NIH could additionally require that this contractual promise come with standard due process protections for those filing grievances at their institution, including:

    • The right to bring an academic freedom grievance before an objective panel;
    • The right to present evidence;
    • The right to speedy resolution;
    • The right to written explanation of findings including facts and reasons; and
    • The right to appeal.

    If the professor exhausts these options, they may sue for breach of the contract. To reduce the burden of litigation, NIH could require that, if a faculty member prevails in a lawsuit over a violation of academic freedom, the violating institution would not be eligible for future NIH funding until they pay the legal fees of the aggrieved faculty member.

    NIH could also study violations of academic freedom by creating a system for those connected to NIH-funded research to report violations of academic freedom or scientific integrity.

    It would further be proper for NIH to require institutions to eliminate any political litmus tests, such as mandatory DEI statements, as a condition of grant eligibility.

    The National Institutes of Health can implement strong measures to protect transparency and integrity in science

    NIH could encourage open science and transparency principles by heavily favoring studies that are pre-registered. Additionally, to obviate concerns that scientific results may be suppressed or buried because they are unpopular or politically inconvenient, NIH could require grant-funded projects to make their data available (with proper privacy safeguards) following completion of the project. 

    To help deal with the perverse incentives that have created the replication crisis and undermined public trust in science, NIH could create impactful incentives for work on replications and the publication of null results.

    Finally, NIH could help prevent the abuse of Institutional Review Boards. When IRB review is appropriate for an NIH-funded project, NIH could require that review be limited to the standards laid out in the gold-standard Belmont Report. Additionally, it could create a reporting system for IRB processes being abused to suppress ethical research, delay it beyond reasonable timeframes, or violate academic freedom.

    The National Institutes of Health can incentivize study into campus climates for academic freedom

    As noted before, FIRE’s College Free Speech Rankings focus on students. Due to logistical and resource difficulties surveying faculty, our 2024 Faculty Report looking into many of the same issues took much longer and had to be limited in scope to 55 campuses, compared to the 250+ in the CFSR. This is to say there is a strong need for research to understand faculty views and experiences on academic freedom. After all, we cannot solve a problem until we understand it. To that effect, NIH should incentivize further study into faculty’s academic freedom.

    It is important to note that these studies should be informational and not used in a punitive manner, or to decide on NIH funding eligibility. This is because tying something as important as NIH funding to the results of the survey would create so significant an incentive to influence the results that the data would be impossible to trust. Even putting aside malicious interference by administrators and other faculty members, few faculty would be likely to give honest answers that imperiled institutional funding, knowing the resulting loss in funding might threaten their own jobs.

    Efforts to do these kinds of surveys in Wisconsin and Florida proved politically controversial, and at least initially, led to boycotts, which threatened to compromise the quality and reliability of the data. As such, it’s critical that any such survey be carried out in a way that maximizes trust, under the following principles:

    • Ideally, these surveys should be administered by an unbiased third party — not the schools themselves, or NIH. This third party should include respected researchers from across the political spectrum and have no partisan slant.
    • The survey sample must be randomized and not opt-in.
    • The questionnaire must be made public beforehand, and every effort should be made for the questions to be worded without any overt partisanship or ideology that would reduce trust.

    Conclusion: With great power…

    FIRE has for the last two decades been America’s premier defender of free speech and academic freedom on campus. Following Frederick Douglass’s wise dictum, “I would unite with anybody to do right and with nobody to do wrong,” we’ve worked with Democrats, Republicans, and everyone in between (and beyond) to advance free speech and open inquiry, and we’ve criticized them in turn whenever they’ve threatened these values.

    With that sense of both opportunity and caution, we would be heartened if NIH used its considerable power wisely in an effort to improve scientific integrity and academic freedom. But if wielded recklessly, that same considerable power threatens to do immense damage to science in the process. 

    We stand ready to advise if called upon, but integrity demands that we correct the record if we believe our data is being used for a purpose to which it isn’t suited.

    Source link

  • OpenAI invests $50M in higher ed research

    OpenAI invests $50M in higher ed research

    OpenAI announced Tuesday that it’s investing $50 million to start up NextGenAI, a new research consortium of 15 institutions that will be “dedicated to using AI to accelerate research breakthroughs and transform education.”

    The consortium, which includes 13 universities, is designed to “catalyze progress at a rate faster than any one institution would alone,” the company said in a news release.

    “The field of AI wouldn’t be where it is today without decades of work in the academic community. Continued collaboration is essential to build AI that benefits everyone,” Brad Lightcap, chief operating officer of OpenAI, said in the news release. “NextGenAI will accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI.”

    The company, which launched ChatGPT in late 2022, will give each of the consortium’s 15 institutions—including Boston Children’s Hospital and the Boston Public Library—millions in funding for research and access to computational resources as part of an effort “to support students, educators, and researchers advancing the frontiers of knowledge.” 

    Institutional initiatives supported by NextGenAI vary widely but will include projects focused on AI literacy, advancing medical research, expanding access to scholarly resources and enhancing teaching and learning. 

    The universities in the NextGenAI consortium are: 

    • California Institute of Technology
    • California State University system
    • Duke University
    • University of Georgia
    • Harvard University
    • Howard University
    • Massachusetts Institute of Technology
    • University of Michigan
    • University of Mississippi
    • Ohio State University
    • University of Oxford (U.K.)
    • Sciences Po (France)
    • Texas A&M University

    Source link

  • Building inclusive research cultures – How can we rise above EDI cynicism?

    Building inclusive research cultures – How can we rise above EDI cynicism?

    • Dr Elizabeth Morrow is a Research Consultant, Senior Research Fellow at the Royal College of Surgeons in Ireland, and Public Contributor to the Shared Commitment to Public Involvement on behalf of the National Institute for Health and Care Research.
    • Professor Tushna Vandrevala is Professor of Health Psychology at Kingston University.
    • Professor Fiona Ross CBE is Professor Emerita of Health and Social Care at Kingston University, Deputy Chair of the Westminster University Court of Governors and Trustee of the Great Ormond Street Hospital Charity.

    Commitment and Motivation for Inclusive Research

    The commitment to inclusivity in UK research cultures and practices will endure despite political shifts abroad, and it will continue to thrive. Rooted in ethical and moral imperatives, inclusivity is fundamentally the right approach. Moreover, extensive evidence from sources such as The Lancet, UNESCO and WHO highlights the far-reaching benefits of inclusive research practices across sectors like healthcare and global development. These findings demonstrate that inclusivity not only enhances research quality but also fosters more equitable outcomes.

    We define ‘inclusive research’ as the intentional engagement of diverse voices, communities, perspectives, and experiences throughout the research process. This encompasses not only who conducts the research but also how it is governed, funded, and integrated into broader systems, such as policy and practice.

    Beyond higher education, corporate leaders have increasingly embraced inclusivity. Research by McKinsey & Company shows that companies in the top quartile for gender diversity are 25% more likely to outperform their peers in profitability, while those leading in ethnic diversity are 36% more likely to do so. This clear link between inclusivity, innovation, and financial success reinforces the value of diverse teams in driving competitive advantage. Similarly, Egon Zehnder’s Global Board Diversity Tracker highlights how diverse leadership enhances corporate governance and decision-making, leading to superior financial performance and fostering innovation.

    Inclusion in research is a global priority: research systems worldwide have taken a ‘participative turn’ to address uncertainty and seek solutions to complex challenges such as the Sustainable Development Goals. From climate change to the ethical and societal implications of Artificial Intelligence (AI), inclusive research helps ensure that diverse perspectives shape solutions that are effective, fair and socially responsible.

    Take the example of AI and gender bias – evidence shows that women are frequently not included in technology research and are underrepresented in data sets. This produces biased algorithms, with negative consequences for the sensitivity, authenticity and uptake of AI-enabled interventions by women. Similar biases in AI have been found for other groups who are often overlooked because of their age, gender, sexuality, disability or ethnicity.

    Accelerating Inclusion in UK Research

    A recent horizon scan of concepts related to the UK research inclusion landscape indicates the domains in which inclusive research is being developed and implemented, as illustrated in Figure 1.

    Inclusion is being accelerated by the Research Excellence Framework (REF) 2029, which places a stronger focus on assessing People, Culture, and Environment (PCE). REF 2029 emphasises the integration of EDI considerations across research institutions, with a focus on creating equitable and supportive cultures for researchers, participants and communities. The indicators and measures of inclusion that will be developed and used are important because they can bring diverse perspectives, knowledge, skills and worldviews into research processes and institutions, thereby increasing relevance and improving outcomes. All units of assessment and panels involved in the REF process will have guidance from the People and Diversity Advisory Panel and the Research Diversity Advisory Panel. This means that inclusion will develop both in the culture of research institutions and in the practices that shape research assessment.

    The National Institute for Health and Care Research (NIHR), the largest funder of health and social care research in the UK, has pioneered inclusion for over 30 years and prioritises it in its operating principles (see the NIHR Research Inclusion Strategy 2022-2027). NIHR’s new requirements for Research Inclusion (RI) will be a powerful lever for addressing inequalities in health and care. NIHR now requires all its domestic commissioned research to address RI at the proposal stage, actively involve appropriate publics, learn from them and use this learning to inform impact strategies and practices.

    Given the learning across these domains, we ask: How can the broader UK system share knowledge and learn from the setbacks and successes in inclusion, rather than continually reinventing the wheel? By creating spaces where research funders and institutions can share best practice, such as the Research Culture Enablers Network, we can accelerate progress and help scale up inclusive research across professional groups and disciplines. There are numerous examples of inclusive innovation, engaged research, and inclusive impact across disciplines and fields that could be shared to accelerate inclusion.

    Developing Shared Language and Inclusive Approaches

    Approaches to building inclusive cultures in research often come with passion and commitment from opinion leaders and change agents. As often happens when driving change, a technical language evolves that can become complex and therefore inaccessible to others. For example, the acronym RI can refer to research inclusion, research integrity or responsible innovation. Furthermore, community-driven research, public and community engagement, and Patient and Public Involvement (PPI) have become synonymous with inclusive research, and such participation is an important driver of inclusion.

    The language and practices associated with inclusive research vary by discipline, reflecting different contexts and goals. This can confuse rather than clarify, creating barriers to trust and to more effective inclusion strategies and practices. We ask: How can we establish shared understanding, methods of participation, accountability pathways and mechanisms that will promote inclusion in the different and dynamic contexts of UK research?

    Like other researchers with over 20 years of experience in the fields of inclusion and equity, we have found that interdisciplinary collaboration, participatory methods, co-production and co-design offer valuable insights by listening to and engaging with publics and communities on their own terms and territory. An inclusive approach has deepened our understanding and provided new perspectives on framing, methodological development and the critical interpretation of research.

    Final reflection

    A key question for overcoming EDI cynicism is: How can we deepen our understanding and integration of intersectionality, inclusive methods, open research, cultural competency, power dynamics, and equity considerations throughout research processes, institutions, and systems? There is always more to learn, and inclusive research cultures can facilitate that learning.

    Figure 1. Inclusive Research Dimensions

  • How will cutting NAEP for 17-year-olds impact postsecondary readiness research?

    With the U.S. Department of Education’s cancellation of the National Assessment of Educational Progress for 17-year-olds, education researchers are losing one resource for evaluating post-high school readiness — though some say the test was already a missed opportunity since it hadn’t been administered since 2012.

    The department cited funding issues in its cancellation of the exam, which had been scheduled to take place this March through May.

    Since the 1970s, NAEP has monitored reading and math performance for students ages 9, 13 and 17. These assessments — long heralded as The Nation’s Report Card — measure students’ educational progress over long periods to identify and monitor trends in academic performance.

    The cancellation of the NAEP Long-Term Trend assessment for 17-year-olds came just days before the Trump administration abruptly placed Peggy Carr, commissioner of the National Center for Education Statistics and, as such, the public voice of NAEP, on paid leave.

    Carr has worked for the Education Department and NCES for over 30 years through both Republican and Democratic administrations. President Joe Biden appointed her NCES commissioner in 2021, with a term to end in 2027.

    The decision to drop the 2025 NAEP for 17-year-olds also follows another abrupt decision by the Education Department and the Department of Government Efficiency, or DOGE, to cut about $881 million in multi-year education research contracts earlier this month. The Education Department had previously said NAEP would be excluded from those cuts.

    Compounding gaps in data

    “The cancellation of the Long-Term Trend assessment of 17-year-olds is not unprecedented,” said Madi Biedermann, deputy assistant secretary for communications for the Education Department, in an email.

    The assessment was supposed to be administered during the 2019-20 academic year, but COVID-19 canceled those plans.

    Some experts questioned the value of another assessment for 17-year-olds since the last one was so long ago.

    While longitudinal studies are an important tool for tracking inequity and potential disparities in students, the NAEP Long-Term Trend Age 17 assessment wasn’t able to do so because data hadn’t been collected as planned for more than a decade, according to Leigh McCallen, deputy executive director of research and evaluation at New York University Metropolitan Center for Research on Equity and the Transformation of Schools.

    “There weren’t any [recent] data points before this 2024 point, so in some ways it had already lost some of its value, because it hadn’t been administered,” McCallen said.

    McCallen added that she is more concerned about maintaining the NAEP assessments given every two years to 9- and 13-year-olds, because their consistency over the years provides a random-sample temperature check.

    According to the Education Department’s Biedermann, these other longitudinal assessments are continuing as normal.

    Cheri Fancsali, executive director at the Research Alliance for New York City Schools, said data from this year’s 17-year-olds would have provided a look at how students are rebounding from the pandemic. Now is a critical time to get the latest update on that level of information, she said.

    Fancsali pointed out that the assessment is a vital tool for evaluating the effectiveness of educational policies and that dismantling these practices is a disservice to students and the public. She said she is concerned about the impact on vulnerable students, particularly those from low-income backgrounds and underresourced communities.

    “Without an assessment like NAEP, inequities become effectively invisible in our education system and, therefore, impossible to address,” Fancsali said. 

    While tests like the ACT or SAT are other indicators of post-high-school readiness at the national level, Fancsali said they offer a “skewed perspective,” because not every student takes them.

    “The NAEP is the only standard assessment across states and districts, so it gives the ability to compare over time in a way that you can’t with any other assessment at the local level,” Fancsali said.

    Fancsali emphasized that parents, educators and policymakers should advocate for an assessment like NAEP, for both accountability and transparency.

    Likewise, McCallen said that despite the lack of continuity in the assessment for 17-year-olds, its cancellation gives cause for concern.

    “It represents the seriousness of what’s going on,” McCallen said. “When you cancel these contracts, you really do lose a whole set of information and potential knowledge about students throughout this particular point of time.”

  • How cuts at U.S. aid agency hinder university research

    Peter Goldsmith knows there’s a lot to love about soybeans. Although the crop is perhaps best known in America for its part in the stereotypically bougie soy milk latte, it plays an entirely different role on the global stage. Inexpensive to grow and chock-full of nutrients, it’s considered a potential solution to hunger and malnutrition.

    For the past 12 years, Goldsmith has worked toward that end. In 2013, he founded the Soybean Innovation Lab at the University of Illinois at Urbana-Champaign, and every day since then, the lab’s scientists have worked to help farmers and businesses solve problems related to soybeans, from how to speed up threshing—the arduous process of separating the bean from the pod—to addressing a lack of available soybean seeds and varieties.

    The SIL, which now encompasses a network of 17 laboratories, has completed work across 31 countries, mostly in sub-Saharan Africa. But now, all that work is on hold, and Goldsmith is preparing to shut down the Soybean Innovation Lab in April, thanks to massive cuts to the federal foreign aid funds that support the labs.

    A week into the current presidential administration, Goldsmith received notice that the Soybean Innovation Lab, which is headquartered at the University of Illinois, had to pause operations, cease external communications and minimize costs, pending a federal government review.

    Goldsmith told his team—about 30 people on UIUC’s campus whom he described as being like family to one another—that, though they were ordered to stop work, they could continue working on internal projects, like refining their software. But days later, he learned the university could no longer access the lab’s funds in Washington, meaning there was no way to continue paying employees.

    After talking with university administrators, he set a date for the Illinois lab to close: April 15, unless the freeze ended after the government review. But no review materialized; on Feb. 26, the SIL received notice its grant had been terminated, along with about 90 percent of the U.S. Agency for International Development’s programs.

    “The University of Illinois is a very kind, caring sort of culture; [they] wanted to give employees—because it was completely an act of God, out of the blue—give them time to find jobs,” he said. “I mean, up until [Jan. 27], we were full throttle, we were very successful, phones ringing off the hook.”

    The other 16 labs will likely also close, though some are scrambling to secure other funding.

    Federal money made up 99 percent of the Illinois lab’s funding, according to Goldsmith. In 2022, the lab received a $10 million grant intended to last through 2027.

    Dismantling an Agency

    The SIL is among the numerous university laboratories impacted by the federal freeze on U.S. Agency for International Development funds—an initial step in what’s become President Donald Trump’s crusade to curtail supposedly wasteful government spending—and the subsequent termination of thousands of grants.

    Trump and Elon Musk, the richest man on Earth and a senior aide to the president, have baselessly claimed that USAID is run by left-wing extremists and say they hope to shutter the agency entirely. USAID’s advocates, meanwhile, have countered that the agency instead is responsible for vital, lifesaving work abroad and that the funding freeze is sure to lead to disease, famine and death.

    A federal judge, Amir H. Ali, seemed to agree, ruling earlier this month that the funding freeze is doing irreparable harm to humanitarian organizations that have had to cut staff and halt projects, NPR and other outlets reported. On Tuesday, Ali reiterated his order that the administration resume funding USAID, giving it until the end of the day Wednesday to do so.

    But the administration appealed the ruling, and the Supreme Court subsequently paused the deadline until the justices can weigh in. Now, officials appear to be moving forward with plans to fire all but a small number of the agency’s employees, directing employees to empty their offices and giving them only 15 minutes each to gather their things.

    About $350 million of the agency’s funds were appropriated to universities, according to the Association of Public and Land-grant Universities, including $72 million for the Feed the Future Innovation Labs, which are aimed at researching solutions to end hunger and food insecurity worldwide. (The SIL is funded primarily by Feed the Future.)

    It’s a small amount compared to the funding universities receive from other agencies, like the National Institutes of Health, also the subject of deep cuts by Trump and Musk. But USAID-funded research is a long-standing and important part of the nation’s foreign policy, as well as a resource for the international community, advocates say. The work also has broad, bipartisan support; in fiscal year 2024, Congress increased funding for the Feed the Future Initiative labs by 16 percent, according to Craig Lindwarm, senior vice president for government affairs at the APLU, even in what he characterized as an extremely challenging budgetary environment.

    Potential Long-Term Harms

    Universities “have long been a partner with USAID … to help accomplish foreign policy and diplomatic goals of the United States,” said Lindwarm. “This can often but not exclusively come in the form of extending assistance as it relates to our agricultural institutions, and land-grant institutions have a long history of advancing science in agriculture that boosts yields and productivity in the United States and also partner countries, and we’ve found that this is a great benefit not just to our country, but also partner nations. Stable food systems lead to stable regions and greater market access for producers in the United States and furthers diplomatic objectives in establishing stronger connections with partner countries.”

    Stopping that research has negatively impacted “critical relationships and productivity,” with the potential for long-term harms, Lindwarm said.

    At the SIL, numerous projects have now been canceled, including a planned trip to Africa to beta test a pull-behind combine, a technology that is not commonly used anymore in the U.S.—most combines are now self-propelled rather than pulled by tractor—but that would be useful to farmers in Africa. A U.S. company was slated to license the technology to farmers in Africa, Goldsmith said, but now, “that’s dead. The agribusiness firm, the U.S. firm, won’t be licensing in Africa,” he said. “A good example of market entry just completely shut off.”

    He also noted that the lab closures won’t just impact clients abroad and U.S. companies; they will also be detrimental to UIUC, which did not respond to a request for comment.

    “In our space, we’re well-known. We’re really relevant. It makes the university extremely relevant,” he said. “We’re not an ivory tower. We’re in the dirt, literally, with our partners, with our clients, making a difference, and [that] makes the university an active contributor to solving real problems.”

  • New research questions DOGE claims about ED cut savings

    New research suggests that the Department of Government Efficiency has been making inaccurate claims about the extent of its savings from cuts to the Department of Education.

    DOGE previously posted on X that it had ended 89 contracts from the Education Department’s research arm, the Institute of Education Sciences, worth $881 million. But an analysis released Wednesday by the left-leaning think tank New America found that those contracts were worth about $676 million—roughly $200 million less than DOGE claimed. DOGE’s “Wall of Receipts” website, where it tracks its cuts, later suggested the savings from 104 Education Department contracts came to a more modest $500 million.

    New America also asserted that the terminations squander money already spent: the government had paid out almost $400 million on the now-terminated Institute of Education Sciences contracts, and those funds have effectively gone to waste.
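
    Laid side by side, the reported figures make the gap plain. The arithmetic below is a back-of-the-envelope sketch of the numbers quoted in this article (not New America’s actual methodology), and the final line is an inference from those figures rather than a claim made by either analysis.

        # All figures approximate, as reported in the article above.
        doge_claim = 881_000_000         # DOGE's posted savings for 89 IES contracts
        new_america_value = 676_000_000  # New America's estimate for those contracts
        already_spent = 400_000_000      # "almost $400 million" already paid out

        overstatement = doge_claim - new_america_value
        print(f"overstatement: ~${overstatement / 1e6:.0f}M")  # ~$205M, i.e. "roughly $200 million"

        # Inference, not from either report: money already paid out cannot be
        # recovered, so the unspent remainder bounds any true future savings.
        unspent_remainder = new_america_value - already_spent
        print(f"unspent remainder: ~${unspent_remainder / 1e6:.0f}M")  # ~$276M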

    “Research cannot be undone, and statistics cannot be uncollected. Instead, they will likely sit on a computer somewhere untouched,” New America researchers wrote in a blog post about their findings.

    In a separate analysis shared last week, the American Enterprise Institute, a right-leaning think tank, also called into question DOGE’s claims about its Education Department cuts.

    Nat Malkus, senior fellow and deputy director of education policy studies at AEI, compared DOGE’s contract values with the department’s listed values and found they “seldom matched” and DOGE’s values were “always higher,” among other problems with DOGE’s data.

    “DOGE has an unprecedented opportunity to cut waste and bloat,” Malkus said in a post about his research. “However, the sloppy work shown so far should give pause to even its most sympathetic defenders.”
