Tag: Experts

  • Experts react to artificial intelligence plan – Campus Review

    Australia’s first national plan for artificial intelligence aims to upskill workers to boost productivity, but will leave the tech largely unregulated and without its own legislation to operate under.

  • ‘End of an era’: Experts warn research executive order could stifle scientific innovation

    An executive order that gives political appointees new oversight for the types of federal grants that are approved could undercut the foundation of scientific research in the U.S., research and higher education experts say. 

President Donald Trump’s order, signed Aug. 7, directs political appointees at federal agencies to review grant awards to ensure they align with the administration’s “priorities and the national interest.”

    These appointees are to avoid giving funding to several types of projects, including those that recognize sex beyond a male-female binary or initiatives that promote “anti-American values,” though the order doesn’t define what those values are.   

    The order effectively codifies the Trump administration’s moves to deny or suddenly terminate research grants that aren’t in line with its priorities, such as projects related to climate change, mRNA research, and diversity, equity and inclusion.

The executive order’s mandates mark a big departure from norms before the second Trump administration. Previously, career experts rather than political appointees provided oversight, and peer review was the core way to evaluate projects.

    Not surprisingly, the move has brought backlash from some quarters.

    The executive order runs counter to the core principle of funding projects based on scientific merit — an idea that has driven science policy in the U.S. since World War II, said Toby Smith, senior vice president for government relations and public policy at the Association of American Universities. 

    “It gives the authority to do what has been happening, which is to overrule peer-review through changes and political priorities,” said Smith. “This is really circumventing peer review in a way that’s not going to advance U.S. science and not be healthy for our country.”

    That could stifle scientific innovation. Trump’s order could prompt scientists to discard their research ideas, not enter the scientific research field or go to another country to complete their work, research experts say. 

    Ultimately, these policies could cause the U.S. to fall from being one of the top countries for scientific research to one near the bottom, said Michael Lubell, a physics professor at the City College of New York.

    “This is the end of an era,” said Lubell. “Even if things settle out, the damage has already been done.”

    A new approach to research oversight

Under the order, senior political appointees or their designees will review new federal awards as well as ongoing grants and terminate those that don’t align with the administration’s priorities.

    This policy is a far cry from the research and development strategy developed by Franklin D. Roosevelt’s administration at the end of World War II. Vannevar Bush, who headed the U.S. Office of Scientific Research and Development at the time, decided the U.S. needed a robust national program to fund research that would leave scientists to do their work free from political pressure. 

    Bush’s strategy involved some government oversight over research projects, but it tended to defer to the science community to decide which projects were most promising, Lubell said. 

    “That kind of approach has worked extremely well,” said Lubell. “We have had strong economic growth. We’re the No. 1 military in the world, our work in the scientific field, whether it’s medicine, or IT — we’re right at the forefront.”

    But Trump administration officials, through executive orders and in public hearings, have dismissed some federal research as misleading or unreliable — and portrayed the American scientific enterprise as one in crisis. 

    The Aug. 7 order cited a 2024 report from the U.S. Senate Commerce, Science, and Transportation Committee, led by its then-ranking member and current chairman, Sen. Ted Cruz, R-Texas, that alleged more than a quarter of National Science Foundation spending supported DEI and other “left-wing ideological crusades.” House Democrats, in a report released in April, characterized Cruz’s report as “a sloppy mess” that used flawed methodology and “McCarthyistic tactics.”

  • We’re testing preschoolers for giftedness. Experts say that doesn’t work

    by Sarah Carr, The Hechinger Report
    October 31, 2025

    When I was a kindergartner in the 1980s, the “gifted” programming for my class could be found inside of a chest. 

    I don’t know what toys and learning materials lived there, since I wasn’t one of the handful of presumably more academically advanced kiddos that my kindergarten teacher invited to open the chest. My distinct impression at the time was that my teacher didn’t think I was worthy of the enrichment because I frequently spilled my chocolate milk at lunch and I had also once forgotten to hang a sheet of paper on the class easel — instead painting an elaborate and detailed picture on the stand itself. The withering look on my teacher’s face after seeing the easel assured me that, gifted, I was not.

    The memory, and the enduring mystery of that chest, resurfaced recently when New York City mayoral front-runner Zohran Mamdani announced that if elected on Nov. 4, he would support ending kindergarten entry to the city’s public school gifted program. While many pundits and parents debated the political fallout of the proposal — the city’s segregated gifted program has for decades been credited with keeping many white and wealthier families in the public school system — I wondered what exactly it means to be a gifted kindergartner. In New York City, the determination is made several months before kindergarten starts, but how good is a screening mechanism for 4-year-olds at predicting academic prowess years down the road? 

New York is not unique in opting to send kids as young as preschool down an accelerated path, no repeat display of giftedness required. It’s common practice at many private schools to try to measure young children’s academic abilities for admissions purposes. Other communities, including Houston and Miami, start gifted or accelerated programs in public schools as early as kindergarten, according to the National Center for Research on Gifted Education. When I reported on schools in New Orleans 15 years ago, they even had a few gifted prekindergarten programs at highly sought-after public schools, which enrolled 4-year-olds whose seemingly stunning intellectual abilities were determined at age 3. It’s more common, however, for gifted programs in the public schools to start between grades 2 and 4, according to the center’s surveys.

    There is an assumption embedded in the persistence of gifted programs for the littles that it’s possible to assess a child’s potential, sometimes before they even start school. New York City has followed a long and winding road in its search for the best way to do this. And after more than five decades, the city’s experience offers a case study in how elusive — and, at times, distracting — that quest remains. 

    Three main strategies are used to assign young children to gifted programs, according to the center. The most common path is cognitive testing, which attempts to rate a child’s intelligence in relation to their peer group. Then there is achievement testing, which is supposed to measure how much and how fast a child is learning in school. And the third strategy is teacher evaluations. Some districts use the three measures in combination with each other.

    For nearly four decades, New York prioritized the first strategy, deploying an ever-evolving array of cognitive and IQ tests on its would-be gifted 4-year-olds — tests that families often signed up for in search of competitive advantage as much as anything else.

    Several years ago, a Brooklyn parent named Christine checked out an open house for a citywide gifted elementary school, knowing her child was likely just shy of the test score needed to get in. (Christine did not want her last name used to protect her daughter’s privacy.) 

The school required her to show paperwork at the door confirming that her daughter had a relatively high score, and when Christine flashed the proof, the PTA member at the door congratulated her. That and the lack of diversity gave the school an exclusive vibe, Christine recalled.

    “The resources were incredible,” she said. “The library was huge, there was a room full of blocks. It definitely made me envious, because I knew she was not getting in.” Yet years later, she feels “icky” about even visiting.

    Eishika Ahmed’s parents had opportunities of all kinds in mind when they had her tested for gifted kindergarten nearly two decades ago. Ahmed, now 23, remembers an administrator in a small white room with fluorescent lights asking her which boat in a series of cartoonish pictures was “wide.” The then 4-year-old had no idea. 

    “She didn’t look very pleased with my answer,” Ahmed recalled. She did not get into the kindergarten program.

    Equity and reliability have been long-running concerns for districts relying on cognitive tests.

In New York, public school parents in some districts were once able to pay private psychologists to evaluate their children — a permissiveness that led to “a series of alleged abuses,” wrote Norm Fruchter, a now-deceased activist, educator and school board leader, in a 2019 article called “The Spoils of Whiteness: New York City’s Gifted and Talented Programs.”

    In New Orleans, there was a similar disparity between the private and public testing of 3-year-olds when I lived and reported on schools there. Families could sit on a waitlist, sometimes for months, to take their children through the free process at the district central office. In 2008, the year I wrote about the issue, only five of the 153 3-year-olds tested by the district met the gifted benchmark. But families could also pay a few hundred dollars and go to a private tester who, over the same time period, identified at least 64 children as gifted. “I don’t know if everybody is paying,” one parent told me at the time, “but it defeats the purpose of a public school if you have to pay $300 to get them in.”

Even after New York City districts outlawed private testers, concerns persisted about parents paying for pricey and extensive test prep to teach their children the common words and concepts featured on the tests. Moreover, some researchers have worried about racial and cultural bias in cognitive tests more generally. Critics, Fruchter wrote, had long considered the tests at least partly a measure of knowledge of the “reigning cultural milieu in which test-makers and applicants alike were immersed.”

    Across the country, these concerns have led some schools and districts, including New York City, to shift to “nonverbal tests,” which try to assess innate capacity more than experience and exposure. 

    But those tests haven’t made cognitive testing more equitable, said Betsy McCoach, a professor of psychometrics and quantitative psychology at Fordham University and co-principal investigator at the National Center for Research on Gifted Education.

    “There is no way to take prior experience out of a test,” she said. “I wish we could.” Children who’ve had more exposure to tests, problem-solving and patterns are still going to have an advantage on a nonverbal test, McCoach added. 

    And no test can overcome the fact that for very young children, scores can change significantly from year to year, or even week to week. In 2024, researchers analyzed more than 200 studies on the stability of cognitive abilities at different ages. They found that for 4-year-olds, cognitive test scores are not very predictive of long-term scores — or even, necessarily, short-term ones. 

    There’s not enough stability “to say that if we assess someone at age 4, 5, 6 or 7 that a child would or wouldn’t be well-served by being in a gifted program” for multiple years, said Moritz Breit, the lead author of the study and a post-doctoral researcher in the psychology department at the University of Trier in Germany.

    Scores don’t start to become very consistent until later in elementary school, with stability peaking in late adolescence.

    But for 4-year-olds? “Stability is too low for high-stakes decisions,” he said.

    Eishika Ahmed is just one example of how early testing may not predict future achievement. Even though she did not enroll in the kindergarten gifted program, by third grade she was selected for an accelerated program at her school called “top class.”

    Years later, still struck by the inequity of the whole process, she wrote a 2023 essay for the think tank The Century Foundation about it. “The elementary school a child attends shouldn’t have such significant influence over the trajectory of their entire life,” she wrote. “But for students in New York City public schools, there is a real pipeline effect that extends from kindergarten to college. Students who do not enter the pipeline by attending G&T programs at an early age might not have the opportunity to try again.”

    Partly because of the concerns about cognitive tests, New York City dropped intelligence testing entirely in 2021 and shifted to declaring kindergartners gifted based on prekindergarten teacher recommendations. A recent article in Chalkbeat noted that after ending the testing for the youngest, diversity in the kindergarten gifted program increased: In 2023-24, 30 percent of the children were Black and Latino, compared to just 12 percent in 2020, Chalkbeat reported. Teachers in the programs also describe enrolling a broader range of students, including more neurodivergent ones. 

    The big problem, according to several experts, is that when hundreds of individual prekindergarten teachers evaluate 4-year-olds for giftedness, any consistency in defining it can get lost, even if the teachers are guided on what to look for. 

    “The word is drained of meaning because teachers are not thinking about the same thing,” said Sam Meisels, the founding executive director of the Buffett Early Childhood Institute at the University of Nebraska.

    Breit said that research has found that teacher evaluations and grades for young children are less stable and predictive than the (already unstable) cognitive testing. 

    “People are very bad at looking at another person and inferring a lot about what’s going on under the hood,” he said. “When you say, ‘Cognitive abilities are not stable, let’s switch to something else,’ the problem is that there is nothing else to switch to when the goal is stability. Young children are changing a lot.”

    No one denies that access to gifted programming has been transformative for countless children. McCoach, the Fordham professor, points out that there should be something more challenging for the children who arrive at kindergarten already reading and doing arithmetic, who can be bored moving at the regular pace.

    In an ideal world, experts say, there would be universal screening for giftedness (which some districts, but not New York, have embraced), using multiple measures in a thoughtful way, and there would be frequent entry — and exit — points for the programs. In the early elementary years, that would look less like separate gifted programming and a lot more like meeting every kid where they are. 

    “The question shouldn’t really be: Are you the ‘Big G’?” said McCoach. “That sounds so permanent and stable. The question should be: Who are the kids who need something more than what we are providing in the curriculum?”

    But in the real world, individualized instruction has frequently proved elusive with underresourced schools, large class sizes and teachers who are tasked with catching up the students who are furthest behind. That persistent struggle has provided advocates of gifted education in the early elementary years with what’s perhaps their most powerful argument in sustaining such programs — but it reminds me of that old adage about treating the symptom rather than the disease. 

    At some point a year or two after kindergarten, I did get the chance to be among the chosen when I was selected for a pull-out program known as BEEP. I have no recollection of how we were picked, how often we met or what we did, apart from a performance the BEEP kids held of St. George and the Dragon. I played St. George and I remember uttering one line, declaring my intent to fight the dragon or die. I also remember vividly how much being in BEEP boosted my confidence in my potential — probably its greatest gift.

    Forty years later, the research is clear that every kid deserves the chance — and not just one — to slay a dragon. “You want to give every child the best opportunity to learn as possible,” said Meisels. But when it comes to separate gifted programming for select early elementary school students, “Is there something out there that says their selection is valid? We don’t have that.” 

    “It seems,” he added, “to be a case of people just fooling themselves with the language.” 

    Contact contributing writer Sarah Carr at [email protected]. 

This story about gifted education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

This article first appeared on The Hechinger Report (hechingerreport.org) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


  • The Experts in My Neighborhood – Teaching in Higher Ed

This post is one of many related to my participation in Harold Jarche’s Personal Knowledge Mastery workshop.

The idea that expertise is no longer valued is often discussed, and I realize that I am walking well-trodden pathways by bringing it up in these reflections on experts. In The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters, Tom Nichols writes:

    These are dangerous times. Never have so many people had access to so much knowledge, and yet been so resistant to learning anything.

    In today’s post, I want to think less about the societal and educational concerns I have about the death of expertise and more about how I might continue to attempt to inculcate habits that can keep me from dying that same death, myself. Part of that practice involves finding and curating many experts to help shape my thinking, over time.

    PKM Roles from Harold Jarche

    For this topic, Jarche invites us to use a map of personal knowledge mastery (PKM) roles to determine where we currently reside and where we would like to go, in terms of our PKM practice. He offers this graphic as part of his Finding Perpetual Beta book:

On the Y axis, we can sort ourselves by high or low amounts of sharing. As I wrote previously, my likelihood of sharing depends directly on the topic I’m exploring. However, since Jarche recommended social bookmarking as one way of sharing, perhaps I was selling myself short when I categorized myself as not likely to share anything overly controversial. I have more than 35,000 digital bookmarks on Raindrop.io and add around 10 to 20 daily. I’m also more likely to be categorized as a highly visible sharer in terms of the Teaching in Higher Ed podcast and the topics I write about on the Teaching in Higher Ed blog.

    On the X axis, our activities are plotted on a continuum more toward high or low sense-making. A prior workshop participant of Jarche’s wrote:

    We must make SENSE of everything we find, and that includes prioritising–recognising what is useful now, what will be useful later, and what may not be useful.

    Given my propensity for saving gazillions of bookmarks and carefully tagging them for future use, combined with my streak of weekly podcast episodes airing since June of 2014, when it comes to teaching and learning, I’m doing a lot of sense-making on the regular.

    These are the (NEW) Experts in My Neighborhood

Taking inspiration from Sesame Street’s People in Your Neighborhood and from Jarche’s activity related to experts, I offer the following notes on experts. When I searched for people within teaching and learning on Mastodon, I found that I was already following a lot of them. I then decided to look at whom the people I already follow are following:

    • Ethan Zuckerman – UMass Amherst, Global Voices, Berkman Klein Center. Formerly MIT Media Lab, Geekcorps, Tripod.com
    • Sarah T. Roberts, Ph.D. – Professor, researcher, writer, teacher. I care about content moderation, digital labor, the state of the world. I like animals and synthesizers and games. On the internet since 1993. Mac user since they came out. I like old computers and OSes. I love cooking. Siouxsie is my queen.
      • I was intrigued by her having written a content moderation book called Behind the Screen. I know enough about content moderation to know that I know pretty much nothing about content moderation.
• She hasn’t posted in a long while, so I’m not sure how many ongoing opportunities I’ll have to see what she’s currently exploring or otherwise working on.

    Other Things I Noticed

As I was exploring who the people I follow are connected with on Mastodon, I noticed that you can have multiple pinned posts, unlike other social media I’ve used. Many people have an introduction post pinned to the top of their posts, yet also have other things they want to have front and center. For me, one big advantage of Bluesky has been the prevalence of starter packs. The main Mastodon account mentioned an upcoming feature involving “packs” around twenty days ago, but said that they’re not sure what they’ll call the feature.

    Sometimes, scrolling through social media can be depressing. I decided that the next time I’m getting down on Mastodon, I should just check out what’s happening on the compostodon hashtag. It may be the most hopeful hashtag ever.

    The Biggest Delight From the Experience

Another person who was new to me as an expert on Mastodon was JA Westenberg. According to JA Westenberg’s bio, Joan is a tech writer, angel investor, CMO, and founder. A succinct goal is also included on the about page of JoanWestenberg.com:

    My goal: to think in public.

As I was winding down my time doing some sensemaking related to experts, I came across a video from Westenberg that was eerily similar to what Jarche has been stressing about making PKM a practice. I can’t retrace my steps for how I came across Joan’s video on Mastodon, but a video thumbnail quickly caught my eye. Why You Should Write Every Day (Even if You’re Not a Writer) captured my imagination immediately as I started watching. In addition to the video, there’s a written article with the same title, as well.

As I continue to pursue learning through the PKM workshop, I’m blogging more frequently than I may ever have (at least in the last decade for sure). Reading through Joan’s reactions to the excuses we make when we don’t commit to writing resonates hard. We think we don’t have time. How about realizing we’re not writing War and Peace, Joan teases, gently. Too many of us get the stinking thinking that we don’t have anything good to say or that this comes naturally to people who are more talented and articulate than we are. Joan writes:

    Writing every day is less about becoming someone who writes, and more about becoming someone who thinks.

    Before I conclude this post, I want to be sure to stress the importance I’m gleaning of not thinking of individual experts as the way to practice PKM. Rather, it is through engaging with a community of experts that we will experience the deepest learning. A.J. Jacobs stresses that we should heed his advice:

    Thou shalt pay heed to experts (plural) but be skeptical of any one expert (singular)

By cultivating many experts, we allow their potential disagreements to help us develop a more nuanced perspective on complex topics. When we seek to learn in the complex domain, intentionality, intellectual humility, and curiosity become even more crucial. Having access to a network of experts helps us navigate complexity more effectively.

  • Experts knock new Trump plan to collect college admissions data

    President Donald Trump wants to collect more admissions data from colleges and universities to make sure they’re complying with a 2023 Supreme Court decision that ended race-conscious affirmative action. And he wants that data now. 

    But data experts and higher education scholars warn that any new admissions data is likely to be inaccurate, impossible to interpret and ultimately misused by policymakers. That’s because Trump’s own policies have left the statistics agency inside the Education Department with a skeleton staff and not enough money, expertise or time to create this new dataset. 

    The department already collects data on enrollment from every institution of higher education that participates in the federal student loan program. The results are reported through the Integrated Postsecondary Education Data System (IPEDS). But in an Aug. 7 memorandum, Trump directed the Education Department, which he sought to close in March, to expand that task and provide “transparency” into how some 1,700 colleges that do not admit everyone are making their admissions decisions. And he gave Education Secretary Linda McMahon just 120 days to get it done. 

    Expanding data collection on applicants is not a new idea. The Biden administration had already ordered colleges to start reporting race and ethnicity data to the department this fall in order to track changes in diversity in postsecondary education. But in a separate memorandum to the head of the National Center for Education Statistics (NCES), McMahon asked for even more information, including high school grades and college entrance exam scores, all broken down by race and gender.  

Bryan Cook, director of higher education policy at the Urban Institute, a think tank in Washington, D.C., called the 120-day timeline “preposterous” because of the enormous technical challenges. For example, IPEDS has never collected high school GPAs. Some schools use a weighted 5.0 scale, giving extra points for advanced classes, and others use an unweighted 4.0 scale, which makes comparisons messy. Other issues are equally thorny. Many schools no longer require applicants to report standardized test scores, and some no longer ask about race, so the data that Trump wants doesn’t exist for those colleges.

    “You’ve got this effort to add these elements without a mechanism with which to vet the new variables, as well as a system for ensuring their proper implementation,” said Cook. “You would almost think that whoever implemented this didn’t know what they were doing.” 

    Cook has helped advise the Education Department on the IPEDS data collection for 20 years and served on technical review panels, which are normally convened first to recommend changes to the data collection. Those panels were disbanded earlier this year, and there isn’t one set up to vet Trump’s new admissions data proposal.

    Cook and other data experts can’t figure out how a decimated education statistics agency could take on this task. All six NCES employees who were involved in IPEDS data collection were fired in March, and there are only three employees left out of 100 at NCES, which is run by an acting commissioner who also has several other jobs. 

An Education Department official, who did not want to be named, disputed the idea that no one left inside the Education Department has IPEDS experience. The official said that staff inside the office of the chief data officer, which is separate from the statistics agency, have a “deep familiarity with IPEDS data, its collection and use.” Former Education Department employees told me that some of these employees have experience in analyzing the data, but not in collecting it.

    In the past, there were as many as a dozen employees who worked closely with RTI International, a scientific research institute, which handles most of the IPEDS data collection work. 

    Technical review eliminated

    Of particular concern is that RTI’s $10 million annual contract to conduct the data collection had been slashed approximately in half by the Department of Government Efficiency, also known as DOGE, according to two former employees, who asked to remain anonymous out of fear of retaliation. Those severe budget cuts eliminated the technical review panels that vet proposed changes to IPEDS, and ended training for colleges and universities to submit data properly, which helped with data quality. RTI did not respond to my request to confirm the cuts or answer questions about the challenges it will face in expanding its work on a reduced budget and staffing.

    The Education Department did not deny that the IPEDS budget had been cut in half. “The RTI contract is focused on the most mission-critical IPEDS activities,” the Education Department official said. “The contract continues to include at least one task under which a technical review panel can be convened.”  

    Additional elements of the IPEDS data collection have also been reduced, including a contract to check data quality.

Last week, the scope of the new task became more apparent. On Aug. 13, the administration released more details about the new admissions data it wants, describing how the Education Department is attempting to add a whole new survey to IPEDS, called the Admissions and Consumer Transparency Supplement (ACTS), which will disaggregate all admissions data and most student outcome and financial aid data by race and gender. Colleges will have to report on both undergraduate and graduate school admissions. The public has 60 days to comment, and the administration wants colleges to start reporting this data this fall.

    Complex collection

    Christine Keller, executive director of the Association for Institutional Research, a trade group of higher education officials who collect and analyze data, called the new survey “one of the most complex IPEDS collections ever attempted.” 

    Traditionally, it has taken years to make much smaller changes to IPEDS, and universities are given a year to start collecting the new data before they are required to submit it. (Roughly 6,000 colleges, universities and vocational schools are required to submit data to IPEDS as a condition for their students to take out federal student loans or receive federal Pell Grants. Failure to comply results in fines and the threat of losing access to federal student aid.)

    Normally, the Education Department would reveal screenshots of data fields, showing what colleges would need to enter into the IPEDS computer system. But the department has not done that, and several of the data descriptions are ambiguous. For example, colleges will have to report test scores and GPA by quintile, broken down by race and ethnicity and gender. One interpretation is that a college would have to say how many Black male applicants, for example, scored above the 80th percentile on the SAT or the ACT. Another interpretation is that colleges would need to report the average SAT or ACT score of the top 20 percent of Black male applicants. 

    The Association for Institutional Research used to train college administrators on how to collect and submit data correctly and sort through confusing details — until DOGE eliminated that training. “The absence of comprehensive, federally funded training will only increase institutional burden and risk to data quality,” Keller said. Keller’s organization is now dipping into its own budget to offer a small amount of free IPEDS training to universities.

    The Education Department is also requiring colleges to report five years of historical admissions data, broken down into numerous subcategories. Institutions have never been asked to keep data on applicants who didn’t enroll. 

    “It’s incredible they’re asking for five years of prior data,” said Jordan Matsudaira, an economist at American University who worked on education policy in the Biden and Obama administrations. “That will be square in the pandemic years when no one was reporting test scores.”

    ‘Misleading results’

    Matsudaira explained that IPEDS had considered asking colleges for more academic data by race and ethnicity in the past, and that the Education Department ultimately rejected the proposal. One concern is that slicing and dicing the data into smaller and smaller buckets would mean that there would be too few students and the data would have to be suppressed to protect student privacy. For example, if there were two Native American men in the top 20 percent of SAT scores at one college, many people might be able to guess who they were. And a large amount of suppressed data would make the whole collection less useful.

    Also, small numbers can lead to wacky results. For example, a small college could have only two Hispanic male applicants with very high SAT scores. If both were accepted, that’s a 100 percent admittance rate. If only 200 white women out of 400 with the same test scores were accepted, that would be only a 50 percent admittance rate. On the surface, that can look like both racial and gender discrimination. But it could have been a fluke. Perhaps both of those Hispanic men were athletes and musicians. The following year, the school might reject two different Hispanic male applicants with high test scores but without such impressive extracurriculars. The admissions rate for Hispanic males with high test scores would drop to zero. “You end up with misleading results,” said Matsudaira. 

    Reporting average test scores by race is another big worry. “It feels like a trap to me,” said Matsudaira. “That is mechanically going to give the administration the pretense of claiming that there’s lower standards of admission for Black students relative to white students when you know that’s not at all a correct inference.”

    The statistical issue is that there are more Asian and white students at the very high end of the SAT score distribution, and all those perfect 1600s will pull the average up for these racial groups. (Just as a very tall person will skew the average height of a group.) Even if a college has a high test score threshold that it applies to all racial groups and no one below a 1400 is admitted, the average SAT score for Black students will still be lower than that of white students. (See graphic below.) The only way to avoid this is to admit purely by test score and take only the students with the highest scores. At some highly selective universities, there are enough applicants with a 1600 SAT to fill the entire class. But no institution fills its student body by test scores alone. That could mean overlooking applicants with the potential to be concert pianists, star soccer players or great writers.
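    The effect can be sketched with a short simulation. All numbers here are made up for illustration — the 1400 cutoff, the Gaussian score distributions, and the group means are assumptions, not real admissions data:

    ```python
    import random

    random.seed(42)  # make the illustration reproducible

    # A minimal sketch of the "average score trap": two applicant groups drawn
    # from different (hypothetical) SAT distributions, both admitted under the
    # SAME cutoff. Scores are clamped to the SAT's 400-1600 range.
    def average_admitted_score(mean, sd=150, n=10_000, cutoff=1400):
        scores = [min(1600, max(400, round(random.gauss(mean, sd)))) for _ in range(n)]
        admitted = [s for s in scores if s >= cutoff]
        return sum(admitted) / len(admitted)

    avg_high = average_admitted_score(mean=1250)  # group with more very high scorers
    avg_low = average_admitted_score(mean=1100)   # group with fewer very high scorers

    # Identical cutoff for both groups, yet the averages among admitted
    # students differ, because one pool has more scores far above the cutoff.
    print(round(avg_high), round(avg_low))
    ```

    Even with exactly the same admission standard applied to both pools, the group with more top scorers ends up with a higher average among its admitted students — which is the inference trap the economists describe.
    
    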

    The Average Score Trap

    This graphic by Kirabo Jackson, an economist at Northwestern University, depicts the problem of measuring racial discrimination through average test scores. Even for a university that admits all students above a certain cut score, the average score of one racial group (red) will be higher than the average score of the other group (blue). Source: graphic posted on Bluesky Social by Josh Goodman

    Admissions data is a highly charged political issue. The Biden administration originally spearheaded the collection of college admissions data by race and ethnicity. Democrats wanted to collect this data to show how the nation’s colleges and universities were becoming less diverse with the end of affirmative action. That data collection is slated to start this fall, following a full technical and procedural review. 

    Now the Trump administration is demanding what was already in the works, and adding a host of new data requirements — without following normal processes. And instead of tracking the declining diversity in higher education, Trump wants to use admissions data to threaten colleges and universities. If the new directive produces bad data that is easy to misinterpret, he may get his wish.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about college admissions data was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.

    Source link

  • Special educators are in short supply at all levels. A cohesive fix is needed, experts say.

    Special educators are in short supply at all levels. A cohesive fix is needed, experts say.


    ALEXANDRIA, Va. — To address chronic shortages of special educators and disability experts, leaders in the field are looking at best practices across early childhood, K-12 and postsecondary to focus on the similar challenges all three levels face in attracting, preparing and retaining special education professionals. 

    The cohesive approach to filling shortages of early interventionists, teachers, administrators, paraprofessionals and specialized instructional support personnel — as well as trying to reverse a decline in teacher education enrollment —  reflects a shared mission to support students with disabilities at all age levels, speakers said July 14 at a legislative summit hosted by the Council for Exceptional Children and the Council of Administrators of Special Education. 

    “Schools are facing a significant shortage of qualified special education teachers — a challenge that directly affects the support and outcomes for students with disabilities,” said Kevin Rubenstein, president of CASE.

    Rubenstein added that finding enough teachers to fill staff vacancies “feels like trying to spot a unicorn,” because it’s “rare but magical.”

    At the start of the 2024-25 school year, 74% of both elementary and middle schools reported difficulty filling special education teacher vacancies with fully certified teachers, according to federal data. Early childhood education is also facing challenges in recruiting and retaining early interventionists.

    At the higher education level, enrollment in teacher preparation programs has plummeted by 45% in one decade, according to CASE.

    Supporting special educators

    Developing a comprehensive special educator pipeline can better support teacher prep activities so future educators can eventually help boost outcomes for students at all levels, speakers said.

    According to Amanda Schwartz, associate project director of the Maryland Early EdCorp Apprenticeship Program at the University of Maryland, some solutions to recruiting and retaining early interventionists include: boosting salaries, reducing teacher-student ratios, and training on high standards for early intervention services. 

    Recruiting and retaining qualified early interventionists is critical to children’s development, Schwartz said. “We want our teachers to have all this content in order to be able to deliver appropriate practice in classrooms,” she said. 

    David Krantz, executive director of special education at Michigan’s Saginaw Intermediate School District, said it’s helpful to have robust data that can pinpoint where there are staffing struggles. 

    Krantz then pointed to specific ways districts can attract and retain paraprofessionals who support special educators in the classroom. For starters, he said, paraprofessionals need to know their work matters.

    “If people don’t feel valued in their service, they’re going to leave,” Krantz said.

    The Michigan Association of Administrators of Special Education started a paraeducator learning series in January to provide professional development and other support to paraprofessionals. About 350 paraprofessionals have participated so far, Krantz said.

    In the higher education field, Kyena Cornelius, an education professor at the University of Florida, put it bluntly: “Our supply pipeline is broken.” 

    While alternative pathways to the teaching profession have grown, those programs often don’t provide the depth of training in teaching pedagogy or disability-specific knowledge needed, she said. The alternative pathways, Cornelius added, were never meant to replace traditional teacher preparation programs.

    She highlighted CEC’s professional standards for special educators as a blueprint for the knowledge and skills teachers need so they are ready to serve students with disabilities and stay in the profession. 

    “We need to think about how we can not only attract and retain but how we can comprehensively prepare teachers in an affordable way, how we can make it attractable and get them the skills,” Cornelius said.

    Source link

  • Experts Weigh In on “Everyone” Cheating in College

    Experts Weigh In on “Everyone” Cheating in College

    Is something in the water—or, more appropriately, in the algorithm? Cheating—while nothing new, even in the age of generative artificial intelligence—seems to be having a moment, from the New York magazine article about “everyone” ChatGPTing their way through college to Columbia University suspending a student who created an AI tool to cheat on “everything” and viral faculty social media posts like this one: “I just failed a student for submitting an AI-written research paper, and she sent me an obviously AI-written email apologizing, asking if there is anything she can do to improve her grade. We are through the looking-glass, folks.”

    It’s impossible to get a true read on the situation by virality alone, as the zeitgeist is self-amplifying. Case in point: The suspended Columbia student, Chungin “Roy” Lee, is a main character in the New York magazine piece. Student self-reports of AI use may also be unreliable: According to Educause’s recent Students and Technology Report, some 43 percent of students surveyed said they do not use AI in their coursework; 5 percent said they use AI to generate material that they edit before submitting, and just 1 percent said they submit generated material without editing it.

    There are certainly students who do not use generative AI and students who question faculty use of AI—and myriad ways that students can use generative AI to support their learning and not cheat. But the student data paints a different picture than the one painted by presidents, provosts, deans and other senior leaders in a recent survey by the American Association of Colleges and Universities and Elon University: Some 59 percent said cheating has increased since generative AI tools have become widely available, with 21 percent noting a significant increase—and 54 percent do not think their institution’s faculty are effective in recognizing generative AI–created content.

    In Inside Higher Ed’s 2025 Survey of Campus Chief Technology/Information Officers, released earlier this month, no CTO said that generative AI has proven to be an extreme risk to academic integrity at their institution. But most—three in four—said that it has proven to be a moderate (59 percent) or significant (15 percent) risk. This is the first time the annual survey with Hanover Research asked how concerns about academic integrity have actually borne out: Last year, six in 10 CTOs expressed some degree of concern about the risk generative AI posed to academic integrity.

    Stephen Cicirelli, the lecturer of English at Saint Peter’s University whose “looking glass” post was liked 156,000 times in 24 hours last week, told Inside Higher Ed that cheating has “definitely” gotten more pervasive within the last semester. But whether it’s suddenly gotten worse or has been steadily growing since large language models were introduced to the masses in late 2022, one thing is clear: AI-assisted cheating is a problem, and it won’t get better on its own.

    So what can institutions do about it? Drawing on some additional insights from the CTO survey and advice from other experts, we’ve compiled a list of suggestions below. The expert insights, in particular, are varied. But a unifying theme is that cheating in the age of generative AI is as much a problem requiring intervention as it is a mirror—one reflecting larger challenges and opportunities within higher education.

    (Note: AI detection tools did not make this particular list. Even though they have fans among the faculty, who tend to point out that some tools are more accurate than others, such tools remain polarizing and not entirely foolproof. Similarly, banning generative AI in the classroom did not make the list, though this may still be a widespread practice: 52 percent of students in the Educause survey said that most or all of their instructors prohibit the use of AI.)

    Academic Integrity for Students

    The American Association of Colleges and Universities and Elon University this month released the 2025 Student Guide to Artificial Intelligence under a Creative Commons license. The guide covers AI ethics, academic integrity and AI, career plans for the AI age, and an AI toolbox. It encourages students to use AI responsibly, critically assess its influence and join conversations about its future. The guide’s seven core principles are:

    1. Know and follow your college’s rules
    2. Learn about AI
    3. Do the right thing
    4. Think beyond your major
    5. Commit to lifelong learning
    6. Prioritize privacy and security
    7. Cultivate your human abilities

    Connie Ledoux Book, president of Elon, told Inside Higher Ed that the university sought to make ethics a central part of the student guide, with campus AI integration discussions revealing student support for “open and transparent dialogue about the use of AI.” Students “also bear a great deal of responsibility,” she said. They “told us they don’t like it when their peers use AI to gain unfair advantages on assignments. They want faculty to be crystal clear in their syllabi about when and how AI tools can be used.”

    Now is a “defining moment for higher education leadership—not only to respond to AI, but to shape a future where academic integrity and technological innovation go hand in hand,” Book added. “Institutions must lead with clarity, consistency and care to prepare students for a world where ethical AI use is a professional expectation, not just a classroom rule.”

    Mirror Logic

    Lead from the top on AI. In Inside Higher Ed’s recent survey, just 11 percent of CTOs said their institution has a comprehensive AI strategy, and roughly one in three CTOs (35 percent) at least somewhat agreed that their institution is handling the rise of AI adeptly. The sample size for the survey is 108 CTOs—relatively small—but those who said their institution is handling the rise of AI adeptly were more likely than the group over all to say that senior leaders at their institution are engaged in AI discussions and that effective channels exist between IT and academic affairs for communication on AI policy and other issues (both 92 percent).

    Additionally, CTOs who said that generative AI had proven to be a low to nonexistent risk to academic integrity were more likely to report having some kind of institutionwide policy or policies governing the use of AI than were CTOs who reported a moderate or significant risk (81 percent versus 64 percent, respectively). Leading on AI can mean granting students institutional access to AI tools, the rollout of which often includes larger AI literacy efforts.

    (Re)define cheating. Lee Rainie, director of the Imagining the Digital Future Center at Elon, said, “The first thing to tackle is the very definition of cheating itself. What constitutes legitimate use of AI and what is out of bounds?” In the AAC&U and Elon survey that Rainie co-led, for example, “there was strong evidence that the definitional issues are not entirely resolved,” even among top academic administrators. Leaders didn’t always agree whether hypothetical scenarios described appropriate uses of AI or not: For one example—in which a student used AI to generate a detailed outline for a paper and then used the outline to write the paper—“the verdict was completely split,” Rainie said. Clearly, it’s “a perfect recipe for confusion and miscommunication.”

    Rainie’s additional action items, with implications for all areas of the institution:

    1. Create clear guidelines for appropriate and inappropriate use of AI throughout the university.
    2. Include in the academic code of conduct a “broad statement about the institution’s general position on AI and its place in teaching and learning,” allowing for a “spectrum” of faculty positions on AI.
    3. Promote faculty and student clarity as to the “rules of the road in assignments.”
    4. Establish “protocols of proof” that students can use to demonstrate they did the work.

    Rainie suggested that CTOs, in particular, might be useful regarding this last point, as such proof could include watermarking content, creating NFTs and more.

    Put it in the syllabus! (And in the institutional DNA.) Melik Khoury, president and CEO of Unity Environmental University in Maine, who’s publicly shared his thoughts on “leadership in an intelligent era of AI,” including how he uses generative AI, told Inside Higher Ed that “AI is not cheating. What is cheating is our unwillingness to rethink outdated assessment models while expecting students to operate in a completely transformed world. We are just beginning to tackle that ourselves, and it will take time. But at least we are starting from a position of ‘We need to adapt as an institution,’ and we are hiring learning designers to help our subject matter experts adapt to the future of learning.”

    As for students, Khoury said the university has been explicit “about what AI is capable of and what it doesn’t do as well or as reliably” and encourages them to recognize their “agency and responsibility.” Here’s an excerpt of language that Khoury said appears in every course syllabus:

    • “You are accountable for ensuring the accuracy of factual statements and citations produced by generative AI. Therefore, you should review and verify all such information prior to submitting any assignment.
    • “Remember that many assignments require you to use in-text citations to acknowledge the origin of ideas. It is your responsibility to include these citations and to verify their source and appropriateness.
    • “You are accountable for ensuring that all work submitted is free from plagiarism, including content generated with AI assistance.
    • “Do not list generative AI as a co-author of your work. You alone are responsible.”

    Additional policy language recommends that students:

    • Acknowledge use of generative AI for course submissions.
    • Disclose the full extent of how and where they used generative AI in the assignment.
    • Retain a complete transcript of generative AI usage (including source and date stamp).

    “We assume that students will use AI. We suggest constructive ways they might use it for certain tasks,” Khoury said. “But, significantly, we design tasks that cannot be satisfactorily completed without student engagement beyond producing a response or [just] finding the right answer—something that AI can do for them very easily.”

    “In tandem with a larger cultural shift around our ideas about education, we need major changes to the way we do college.”

    —Emily Pitts Donahoe, associate director of instructional support in the Center for Excellence in Teaching and Learning and lecturer of writing and rhetoric at the University of Mississippi

    Design courses with and for AI. Keith Quesenberry, professor of marketing at Messiah University in Pennsylvania, said he thinks less about cheating, which can create an “adversarial me-versus-them dynamic,” and more about pedagogy. This has meant wrestling with a common criticism of higher education—that it’s not preparing students for the world of work in the age of AI—and the reality that no one’s quite sure what that future will look like. Quesenberry said he ended up spending all of last summer trying to figure out how “a marketer should and shouldn’t use AI,” creating and testing frameworks, ultimately vetting his own courses’ assignments: “I added detailed instructions for how and how not to use AI specifically for that assignment’s tasks or requirements. I also explain why, such as considering whether marketing materials can be copyrighted for your company or client. I give them guidance on how to cite their AI use.” He also created a specialized chat bot to which students can upload approved resources to act as an AI tutor.

    Quesenberry also talks to students about learning with AI “from the perspective of obtaining a job.” That is, students need a foundation of disciplinary knowledge on which to create AI prompts and judge output. And they can’t rely on generative AI to speak or think for them during interviews, networking and with clients.

    There are “a lot of professors quietly working very hard to integrate AI into their courses and programs that benefit their disciplines and students,” he adds. One thing that would help them, in Quesenberry’s view? Faculty institutional access to the most advanced AI tools.

    Give faculty time and training. Tricia Bertram Gallant, director of the academic integrity office and Triton Testing Center at the University of California, San Diego, and co-author of the new book The Opposite of Cheating: Teaching for Integrity in the Age of AI (University of Oklahoma Press), said that cheating is part of human nature—and that faculty need time, training and support to “design educational environments that make cheating the exception and integrity the norm” in this new era of generative AI.

    Faculty “cannot be expected to rebuild the plane while flying it,” she said. “They need course release time to redesign that same course, or they need a summer stipend. They also need the help of those trained in pedagogy, assessment design and instructional design, as most faculty did not receive that training while completing their Ph.D.s.” Gallant also floated the idea of AI fellows, or disciplinary faculty peers who are trained on how to use generative AI in the classroom and then to “share, coach and mentor their peers.”

    Students, meanwhile, need training in AI literacy, “which includes how to determine if they’re using it ethically or unethically. Students are confused, and they’re also facing immense temptations and opportunities to cognitively offload to these tools,” Gallant added.

    Teach first-year students about AI literacy. Chris Ostro, an assistant teaching professor and instructional designer focused on AI at the University of Colorado at Boulder, offers professional development on his “mosaic approach” to writing in the classroom—which includes having students sign a standardized disclosure form about how and where they’ve used AI in their assignments. He told Inside Higher Ed that he’s redesigned his own first-year writing course to address AI literacy, but he is concerned about students across higher education who may never get such explicit instruction. For that reason, he thinks there should be mandatory first-year classes for all students about AI and ethics. “This could also serve as a level-setting opportunity,” he said, referring to “tech gaps,” or the effects of the larger digital divide on incoming students.

    Regarding student readiness, Ostro also said that most of the “unethical” AI use by students is “a form of self-treatment for the huge and pervasive learning deficits many students have from the pandemic.” One student he recently flagged for possible cheating, for example, had largely written an essay on her own but then ran it through a large language model, prompting it to make the paper more polished. This kind of use arguably reflects some students’ lack of confidence in their writing skills, not an outright desire to offload the difficult and necessary work of writing to think critically.

    Think about grading (and why students cheat in the first place). Emily Pitts Donahoe, associate director of instructional support in the Center for Excellence in Teaching and Learning and lecturer of writing and rhetoric at the University of Mississippi, co-wrote an essay two years ago with two students about why students cheat. They said much of it came down to an overemphasis on grades: “Students are more likely to engage in academic dishonesty when their focus, or the perceived focus of the class, is on grading.” The piece proposed the following solutions, inspired by the larger trend of ungrading:

    1. Allow students to reattempt or revise their work.
    2. Refocus on formative feedback to improve rather than summative feedback to evaluate.
    3. Incorporate self-assessment.

    Donahoe said last week, “I stand by every claim that we make in the 2023 piece—and it all feels heightened two years later.” The problems with AI misuse “have become more acute, and between this and the larger sociopolitical climate, instructors are reaching unsustainable levels of burnout. The actions we recommend at the end of the piece remain good starting points, but they are by no means solutions to the big, complex problem we’re facing.”

    Framing cheating as a structural issue, Donahoe said students have been “conditioned to see education as a transaction, a series of tokens to be exchanged for a credential, which can then be exchanged for a high-paying job—in an economy where such jobs are harder and harder to come by.” And it’s hard to fault students for that view, she continued, as they receive little messaging to the contrary.

    Like the problem, the solution set is structural, Donahoe explained: “In tandem with a larger cultural shift around our ideas about education, we need major changes to the way we do college. Smaller class sizes in which students and teachers can form real relationships; more time, training and support for instructors; fundamental changes to how we grade and how we think about grades; more public funding for education so that we can make these things happen.”

    With none of this apparently forthcoming, faculty can at least help reorient students’ ideas about school and try to “harness their motivation to learn.”

    Source link

  • 6 higher education experts reflect on COVID’s sectorwide influence

    6 higher education experts reflect on COVID’s sectorwide influence

    In March 2020, the World Health Organization declared COVID-19 a global pandemic, grinding life to a halt and severely disrupting instruction across higher education. Colleges are still feeling the effects of the virus five years later.

    We asked higher education experts to look back at the changes made and how the pandemic continues to shape the sector today.

    Their written responses are below, lightly edited for brevity and clarity.

    Chief content officer at Coursera

    Marni Baker Stein


    The pandemic made online learning mainstream in ways that were unimaginable in 2019. A global generation of learners who would likely have not experienced the online classroom now understand its potential, pitfalls, and power. While online learning’s ubiquity didn’t last, its impact on student preferences and university strategy remains. For learners, Coursera research shows that a clear majority of students now want their universities to deliver short-form, job-relevant, for-credit content, delivered digitally. Universities have had to respond to remain attractive, with an increase in micro-credential adoption, and further plans to accelerate uptake among university leaders. Without the economic pressures created by the pandemic, and the exposure to online learning it accelerated, both demand and uptake would have been slower and less pronounced than we see today.


    CEO at National Association for College Admission Counseling

    Angel Pérez


    We can’t talk about the impact of the pandemic in isolation — multiple converging factors have created a perfect storm for higher education. During the pandemic, we lost over a million students from the college pipeline — a loss the sector has yet to recover from. That blow, compounded by the ongoing FAFSA crisis, demographic shifts, and rising anti-higher education rhetoric, continues to destabilize institutions. Adding to the strain, executive orders and Dear Colleague letters coming out of Washington, D.C., are making it harder for colleges to move forward. Higher education is not just recovering — it’s fighting to remain relevant, accessible, and resilient.


    Vice president for policy analysis and research at the Western Interstate Commission for Higher Education

    Patrick Lane


    Five years after the pandemic started, data shows that there wasn’t a major impact on high school graduate numbers, though there may be about 1% fewer graduates in the future than previously projected. Whether these students choose to enroll in higher education at the same rates as they did in the past is a different question, as the pandemic itself seems to have made some students less likely to pursue higher education. The bigger impact may come from learning loss and chronic absenteeism in K-12. Students who were in early grades when COVID started are facing uphill battles and probably will not be able to make up that ground by the time they finish high school. Postsecondary education (along with employers) will have to grapple with this challenge — on top of overall changing demographics — for years to come. But there are options, including doubling down on developmental ed redesign, enhanced advising, and simplifying postsecondary pathways (among others).


    Executive director of Commonfund Institute

    George Suttles


    The pandemic forced colleges and universities to rapidly adopt online platforms for teaching and learning. The shift to remote learning has led to the widespread use of fully remote and hybrid models, combining in-person and online education. Relatedly, the pandemic exacerbated existing inequalities amongst student populations across the country. For example, students from low-income backgrounds faced greater challenges due to housing insecurity, lack of internet access, and limited access to technology. As we continue to learn lessons from the pandemic, it will be important to further leverage technology to enhance teaching and learning, while at the same time taking care of students across the socio-economic spectrum, recognizing that the student experience is just a part of their entire lived experience.


    Head of the Department of Educational Leadership and Policy Studies at University of Tennessee, Knoxville

    Robert Kelchen


    A key lesson that higher education leaders remember from the early days of the pandemic is that cash is king. Colleges that had financial flexibility were able to avoid layoffs and budget cuts, while institutions that were unable to access funds had to make painful cuts that permanently scarred their communities. The financial state of American higher education is more uncertain right now than even in the darkest days of March 2020, and colleges are starting to implement cost-cutting measures in order to avoid having to make even more difficult decisions down the road.


    Executive director of WCET

    Van Davis


    Even before the pivot to emergency remote instruction, the number of students enrolled in at least one distance education course was steadily rising. If you look at IPEDS data, that number has only accelerated since the pandemic. Many students, and some faculty, discovered that they liked the flexibility and opportunities that asynchronous distance education affords and have continued to enroll in that course modality. Institutions that offered very little distance education now find themselves responding to student demand and increasing their offerings. For many institutions, distance education is now a strategic part of their course offerings.



  • Parents, Medical Providers, Vaccine Experts Brace for RFK Jr.’s HHS Takeover – The 74




    While Robert F. Kennedy Jr.’s Senate confirmation to head the Department of Health and Human Services was not unexpected, it still shook medical providers, public health experts and parents across the country. 

    Mary Koslap-Petraco, a pediatric nurse practitioner who exclusively treats underserved children, said when she heard the news Thursday morning she was immediately filled with “absolute dread.”


    “I have been following him for years,” she told The 74. “I’ve read what he has written. I’ve heard what he has said. I know he has made a fortune with his anti-vax stance.”

    She is primarily concerned that his rhetoric might “scare the daylights out of people so that they don’t want to vaccinate their children.” She also fears he could move to defund Vaccines for Children, a program under the Centers for Disease Control and Prevention that provides vaccines to kids who lack health insurance or otherwise wouldn’t be able to afford them. While the program is federally mandated by Congress, moves to drain its funding could essentially render it useless.

    Koslap-Petraco’s practice in Massapequa Park, New York, relies heavily on the program to vaccinate pediatric patients, she said. If it were to disappear, she asked, “How am I supposed to take care of poor children? Are they supposed to just die or get sick because their parents don’t have the funds to get the vaccines for them?” 

    And, if the government-run program were to stop paying for vaccines, she said she’s terrified private insurance companies might follow suit. 

    Vaccines for Children is “the backbone of pediatric vaccine infrastructure in the country,” said Richard Hughes IV, former vice president of public policy at Moderna and a George Washington University law professor who teaches a course on vaccine law.

    Kennedy will also have immense power over Medicaid, which covers low-income populations and provides billions of dollars to schools annually for physical, mental and behavioral health services for eligible students.

    Experts expect Kennedy to move to weaken programs at HHS, whether through across-the-board cuts in public health funding that trickle down to immunization programs or through more targeted attacks. If he does, low-income and minority school-aged kids will be disproportionately impacted, Hughes said. 

    “I just absolutely, fundamentally, confidently believe that we will see deaths,” he added.

    Anticipating chaos and instability

    Following a contentious seven hours of grilling across two confirmation hearings, Democratic senators protested Kennedy’s confirmation on the floor late into the night Wednesday. The following morning, all 45 Democrats and both Independents voted in opposition, while all but one Republican — childhood polio survivor Mitch McConnell of Kentucky — lined up behind President Donald Trump’s pick.

    James Hodge, a public health law expert at Arizona State University’s Sandra Day O’Connor College of Law, said that while it was good to see senators across the political spectrum asking tough questions and Kennedy offering up some concessions on vaccine-related policies and initiatives, he’s skeptical these will stick.

    “Whatever you’ve seen him do for the last 25 to 30 years is a much, much greater predictor than what you saw him do during two or three days of Senate confirmation proceedings,” Hodge said. “Ergo, be concerned significantly about the future of vaccines, vaccine exemptions, [and] how we’re going to fund these things.”

    Hodge also said he doesn’t trust how Kennedy will respond to the consequences of a dropoff in childhood vaccines, pointing to the current measles outbreak in West Texas schools.

    “The simple reality is he may plant misinformation or mis-messaging,” he said.

    During his confirmation hearings, Kennedy tried to distance himself from his past anti-vaccination sentiments, stating: “News reports have claimed that I am anti-vaccine or anti-industry. I am neither. I am pro-safety … I believe that vaccines played a critical role in health care. All of my kids are vaccinated.”

    He was confirmed as Linda McMahon, Trump’s nominee to head the Department of Education, was sitting down for her first day of hearings. At one point that morning, McMahon signaled an openness to possibly shifting enforcement of the Individuals with Disabilities Education Act — a federal law dating back to 1975 that mandates a free, appropriate public education for the 7.5 million students with disabilities — to HHS if Trump were to succeed in shutting down the education department.

    This would effectively put IDEA’s $15.4 billion budget under Kennedy’s purview, further linking the education and public health care systems.

    In a post on the social media site BlueSky, Randi Weingarten, president of the American Federation of Teachers, wrote she is “concerned that anyone is willing to move IDEA services for kids with disabilities into HHS, under a secretary who questions science.”

    Keri Rodrigues, president of the National Parents Union and a parent of a child with ADHD and autism, told The 74 the idea was “absolutely absurd” and would cause chaos and instability. 

    Kennedy’s history of falsely asserting a link between childhood vaccines and autism — a disability included under IDEA coverage — is particularly concerning to experts in this light.

    “You obviously have a contingent of kids who are beneficiaries of IDEA that are navigating autism spectrum disorder,” said Hughes. “Could [we] potentially see some sort of policy activity and rhetoric around that? Potentially.”

    Vaccines — and therefore HHS — are inextricably linked to schools. Currently, all 50 states have vaccine requirements for children entering child care and schools. But Kennedy, who now has control of an agency with a $1.7 trillion budget and 90,000 employees spread across 13 agencies, could pull multiple levers to roll back requirements, enforcements and funding, according to The 74’s previous reporting. And Trump has signaled an interest in cutting funding to schools that mandate vaccines.

    “There’s a certain percentage of the population that is focused on removing school entry requirements,” said Northe Saunders, executive director of the pro-vaccine SAFE Communities Coalition. “They are loud, and they are organized and they are well funded by groups just like RFK Jr.’s Children’s Health Defense.”

    Kennedy will also have the ability to influence the makeup of the committees that approve vaccines and add them to the federal vaccine schedule, which state legislators rely on to determine their school policies. Hodge said one of these committees is already being “re-organized and re-thought as we speak.”

    “With him now in place, just expect that committee to start really changing its members, its tone, the demeanor, the forcefulness of which it’s suggesting vaccines,” he added.

    Hughes, the law professor, said he is preparing for mass staffing changes throughout the agency, mirroring what’s already happened across multiple federal departments and agencies in Trump’s first weeks in office. He predicts this will include Kennedy possibly asking for the resignations “of all scientific leaders with HHS.” 

    Kennedy appeared to confirm that he was eyeing staffing cuts Thursday night during an appearance on Fox News’s “The Ingraham Angle.”

    “I have a list in my head … if you’ve been involved in good science, you have got nothing to worry about,” Kennedy said.


