Tag: Bolster

  • Institutions, Over Peers and Tribes, Bolster Indigenous Student Belonging


    A new study from the American Indian College Fund and National Native Scholarship Providers found that Indigenous students report a stronger sense of belonging on campus when their college provides “perceptions of a sense of acceptance, inclusion and identity.”

The study's authors call this “institutional support,” and it is the strongest predictor of belonging, followed by peer support, campus climate and tribal support, the study showed.

    The “Power in Culture Report,” released Wednesday, examined Indigenous students’ sense of belonging at the institutional and state level. NNSP surveyed more than 560 students enrolled at 184 institutions across multiple sectors, including tribal colleges and universities, predominantly white institutions, Hispanic-serving institutions, and other minority-serving institutions. The survey was conducted between March and April of 2024. 

    Unsurprisingly, tribal colleges foster a greater sense of institutional belonging among Indigenous students than other institution types. At nontribal institutions, Indigenous students must create belonging via “informal networks and cultural resilience amid institutional neglect or performative inclusion.” Indigenous students at nontribal campuses also report experiencing more microaggressions and cultural isolation. Students at institutions with larger populations of Indigenous students report a 14 percent higher sense of belonging than those at schools with fewer Native peers. 

    When looking at Indigenous student belonging at the state level, students attending college in states with larger tribal populations actually report a lower sense of belonging and say they feel less supported than students in states with smaller tribal populations, “suggesting that population size alone does not equate to meaningful support,” the study noted. Students in states with a tribal college or university reported an 18 percent lower sense of belonging than students in states without a tribal institution. 

At all institution types, students living off campus reported a 16 percent higher sense of belonging than those living on campus.

    The report includes several policy recommendations to bolster Indigenous student belonging, including recruiting Indigenous faculty and staff, funding Native language revitalization courses, and establishing meaningful relationships with local tribal nations.

  • Education Department uses Skrmetti case to bolster Title IX policy

Just a week after the U.S. Supreme Court upheld a state ban on gender-affirming medical care for transgender minors in June, the U.S. Department of Education began citing that decision in findings related to transgender access to athletics.

Although the high court’s ruling in United States v. Skrmetti did not directly involve education civil rights law, the Trump administration has relied on it to bolster its stance that Title IX can be used to exclude transgender students from teams aligning with their gender identities.

    The Supreme Court’s decision said a person’s identification as “transgender” is distinct from their “biological sex.” However, it did not touch on whether discrimination against transgender people amounts to sex-based discrimination.

But the Education Department’s Office for Civil Rights is using the decision to inform Title IX cases that exclude transgender students from protections against sex-based discrimination. The decision’s use in OCR policy is leading to double-takes from Title IX experts, although one said district leaders may not have to change anything for now, since the Supreme Court has placed a transgender athletics case on its docket for the next term.

    The Trump administration has cited the Skrmetti case in at least two OCR cases related to transgender access to athletics. 

    In a June 25 press release, OCR cited the case in its finding that the California Department of Education and California Interscholastic Federation violated Title IX by discriminating against girls and women after the state allowed transgender students to play on girls’ sports teams.

    “On June 18, 2025, the Supreme Court upheld a Tennessee law banning certain medical care for minors related to treating ‘gender dysphoria, gender identity disorder, or gender incongruence,’” OCR said in its news release. “In so holding, the Supreme Court acknowledged that a person’s identification as ‘transgender’ is distinct from a person’s ‘biological sex.’” 

    The department also cited the case in its July 27 finding that five large Northern Virginia school districts, including Fairfax County Public Schools, discriminated on the basis of sex when they allowed transgender students to access facilities aligning with their gender identities.

    “There has been a little bit of a selective stretching,” said Kayleigh Baker, an advisory board member for the Association of Title IX Administrators. Baker and other ATIXA attorneys routinely work with school districts to train them on education civil rights laws. 

    “The four corners of the Supreme Court opinions have sort of been extrapolated and sort of merged together with this administration’s interpretation in a couple of arenas. And it seems like this is another one of those,” Baker said. 

    Jay Worona, partner at law firm Jaspan Schlesinger Narendran, said the Education Department did something similar with the Supreme Court’s 2023 SFFA v. Harvard decision banning race-conscious admissions. 

    Worona said in an email that the administration has used the case to argue that “K-12 school districts violate civil rights protections of students when they enact policies and engage in practices advancing DEI [diversity, equity and inclusion] despite the Supreme Court’s decision in that case only applying to higher education institutions.” 

    In February, the agency issued a Dear Colleague letter to prohibit the consideration of race in many more aspects of educational programming, including “financial aid, scholarships, prizes, administrative support, discipline, housing, graduation ceremonies, and all other aspects of student, academic, and campus life.” 

    “Although SFFA addressed admissions decisions, the Supreme Court’s holding applies more broadly,” the Education Department said in its letter to districts. “At its core, the test is simple: If an educational institution treats a person of one race differently than it treats another person because of that person’s race, the educational institution violates the law.” 

  • Publishers Adopt AI Tools to Bolster Research Integrity

The perennial pressure to publish or perish is as intense as ever for faculty trying to advance their careers in an exceedingly tight academic job market. On top of their teaching loads, faculty are expected to publish—and peer review—research findings, often receiving little to no compensation beyond the prestige and recognition of publishing in top journals.

    Some researchers have argued that such an environment incentivizes scholars to submit questionable work to journals—many have well-documented peer-review backlogs and inadequate resources to detect faulty information and academic misconduct. In 2024, more than 4,600 academic papers were retracted or otherwise flagged for review, according to the Retraction Watch database; during a six-week span last fall, one scientific journal published by Springer Nature retracted more than 200 articles.

    But the $19 billion academic publishing industry is increasingly turning to artificial intelligence to speed up production and, advocates say, enhance research quality. Since the start of the year, Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI–powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.

    “These AI tools can help us improve research integrity, quality, accurate citation, our ability to find new insights and connect the dots between new ideas, and ultimately push the human enterprise forward,” Josh Jarrett, senior vice president of AI growth at Wiley, told Inside Higher Ed earlier this month. “AI tools can also be used to generate content and potentially increase research integrity risk. That’s why we’ve invested so much in using these tools to stay ahead of that curve, looking for patterns and identifying things a single reviewer may not catch.”

However, most scholars aren’t yet using AI for such purposes. A recent survey by Wiley found that while the majority of researchers believe AI skills will be critical within two years, more than 60 percent said a lack of guidelines and training keeps them from using it in their work.

    In response, Wiley released new guidelines last week on “responsible and effective” uses of AI, aimed at deploying the technology to make the publishing process more efficient “while preserving the author’s authentic voice and expertise, maintaining reliable, trusted, and accurate content, safeguarding intellectual property and privacy, and meeting ethics and integrity best practices,” according to a news release.

    Last week, Elsevier also launched ScienceDirect AI, which extracts key findings from millions of peer-reviewed articles and books on ScienceDirect and generates “precise summaries” to alleviate researchers’ challenges of “information overload, a shortage of time and the need for more effective ways to enhance existing knowledge,” according to a news release.

    Both of those announcements followed Springer Nature’s January launch of an in-house AI-powered program designed to help editors and peer reviewers by automating editorial quality checks and alerting editors to potentially unsuitable manuscripts.

“As the volume of research increases, we are excited to see how we can best use AI to support our authors, editors and peer reviewers, simplifying their ways of working whilst upholding quality,” Harsh Jegadeesan, Springer Nature’s chief publishing officer, said in a news release. “By carefully introducing new ways of checking papers to enhance research integrity and support editorial decision-making we can help speed up everyday tasks for researchers, freeing them up to concentrate on what matters to them—conducting research.”

    ‘Obvious Financial Benefit’

Academic publishing experts see both advantages and downsides to involving AI in the notoriously slow peer-review process, which is plagued by a deficit of qualified reviewers willing and able to offer their unpaid labor to highly profitable publishers.

    If use of AI assistants becomes the norm for peer reviewers, “the volume problem would be immediately gone from the industry” while creating an “obvious financial benefit” for the publishing industry, said Sven Fund, managing director of the peer-review-expert network Reviewer Credits.

But the implications AI has for research quality are more nuanced, especially as scientific research has become a target for conservative politicians and AI models could be used, and may already be in use, to flag terms or research that lawmakers don’t like.

    “There are parts of peer review where a machine is definitely better than a human brain,” Fund said, pointing to low-intensity tasks such as translations, checking references and offering authors more thorough feedback as examples. “My concern would be that researchers writing and researching on whatever they want is getting limited by people reviewing material with the help of technical agents … That can become an element of censorship.”

    Aashi Chaturvedi, program officer for ethics and integrity at the American Society for Microbiology, said one of her biggest concerns about the introduction of AI into peer review and other aspects of the publishing process is maintaining human oversight.

    “Just as a machine might produce a perfectly uniform pie that lacks the soul of a handmade creation, AI reviews can appear wholesome but fail to capture the depth and novelty of the research,” she wrote in a recent article for ASM, which has developed its own generative AI guidelines for the numerous scientific journals it publishes. “In the end, while automation can enhance efficiency, it cannot replicate the artistry and intuition that come from years of dedicated practice.”

But that doesn’t mean AI has no place in peer review, said Chaturvedi, who noted in a recent interview that she “felt extra pressure to make sure that everything the author was reporting sounds doable” during her 17 years working as an academic peer reviewer in the pre-AI era. As the pace and complexity of scientific discovery keep accelerating, she said, AI can help alleviate some of the burden on both reviewers and the publishers “handling a large volume of submissions.”

    Chaturvedi cautioned, however, that introducing such technology across the academic publishing process should be transparent and come only after “rigorous” testing.

    “The large language models are only as good as the information you give them,” she said. “We are at a pivotal moment where AI can greatly enhance workflows, but you need careful and strategic planning … That’s the only way to get more successful and sustainable outcomes.”

    Not Equipped to Ensure Quality?

    Ivan Oransky, a medical researcher and co-founder of Retraction Watch, said, “Anything that can be done to filter out the junk that’s currently polluting the scientific literature is a good thing,” and “whether AI can do that effectively is a reasonable question.”

But beyond that, the publishing industry’s embrace of AI in the name of improving research quality and clearing up peer-review backlogs points to a bigger problem predating the rise of powerful generative AI models.

    “The fact that publishers are now trumpeting the fact that they both are and need to be—according to them—using AI to fight paper mills and other bad actors is a bit of an admission they hadn’t been willing to make until recently: Their systems are not actually equipped to ensure quality,” Oransky said.

    “This is just more evidence that people are trying to shove far too much through the peer-review system,” he added. “That wouldn’t be a problem except for the fact that everybody’s either directly—or implicitly—encouraging terrible publish-or-perish incentives.”