    Publishers Adopt AI Tools to Bolster Research Integrity

    The perennial pressure to publish or perish is as intense as ever for faculty trying to advance their careers in an exceedingly tight academic job market. On top of their teaching loads, faculty are expected to publish—and peer review—research findings, often receiving little to no compensation beyond the prestige and recognition of publishing in top journals.

    Some researchers have argued that such an environment incentivizes scholars to submit questionable work to journals, many of which have well-documented peer-review backlogs and inadequate resources to detect faulty information and academic misconduct. In 2024, more than 4,600 academic papers were retracted or otherwise flagged for review, according to the Retraction Watch database; during a six-week span last fall, one scientific journal published by Springer Nature retracted more than 200 articles.

    But the $19 billion academic publishing industry is increasingly turning to artificial intelligence to speed up production and, advocates say, enhance research quality. Since the start of the year, Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI–powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.

    “These AI tools can help us improve research integrity, quality, accurate citation, our ability to find new insights and connect the dots between new ideas, and ultimately push the human enterprise forward,” Josh Jarrett, senior vice president of AI growth at Wiley, told Inside Higher Ed earlier this month. “AI tools can also be used to generate content and potentially increase research integrity risk. That’s why we’ve invested so much in using these tools to stay ahead of that curve, looking for patterns and identifying things a single reviewer may not catch.”

    However, most scholars aren’t yet using AI for such purposes. A recent survey by Wiley found that while the majority of researchers believe AI skills will be critical within two years, more than 60 percent said a lack of guidelines and training keeps them from using it in their work.

    In response, Wiley released new guidelines last week on “responsible and effective” uses of AI, aimed at deploying the technology to make the publishing process more efficient “while preserving the author’s authentic voice and expertise, maintaining reliable, trusted, and accurate content, safeguarding intellectual property and privacy, and meeting ethics and integrity best practices,” according to a news release.

    Last week, Elsevier also launched ScienceDirect AI, which extracts key findings from millions of peer-reviewed articles and books on ScienceDirect and generates “precise summaries” to alleviate researchers’ challenges of “information overload, a shortage of time and the need for more effective ways to enhance existing knowledge,” according to a news release.

    Both of those announcements followed Springer Nature’s January launch of an in-house AI-powered program designed to help editors and peer reviewers by automating editorial quality checks and alerting editors to potentially unsuitable manuscripts.

    “As the volume of research increases, we are excited to see how we can best use AI to support our authors, editors and peer reviewers, simplifying their ways of working whilst upholding quality,” Harsh Jegadeesan, Springer Nature’s chief publishing officer, said in a news release. “By carefully introducing new ways of checking papers to enhance research integrity and support editorial decision-making we can help speed up everyday tasks for researchers, freeing them up to concentrate on what matters to them—conducting research.”

    ‘Obvious Financial Benefit’

    Academic publishing experts see both advantages and downsides to involving AI in the notoriously slow peer-review process, which is plagued by a deficit of qualified reviewers willing and able to offer their unpaid labor to highly profitable publishers.

    If use of AI assistants becomes the norm for peer reviewers, “the volume problem would be immediately gone from the industry” while creating an “obvious financial benefit” for the publishing industry, said Sven Fund, managing director of the peer-review-expert network Reviewer Credits.

    But the implications AI has for research quality are more nuanced, especially as scientific research has become a target for conservative politicians, and AI models could be used, and may already be in use, to target terms or research that lawmakers don’t like.

    “There are parts of peer review where a machine is definitely better than a human brain,” Fund said, pointing to low-intensity tasks such as translations, checking references and offering authors more thorough feedback as examples. “My concern would be that researchers writing and researching on whatever they want is getting limited by people reviewing material with the help of technical agents … That can become an element of censorship.”

    Aashi Chaturvedi, program officer for ethics and integrity at the American Society for Microbiology, said one of her biggest concerns about the introduction of AI into peer review and other aspects of the publishing process is maintaining human oversight.

    “Just as a machine might produce a perfectly uniform pie that lacks the soul of a handmade creation, AI reviews can appear wholesome but fail to capture the depth and novelty of the research,” she wrote in a recent article for ASM, which has developed its own generative AI guidelines for the numerous scientific journals it publishes. “In the end, while automation can enhance efficiency, it cannot replicate the artistry and intuition that come from years of dedicated practice.”

    But that doesn’t mean AI has no place in peer review, said Chaturvedi, who noted in a recent interview that she “felt extra pressure to make sure that everything the author was reporting sounds doable” during her 17 years as an academic peer reviewer in the pre-AI era. As the pace and complexity of scientific discovery keep accelerating, she said, AI can help alleviate some of the burden on both reviewers and the publishers “handling a large volume of submissions.”

    Chaturvedi cautioned, however, that introducing such technology across the academic publishing process should be transparent and come only after “rigorous” testing.

    “The large language models are only as good as the information you give them,” she said. “We are at a pivotal moment where AI can greatly enhance workflows, but you need careful and strategic planning … That’s the only way to get more successful and sustainable outcomes.”

    Not Equipped to Ensure Quality?

    Ivan Oransky, a medical researcher and co-founder of Retraction Watch, said, “Anything that can be done to filter out the junk that’s currently polluting the scientific literature is a good thing,” and “whether AI can do that effectively is a reasonable question.”

    But beyond that, the publishing industry’s embrace of AI in the name of improving research quality and clearing up peer-review backlogs points to a bigger problem that predates the rise of powerful generative AI models.

    “The fact that publishers are now trumpeting the fact that they both are and need to be—according to them—using AI to fight paper mills and other bad actors is a bit of an admission they hadn’t been willing to make until recently: Their systems are not actually equipped to ensure quality,” Oransky said.

    “This is just more evidence that people are trying to shove far too much through the peer-review system,” he added. “That wouldn’t be a problem except for the fact that everybody’s either directly—or implicitly—encouraging terrible publish-or-perish incentives.”