Tag: Integrity

  • How prize named in honour of Tracey Bretag shows academic integrity is changing

    • This HEPI blog was authored by Isabelle Bristow, Managing Director UK and Europe at Studiosity. Studiosity provides AI-for-learning (not correction) tools that aim to scale student success, empower educators, and improve retention with a proven 4.4x ROI, while ensuring integrity and reducing institutional risk.

    In September 2020, Studiosity launched the Professor Tracey Bretag Prize for Academic Integrity – an annual commitment to those who are advancing the understanding and implementation of academic integrity in the higher education sector, in honour of Tracey’s work as a researcher in the field of educational integrity.

    Tracey was one of the world’s leading experts on academic integrity, founding the International Journal for Educational Integrity and serving as Editor-in-Chief of the Handbook of Academic Integrity. She spoke widely and publicly on the importance of universities taking a strong stand on educating their students about academic integrity and enforcing the rules with vigour and strong sanctions.

    Tracey also came to work alongside the team at Studiosity, providing advice, guidance, and sharing her research at events. When asked for her permission to create an annual Academic Integrity award named in her honour, this was Tracey’s response:

    I am so deeply honoured by your suggestion that I am almost speechless. Thank you so much for coming up with such a fabulous idea, and especially for putting it in my name. … Thank you again for this incredible recognition of my very small contribution to the field of academic integrity. As I work hard every day to try to demonstrate the type of bravery I’ve always advocated, this certainly gives me a great deal of comfort.

    Tracey passed away prematurely on 7 October 2020. In February 2021, she was honoured posthumously with a Career Achievement Award from the Australian Awards for University Teaching.

    Entrants over time – a five-year overview

    Looking at the Award’s previous entries, we can see a clear shift in how institutions approach educational integrity:

    • from a more broad-based education about what constitutes misconduct in 2020;
    • towards more specialised training of large student groups;
    • to a significant pivot in 2023 towards integrity projects that address the challenge of AI – specifically led by assessment redesign and the use of whole-institution frameworks.

    Who and where integrity nominations come from has also changed over time – there are more dedicated institutional units for managing educational integrity in 2025 than we saw in 2020-2021.

    Tracey earned a great deal of respect globally for her evidence-based, systemic, and students-first approaches to educational integrity. It is fitting that these approaches are gaining interest and momentum in higher education at this moment. We look forward to seeing another year of evidence-based nominations, and thank our Academic Advisory Board for their time and energy once again in judging.

    Feeling inspired?

    As senior leadership look for ways to ethically embed generative AI within their institutions, academic integrity – the original owner of the AI acronym – is paramount. And so for this year’s prize submissions, the expectation is that the 2025 shortlist will acknowledge gen-AI as part of the challenge, show evidence of impact, and help answer the question: How can the sector keep educational integrity, humanity, and learning at the heart of the student experience?

    Last year, the University of Greenwich won the UK prize for their initiative ‘Integrity Matters: Nurturing a culture of integrity through situational learning and play’. Staff there designed an interactive e-learning module (available to all education institutions under licence) to raise awareness of academic integrity. You can learn more here.

    Sharon Perera, Head of Academic and Digital Skills, who led the initiative, said:

    We are thrilled to have been awarded the Tracey Bretag prize for advancing best practice and the impact of academic integrity in higher education. Thank you Studiosity for championing this in the sector.

    At the University of Greenwich our goal is to raise awareness of the academic conventions in research and writing and to create a culture of integrity. We are doing this through our student communities – by sharing best practice and learning about the challenges we face in the GenAI era.

    Academic integrity is at greater risk than ever in the age we live in, and we need to work together to celebrate integrity and authenticity.

    While sharing your initiative is both a contribution to the sector and a personal recognition of your tireless efforts to protect and nurture academic integrity, the prize also comprises a financial reward! You can enter this year’s prize here – nominations close 30 May. Evidence might be at the level of policy, implementation, measured student or staff participation, and/or other evidence of behaviour.

  • Publishers Adopt AI Tools to Bolster Research Integrity

    The perennial pressure to publish or perish is as intense as ever for faculty trying to advance their careers in an exceedingly tight academic job market. On top of their teaching loads, faculty are expected to publish—and peer review—research findings, often receiving little to no compensation beyond the prestige and recognition of publishing in top journals.

    Some researchers have argued that such an environment incentivizes scholars to submit questionable work to journals—many have well-documented peer-review backlogs and inadequate resources to detect faulty information and academic misconduct. In 2024, more than 4,600 academic papers were retracted or otherwise flagged for review, according to the Retraction Watch database; during a six-week span last fall, one scientific journal published by Springer Nature retracted more than 200 articles.

    But the $19 billion academic publishing industry is increasingly turning to artificial intelligence to speed up production and, advocates say, enhance research quality. Since the start of the year, Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI–powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.

    “These AI tools can help us improve research integrity, quality, accurate citation, our ability to find new insights and connect the dots between new ideas, and ultimately push the human enterprise forward,” Josh Jarrett, senior vice president of AI growth at Wiley, told Inside Higher Ed earlier this month. “AI tools can also be used to generate content and potentially increase research integrity risk. That’s why we’ve invested so much in using these tools to stay ahead of that curve, looking for patterns and identifying things a single reviewer may not catch.”

    However, most scholars aren’t yet using AI for such a purpose. A recent survey by Wiley found that while the majority of researchers believe AI skills will be critical within two years, more than 60 percent said lack of guidelines and training keep them from using it in their work.

    In response, Wiley released new guidelines last week on “responsible and effective” uses of AI, aimed at deploying the technology to make the publishing process more efficient “while preserving the author’s authentic voice and expertise, maintaining reliable, trusted, and accurate content, safeguarding intellectual property and privacy, and meeting ethics and integrity best practices,” according to a news release.

    Last week, Elsevier also launched ScienceDirect AI, which extracts key findings from millions of peer-reviewed articles and books on ScienceDirect and generates “precise summaries” to alleviate researchers’ challenges of “information overload, a shortage of time and the need for more effective ways to enhance existing knowledge,” according to a news release.

    Both of those announcements followed Springer Nature’s January launch of an in-house AI-powered program designed to help editors and peer reviewers by automating editorial quality checks and alerting editors to potentially unsuitable manuscripts.

    “As the volume of research increases, we are excited to see how we can best use AI to support our authors, editors and peer reviewers, simplifying their ways of working whilst upholding quality,” Harsh Jegadeesan, Springer’s chief publishing officer, said in a news release. “By carefully introducing new ways of checking papers to enhance research integrity and support editorial decision-making we can help speed up everyday tasks for researchers, freeing them up to concentrate on what matters to them—conducting research.”

    ‘Obvious Financial Benefit’

    Academic publishing experts believe there are both advantages and downsides to involving AI in the notoriously slow peer-review process, which is plagued by a deficit of qualified reviewers willing and able to offer their unpaid labor to highly profitable publishers.

    If use of AI assistants becomes the norm for peer reviewers, “the volume problem would be immediately gone from the industry” while creating an “obvious financial benefit” for the publishing industry, said Sven Fund, managing director of the peer-review-expert network Reviewer Credits.

    But the implications AI has for research quality are more nuanced, especially as scientific research has become a target for conservative politicians, and AI models could be—and may already be—used to flag terms or research that lawmakers don’t like.

    “There are parts of peer review where a machine is definitely better than a human brain,” Fund said, pointing to low-intensity tasks such as translations, checking references and offering authors more thorough feedback as examples. “My concern would be that researchers writing and researching on whatever they want is getting limited by people reviewing material with the help of technical agents … That can become an element of censorship.”

    Aashi Chaturvedi, program officer for ethics and integrity at the American Society for Microbiology, said one of her biggest concerns about the introduction of AI into peer review and other aspects of the publishing process is maintaining human oversight.

    “Just as a machine might produce a perfectly uniform pie that lacks the soul of a handmade creation, AI reviews can appear wholesome but fail to capture the depth and novelty of the research,” she wrote in a recent article for ASM, which has developed its own generative AI guidelines for the numerous scientific journals it publishes. “In the end, while automation can enhance efficiency, it cannot replicate the artistry and intuition that come from years of dedicated practice.”

    But that doesn’t mean AI has no place in peer review, said Chaturvedi, who said in a recent interview that she “felt extra pressure to make sure that everything the author was reporting sounds doable” during her 17 years working as an academic peer reviewer in the pre-AI era. As the pace and complexity of scientific discovery keeps accelerating, she said AI can help alleviate some burden on both reviewers and the publishers “handling a large volume of submissions.”

    Chaturvedi cautioned, however, that introducing such technology across the academic publishing process should be transparent and come only after “rigorous” testing.

    “The large language models are only as good as the information you give them,” she said. “We are at a pivotal moment where AI can greatly enhance workflows, but you need careful and strategic planning … That’s the only way to get more successful and sustainable outcomes.”

    Not Equipped to Ensure Quality?

    Ivan Oransky, a medical researcher and co-founder of Retraction Watch, said, “Anything that can be done to filter out the junk that’s currently polluting the scientific literature is a good thing,” and “whether AI can do that effectively is a reasonable question.”

    But beyond that, the publishing industry’s embrace of AI in the name of improving research quality and clearing up peer-review backlogs belies a bigger problem predating the rise of powerful generative AI models.

    “The fact that publishers are now trumpeting the fact that they both are and need to be—according to them—using AI to fight paper mills and other bad actors is a bit of an admission they hadn’t been willing to make until recently: Their systems are not actually equipped to ensure quality,” Oransky said.

    “This is just more evidence that people are trying to shove far too much through the peer-review system,” he added. “That wouldn’t be a problem except for the fact that everybody’s either directly—or implicitly—encouraging terrible publish-or-perish incentives.”

  • National Advisory Committee on Institutional Quality and Integrity Meets February 19-20. (US Department of Education)

    Education Department

    Hearings, Meetings, Proceedings, etc.:

    National Advisory Committee on Institutional Quality and Integrity

    FR Document: 2025-01459
    Citation: 90 FR 7677, pages 7677-7679 (3 pages)
    Abstract: This notice sets forth the agenda, time, and instructions to access or participate in the February 19-20, 2025 meeting of NACIQI, and provides information to members of the public regarding the meeting, including how to request to make written or oral comments. Committee members will meet in person while accrediting agency representatives and public attendees will participate virtually.

  • How Students Can Use AI Without Violating Academic Integrity – Sovorel

    For all of us in academia now working to develop AI literacy in ourselves, so that we can in turn develop AI literacy in our students, we must ask how we have directly prepared students to use AI ethically without violating academic integrity. We must ensure that we are taking all the necessary steps to set students up for success in multiple ways: freshman orientation, school assemblies, posters, class discussions and activities, and more, all to help students understand that there are different ways of using AI and that its use is appropriate at some times and not at others.

    The associated infographic has been designed to help students understand how to use AI properly, specifically in ways that will not violate academic integrity. For a full and detailed explanation of this infographic, please check out the associated video:

    AI literacy is a necessity now. Students are already using AI. We in academia must ensure that they know not only how to use AI, but how to use it effectively and ethically. Please use this infographic and share it with as many students and academics as possible so that we can help as many students as possible.

    What are your thoughts? What would you add to help students even more?

  • How ChatGPT Can Help Prevent Violations of Academic Integrity – Sovorel

    A full article (including a video) describing each aspect of how ChatGPT can help with preventing violations of academic integrity (cheating) is provided in an article I wrote located here: https://brentaanders.medium.com/how-chatgpt-can-help-prevent-violations-of-academic-integrity-99ada37b52dd

    What are your thoughts on this or other aspects of ChatGPT and other AI in education? Leave a comment below.
