Tag: Tools

  • States have the tools to improve literacy — now they need to use them

    Bob Wise is a former governor of West Virginia and former member of the U.S. House of Representatives. Javaid Siddiqi is president and CEO of the Hunt Institute and a former Virginia secretary of education.

    Believe it or not, there is a clear path forward for literacy in the United States.

We wouldn’t blame you for thinking otherwise, given the grim outlook from the media, the federal government, and this year’s alarming results from The Nation’s Report Card. (In case you missed it — more than half of 4th and 8th graders are reading below proficiency. These are real children with futures at stake.)

While the National Assessment of Educational Progress and other national measuring sticks sit on the chopping block under the current presidential administration, the distressing trends underscored by this year’s results spell disaster should we continue with current practices.

    But here’s a fresh perspective: Most states have already forged the tools to turn poor literacy performance into meaningful progress.

    In this polarized landscape, state policy and education leaders should take heart that education is a nonpartisan issue for much of the public. As a recent Hunt Institute survey showed, 89% of parents and voters across parties favor implementing evidence-based literacy programs in classrooms to improve student reading levels.

    According to 2024 data, 40 states and the District of Columbia have adopted major policy measures requiring these practices be incorporated at every stage of literacy education. The fact that so many red and blue states have recently adopted significant — and similar — legislative literacy initiatives shows widespread support.

    Passing laws is only part of the solution; effectively implementing them is now the major challenge.

[Chart: 4th grade reading and math NAEP scores, 2000-2024. States are adopting major literacy policy measures to improve student reading levels.]

To that end, education leaders must implement the necessary pedagogy and maintain the data needed to track improvement over the critical years ahead.

The most effective response to NAEP from state education leaders would be to launch an intentional period of implementing what has been enacted into law. We’re seeing this in some states, but others, like Delaware and Iowa, have room for further action.

    For state policymakers, the most meaningful implementation actions include:

    Signaling commitment through legislative oversight

    Legislators can drive improvement by exercising their traditional oversight role — holding hearings, reviewing implementation data, and identifying barriers. Education and finance committees should assess progress, address funding needs, and push for cross-agency collaboration. Few things prompt action from an agency like an invitation to testify at an oversight hearing.

    Ensuring statewide early literacy screening

    States should implement comprehensive early literacy screening strategies for pre-K through 3rd grade. Most states already require some form of screening to identify potential reading strengths and weaknesses. Tailoring instruction to each child’s individual needs and strengths is the critical first step in developing a personalized learning plan.

    Elevating teacher preparation

    Investing in the support, oversight, and improvement of teacher preparation programs is essential — how educators teach reading directly impacts student outcomes. Most states have mandated the removal of outdated curricula in favor of evidence-based practices rooted in the science of reading. 

    Through initiatives like The Hunt Institute’s Path Forward, states like Alabama are working to enhance teacher literacy training in partnership with educator preparation programs. While some institutions have made significant progress, many still need to align with best practices.

    Supporting district leadership

    Encourage and support school district leadership to adopt the necessary changes in policy and practice in every school and classroom. From tiny Buttonwillow in California to Birmingham Public Schools in Alabama, central office buy-in determines whether teaching practices evolve and reading performance improves.

    Significant change does not come on the cheap. States and districts must be prepared to provide funding for the necessary training for reading coaches, other forms of professional development, and relevant curriculum. Spend now to build strong readers, or pay more later in remediation.

    Source link

  • Half of Colleges Don’t Grant Students Access to Gen AI Tools

    Transformative. Disruptive. Game-changing. That’s how many experts continue to refer, without hyperbole, to generative AI’s impact on higher education. Yet more than two years after generative AI went mainstream, half of chief technology officers report that their college or university isn’t granting students institutional access to generative AI tools, which are often gratis and more sophisticated and secure than what’s otherwise available to students. That’s according to Inside Higher Ed’s forthcoming annual Survey of Campus Chief Technology/Information Officers with Hanover Research.

    There remains some significant—and important—skepticism in academe about generative AI’s potential for pedagogical (and societal) good. But with a growing number of institutions launching key AI initiatives underpinned by student access to generative AI tools, and increasing student and employer expectations around AI literacy, student generative AI access has mounting implications for digital equity and workforce readiness. And according to Inside Higher Ed’s survey, cost is the No. 1 barrier to granting access, ahead of lack of need and even ethical concerns.

    Ravi Pendse, who reviewed the findings for Inside Higher Ed and serves as vice president for information technology and chief information officer at the University of Michigan, a leader in granting students access to generative AI tools, wasn’t surprised by the results. But he noted that AI prompting costs, typically measured in units called tokens, have fallen sharply over time. Generative AI models, including open-source large language models, have proliferated over the same period, meaning that institutions have increasing—and increasingly less expensive—options for providing students access to tools.

    ‘Paralyzed’ by Costs

    “Sometimes we get paralyzed by, ‘I don’t have resources, or there’s no way I can do this,’ and that’s where people need to just lean in,” Pendse said. “I want to implore all leaders and colleagues to step up and focus on what’s possible, and let human creativity get us there.”

    According to the survey—which asked 108 CTOs at two- and four-year colleges, public and private nonprofit, much more about AI, digital transformation, online learning and other key topics—institutional approaches to student generative AI access vary. (The full survey findings will be released next month.)

    Some 27 percent of CTOs said their college or university offers students generative AI access through an institutionwide license, with CTOs at public nonprofit institutions especially likely to say this. Another 13 percent of all CTOs reported student access to generative AI tools is limited to specific programs or departments, with this subgroup made up entirely of private nonprofit CTOs. And 5 percent of the sample reported that students at their institution have access to a custom-built generative AI tool.

    Among community college CTOs specifically (n=22), 36 percent said that students have access to generative AI tools, all through an institutionwide license.

    Roughly half of institutions represented do not offer student access to generative AI tools. Some 36 percent of CTOs reported that their college doesn’t offer access but is considering doing so, while 15 percent said that their institution doesn’t offer access and is not considering it.

    Of those CTOs who reported some kind of student access to generative AI and answered a corresponding question about how they pay for it (n=45), half said associated costs are covered by their central IT budget; most of these are public institution CTOs. Another quarter said there are no associated costs. Most of the rest of this group indicated that funding comes from individual departments. Almost no one said costs are passed on to students, such as through fees.

    Among CTOs from institutions that don’t provide student access who responded to a corresponding question about why not (n=51), the top-cited barrier from a list of possibilities was costs. Ethical concerns, such as those around potential misuse and academic integrity, factored in, as well, followed by concerns about data privacy and/or security. Fewer said there is no need or insufficient technical expertise to manage implementation.

    “I very, very strongly feel that every student that graduates from any institution of higher education must have at least one core course in AI, or significant exposure to these tools. And if we’re not doing that, I believe that we are doing a disservice to our students,” Pendse said. “As a nation we need to be prepared, which means we as educators have a responsibility. We need to step up and not get bogged down by cost, because there are always solutions available. Michigan welcomes the opportunity to partner with any institution out there and provide them guidance, all our lessons learned.”

    The Case for Institutional Access

    But do students really need their institutions to provide access to generative AI tools, given that rapid advances in AI technology also have led to fewer limitations on free, individual-level access to products such as ChatGPT, which many students have and can continue to use on their own?

    Experts such as Sidney Fernandes, vice president and CIO of the University of South Florida, which offers all students, faculty and staff access to Microsoft Copilot, say yes. One reason: privacy and security concerns. USF users of Copilot Chat use the tool in a secure, encrypted environment to maintain data privacy. And the data users share within USF’s Copilot enterprise functions—which support workflows and innovation—also remains within the institution and is not used to train AI models.

    There’s no guarantee, of course, that students with secure, institutional generative AI accounts will use only them. But at USF and beyond, account rollouts are typically accompanied by basic training efforts—another plus for AI literacy and engagement.

“When we offer guidance on how to use the profiles, we’ve said, ‘If you’re using the commercially available chat bots, those are the equivalent of being on social media. Anything you post there could be used for whatever reason, so be very careful,’” Fernandes told Inside Higher Ed.

    In Inside Higher Ed’s survey, CTOs who reported student access to generative AI tools by some means were no more likely than the group over all to feel highly confident in their institution’s cybersecurity practices—although CTOs as a group may have reason to worry about students and cybersecurity generally: Just 26 percent reported their institution requires student training in cybersecurity.

    Colleges can also grant students access to tools that are much more powerful than freely available and otherwise prompt-limited chat bots, as well as tools that are more integrated into other university platforms and resources. Michigan, for instance, offers students access to an AI assistant and another conversational AI tool, plus a separate tool that can be trained on a custom dataset. Access to a more advanced and flexible tool kit for those who require full control over their AI environments and models is available by request.

    Responsive AI and the Role of Big Tech

    Another reason for institutions to lead on student access to generative AI tools is cultural responsiveness, as AI tools reflect the data they’re trained on, and human biases often are baked into that data. Muhsinah Morris, director of Metaverse programs at Morehouse College, which has various culturally responsive AI initiatives—such as those involving AI tutors that look like professors—said it “makes a lot of sense to not put your eggs in one basket and say that basket is going to be the one that you carry … But at the end of the day, it’s all about student wellness, 24-7, personalized support, making sure that students feel seen and heard in this landscape and developing skills in real time that are going to make them better.”

    The stakes of generative AI in education, for digital equity and beyond, also implicate big tech companies whose generative AI models and bottom lines benefit from the knowledge flowing from colleges and universities. Big tech could therefore be doing much more to partner on free generative AI access with colleges and universities, and not just on the “2.0” and “3.0” models, Morris said.

    “They have a responsibility to also pour back into the world,” she added. “They are not off the hook. As a matter of fact, I’m calling them to the carpet.”

    Jenay Robert, senior researcher at Educause, noted that the organization’s 2025 AI Landscape Study: Into the Digital AI Divide found that more institutions are licensing AI tools than creating their own, across a variety of capabilities. She said digital equity is “certainly one of the biggest concerns when it comes to students’ access to generative AI tools.” Some 83 percent of respondents in that study said they were concerned about widening the digital divide as an AI-related risk. Yet most respondents were also optimistic about AI improving access to and accessibility of educational materials.

Of course, Robert added, “AI tools won’t contribute to any of these improvements if students can’t access the tools.” Respondents to the Educause landscape study from larger institutions were more likely than those from smaller ones to report that their AI-related strategic planning includes increasing access to AI tools.

    Inside Higher Ed’s survey also reveals a link between institution size and access, with student access to generative AI tools through an institutionwide license, especially, increasing with student population. But just 11 percent of CTOs reported that their institution has a comprehensive AI strategy.

    Still, Robert cautioned that “access is only part of the equation here. If we want to avoid widening the digital equity divide, we also have to help students learn how to use the tools they have access to.”

    In a telling data point from Educause’s 2025 Students and Technology Report, more than half of students reported that most or all of their instructors prohibit the use of generative AI.

Arizona State University, like Michigan, collaborated early on with OpenAI, but it has multiple vendor partners and grants students access to generative AI tools through an institutionwide license, through certain programs, and through custom-built tools. ASU closely tracks generative AI consumption in a way that allows it to meet varied needs across the university in a cost-effective manner, as “the cost of one [generative AI] model versus another can vary dramatically,” said Kyle Bowen, deputy CIO.

    “A large percentage of students make use of a moderate level of capability, but some students and faculty make use of more advanced capability,” he said. “So everybody having everything may not make sense. It may not be very cost-sustainable. Part of what we have to look at is what we would describe as consumption-based modeling—meaning we are putting in place the things that people need and will consume, not trying to speculate what the future will look like.”

    That’s what even institutions with established student access are “wrestling with,” Bowen continued. “How do we provide that universal level of AI capability today while recognizing that that will evolve and change, and we have to be ready to have technology for the future, as well, right?”

    Source link

  • 5 AI tools for classroom creativity

    Key points:

    • AI tools enhance K-12 creativity and innovation through interactive projects
    • A new era for teachers as AI disrupts instruction
    • Report details uneven AI use among teachers, principals
    • For more news on AI and creativity, visit eSN’s Digital Learning hub

    As AI becomes more commonplace in classrooms, it gives students access to creative tools that enhance learning, exploration, and innovation. K-12 students can use AI tools in various ways to boost creativity through art, storytelling, music, coding, and more.

    Source link

  • AI Detection Tools Are Powerful When Instructors Know How to Use Them

    To the editor:

I’m sympathetic to the overall thrust of Steven Mintz’s argument in Inside Higher Ed, “Writing in the Age of AI Suspicion” (April 2, 2025). AI-detection programs are unreliable. To the degree that instructors rely on AI detection, they contribute to the erosion of trust between instructors and students—not a good thing. And because AI “detection” works by assessing things like the smoothness or “fluency” of writing, it implicitly inverts our values: We are tempted to have higher regard for less structured or coherent writing, since it strikes us as more authentic.

Mintz’s article is potentially misleading, however. He repeatedly testifies that in testing the detection software, his and other non-AI-produced writing yielded certain scores as “percent AI generated.” For instance, he writes, “27.5 percent of a January 2019 piece … was deemed likely to contain AI-generated text.” Although the software Mintz used for this exercise (ZeroGPT) does claim to identify “how much” of the writing it flags as AI-generated, many other AI detectors (e.g., chatgptzero) instead indicate the probability that the writing as a whole was written by AI. Both types of data are imperfect and problematic, but they communicate different things.
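
To make the distinction concrete, here is a minimal, hypothetical Python sketch; the function names and numbers are invented for illustration and do not reflect the actual output format of ZeroGPT or any other real detector:

# Hypothetical sketch: the same figure, 27.5, means very different things
# under the two reporting conventions described above.

def span_style_report(flagged_words: int, total_words: int) -> str:
    # Span-level convention: what share of the text is flagged as AI-generated.
    return f"{flagged_words / total_words:.1%} of this text was flagged as AI-generated"

def document_style_report(p_ai: float) -> str:
    # Whole-document convention: how likely it is that the entire text is AI-written.
    return f"{p_ai:.1%} probability that this text, as a whole, was written by AI"

print(span_style_report(275, 1000))   # "27.5% of this text was flagged as AI-generated"
print(document_style_report(0.275))   # "27.5% probability that this text, as a whole, was written by AI"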

    Again, Mintz’s argument is useful. But if conscientious instructors are going to take a stand against technologies on empirical or principled grounds, they will do well to demonstrate appreciation for the nuances of the various tools. 

    Christopher Richmann is the associate director of the Academy for Teaching and Learning and affiliate faculty in the Department of Religion at Baylor University.

    Source link

  • The Tools Helping University Students Succeed After Graduation (Post College Journey)

Seattle, Wash.– As thousands of university students graduate each year, many find themselves facing an unexpected challenge: career uncertainty. Despite earning degrees, a large portion of graduates report feeling unprepared to enter the workforce. Post-college career expert Laurie Nilo-Klug is tackling this issue head-on, providing students with the tools they need to build confidence and thrive in their careers.

Ms. Nilo-Klug, an Adjunct Professor at Seattle University and the founder of Post College Journey, has dedicated her work to helping students transition from college to the professional world. Through her programs, Laurie has empowered students to take control of their career paths, addressing common issues such as imposter syndrome, skill uncertainty, and job market navigation.

After implementing her career confidence-building tools in the classroom, Laurie observed a remarkable 60% increase in student confidence levels. “Many students leave college with impressive degrees but lack the self-assurance to effectively launch their careers. My goal is to bridge that gap with actionable strategies that instill confidence and competence,” says Laurie.

Laurie explains, “In a recent assignment, I had students choose two career exploration activities, and their selections revealed a strong drive to connect classroom learning with their post-college goals. Their enthusiasm for hands-on experiences, such as job applications and simulations, highlighted the critical need for practical, real-world learning opportunities. After gathering student feedback and analyzing the data, I found a 60% increase in their career confidence levels. This reinforced my belief that early and direct exposure to career exploration is essential for student success.”

In this activity, students were tasked with selecting two career exploration activities from the following options:

● Attending a career development event;
● Having an appointment with the career center;
● Joining a student club;
● Doing a career self-assessment;
● Applying to a job;
● Or completing a job simulation and then reflecting on what they have learned.

This assignment aimed to show that career development offers many paths, so it’s crucial to understand why you choose an activity, what you hope to gain from it, and what you learn along the way. Laurie expected students to pick low-effort options like self-assessments or joining a club, given their frequent concerns about time constraints. Instead, nearly all chose job simulations or applied for a job, showing a strong preference for hands-on experience.

    For media inquiries or to schedule an interview with Laurie Nilo-Klug, please contact:
    Marisa Spano
    [email protected]

    Source link

  • AI-Powered Teaching: Practical Tools for Community College Faculty – Faculty Focus

    Source link

  • Report details uneven AI use among teachers, principals

    English/language arts and science teachers were almost twice as likely to say they use AI tools compared to math teachers or elementary teachers of all subjects, according to a February 2025 survey from the RAND Corporation that delves into uneven AI adoption in schools.

“As AI tools and products for educational purposes become more prevalent, studies should track their use among educators. Researchers could identify the particular needs AI is addressing in schools and–potentially–guide the development of AI products that better meet those needs. In addition, data on educator use of AI could help policymakers and practitioners consider disparities in that use and implications for equitable, high-quality instruction across the United States,” note authors Julia H. Kaufman, Ashley Woo, Joshua Eagan, Sabrina Lee, and Emma B. Kassan.

    One-quarter of ELA, math, and science teachers used AI tools for instructional planning or teaching in the 2023–2024 school year. Nearly 60 percent of surveyed principals also reported using AI tools for their work in 2023-2024.

    Among the one-quarter of teachers nationally who reported using AI tools, 64 percent said that they used them for instructional planning only, whether for their ELA, math, or science instruction; only 11 percent said that they introduced them to students but did not do instructional planning with them; and 25 percent said that they did both.

    Although one-quarter of teachers overall reported using AI tools, the report’s authors observed differences in AI use by subject taught and some school characteristics. For instance, close to 40 percent of ELA or science teachers said they use AI, compared to 20 percent of general elementary education or math teachers. Teachers and principals in higher-poverty schools were less likely to report using AI tools relative to those in lower-poverty schools.

    Eighteen percent of principals reported that their schools or districts provided guidance on the use of AI by staff, teachers, or students. Yet, principals in the highest-poverty schools were about half as likely as principals in the lowest-poverty schools to report that guidance was provided (13 percent and 25 percent, respectively).

    Principals cited a lack of professional development for using AI tools or products (72 percent), concerns about data privacy (70 percent) and uncertainty about how AI can be used for their jobs (70 percent) as factors having a major or minor influence on their AI use.

    The report also offers recommendations for education stakeholders:

    1. All districts and schools should craft intentional strategies to support teachers’ AI use in ways that will most improve the quality of instruction and student learning.

    2. AI developers and decision-makers should consider what useful AI applications have the greatest potential to improve teaching and learning and how to make those applications available in high-poverty contexts.

    3. Researchers should work hand-in-hand with AI developers to study use cases and develop a body of evidence on effective AI applications for school leadership, teaching, and learning.

    Laura Ascione

    Source link

  • Publishers Adopt AI Tools to Bolster Research Integrity

The perennial pressure to publish or perish is as intense as ever for faculty trying to advance their careers in an exceedingly tight academic job market. On top of their teaching loads, faculty are expected to publish—and peer review—research findings, often receiving little to no compensation beyond the prestige and recognition of publishing in top journals.

    Some researchers have argued that such an environment incentivizes scholars to submit questionable work to journals—many have well-documented peer-review backlogs and inadequate resources to detect faulty information and academic misconduct. In 2024, more than 4,600 academic papers were retracted or otherwise flagged for review, according to the Retraction Watch database; during a six-week span last fall, one scientific journal published by Springer Nature retracted more than 200 articles.

    But the $19 billion academic publishing industry is increasingly turning to artificial intelligence to speed up production and, advocates say, enhance research quality. Since the start of the year, Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI–powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.

    “These AI tools can help us improve research integrity, quality, accurate citation, our ability to find new insights and connect the dots between new ideas, and ultimately push the human enterprise forward,” Josh Jarrett, senior vice president of AI growth at Wiley, told Inside Higher Ed earlier this month. “AI tools can also be used to generate content and potentially increase research integrity risk. That’s why we’ve invested so much in using these tools to stay ahead of that curve, looking for patterns and identifying things a single reviewer may not catch.”

However, most scholars aren’t yet using AI for such a purpose. A recent survey by Wiley found that while the majority of researchers believe AI skills will be critical within two years, more than 60 percent said a lack of guidelines and training keeps them from using it in their work.

    In response, Wiley released new guidelines last week on “responsible and effective” uses of AI, aimed at deploying the technology to make the publishing process more efficient “while preserving the author’s authentic voice and expertise, maintaining reliable, trusted, and accurate content, safeguarding intellectual property and privacy, and meeting ethics and integrity best practices,” according to a news release.

    Last week, Elsevier also launched ScienceDirect AI, which extracts key findings from millions of peer-reviewed articles and books on ScienceDirect and generates “precise summaries” to alleviate researchers’ challenges of “information overload, a shortage of time and the need for more effective ways to enhance existing knowledge,” according to a news release.

    Both of those announcements followed Springer Nature’s January launch of an in-house AI-powered program designed to help editors and peer reviewers by automating editorial quality checks and alerting editors to potentially unsuitable manuscripts.

    “As the volume of research increases, we are excited to see how we can best use AI to support our authors, editors and peer reviewers, simplifying their ways of working whilst upholding quality,” Harsh Jegadeesan, Springer’s chief publishing officer, said in a news release. “By carefully introducing new ways of checking papers to enhance research integrity and support editorial decision-making we can help speed up everyday tasks for researchers, freeing them up to concentrate on what matters to them—conducting research.”

    ‘Obvious Financial Benefit’

    Academic publishing experts believe there are both advantages—and down sides—of involving AI in the notoriously slow peer-review process, which is plagued by a deficit of qualified reviewers willing and able to offer their unpaid labor to highly profitable publishers.

    If use of AI assistants becomes the norm for peer reviewers, “the volume problem would be immediately gone from the industry” while creating an “obvious financial benefit” for the publishing industry, said Sven Fund, managing director of the peer-review-expert network Reviewer Credits.

But the implications AI has for research quality are more nuanced, especially as scientific research has become a target for conservative politicians and AI models could be—and may already be—used to target terms or research lawmakers don’t like.

    “There are parts of peer review where a machine is definitely better than a human brain,” Fund said, pointing to low-intensity tasks such as translations, checking references and offering authors more thorough feedback as examples. “My concern would be that researchers writing and researching on whatever they want is getting limited by people reviewing material with the help of technical agents … That can become an element of censorship.”

    Aashi Chaturvedi, program officer for ethics and integrity at the American Society for Microbiology, said one of her biggest concerns about the introduction of AI into peer review and other aspects of the publishing process is maintaining human oversight.

    “Just as a machine might produce a perfectly uniform pie that lacks the soul of a handmade creation, AI reviews can appear wholesome but fail to capture the depth and novelty of the research,” she wrote in a recent article for ASM, which has developed its own generative AI guidelines for the numerous scientific journals it publishes. “In the end, while automation can enhance efficiency, it cannot replicate the artistry and intuition that come from years of dedicated practice.”

    But that doesn’t mean AI has no place in peer review, said Chaturvedi, who said in a recent interview that she “felt extra pressure to make sure that everything the author was reporting sounds doable” during her 17 years working as an academic peer reviewer in the pre-AI era. As the pace and complexity of scientific discovery keeps accelerating, she said AI can help alleviate some burden on both reviewers and the publishers “handling a large volume of submissions.”

    Chaturvedi cautioned, however, that introducing such technology across the academic publishing process should be transparent and come only after “rigorous” testing.

    “The large language models are only as good as the information you give them,” she said. “We are at a pivotal moment where AI can greatly enhance workflows, but you need careful and strategic planning … That’s the only way to get more successful and sustainable outcomes.”

    Not Equipped to Ensure Quality?

    Ivan Oransky, a medical researcher and co-founder of Retraction Watch, said, “Anything that can be done to filter out the junk that’s currently polluting the scientific literature is a good thing,” and “whether AI can do that effectively is a reasonable question.”

    But beyond that, the publishing industry’s embrace of AI in the name of improving research quality and clearing up peer-review backlogs belies a bigger problem predating the rise of powerful generative AI models.

    “The fact that publishers are now trumpeting the fact that they both are and need to be—according to them—using AI to fight paper mills and other bad actors is a bit of an admission they hadn’t been willing to make until recently: Their systems are not actually equipped to ensure quality,” Oransky said.

    “This is just more evidence that people are trying to shove far too much through the peer-review system,” he added. “That wouldn’t be a problem except for the fact that everybody’s either directly—or implicitly—encouraging terrible publish-or-perish incentives.”

    Source link

  • AI tools deepening divides in graduate outcomes (opinion)

    Since OpenAI first released ChatGPT in November 2022, early adopters have been informing the public that artificial intelligence will shake up the world of work, with everything from recruitment to retirement left unrecognizable. Ever more cautious than the private sector, higher ed has been slow to respond to AI technologies. Such caution has opened a divide within the academy, with the debate often positioned as AI optimism versus pessimism—a narrow aperture that leaves little room for realistic discussion about how AI is shaping student experience.

    In relation to graduate outcomes (simply put, where students end up after completing their degrees, with a general focus on careers and employability), universities are about to grapple with the initial wave of graduates seriously impacted by AI. The Class of 2025 will be the first to have widespread access to large language models (LLMs) for the majority of their student lives. If, as we have been repeatedly told, we believe that AI will be the “great leveler” for students by transforming their access to learning, then it follows that graduate outcomes will be significantly impacted. Most importantly, we should expect to see more students entering careers that meaningfully engage with their studies.

    The reality on the ground presents a stark difference. Many professionals working in career advice and guidance are struggling with the opposite effect: Rather than acting as the great leveler, AI tools are only deepening existing divides.

    1. Trust Issues: Student Overreliance on AI Tools

    Much has been said about educators’ ability to trust student work in a post-LLM landscape. Yet, when it comes to student outcomes, a more pressing concern is students’ trust in AI tools. As international studies show, a broad range of sectors is already placing too much faith in AI, failing to put proper checks and balances in place. If businesses beholden to regulatory bodies and investors are left vulnerable, then time-poor students seeking out quick-fix solutions are faring worse.

    This is reflected in what we are seeing on the ground. We were both schoolteachers when ChatGPT launched and both now work in student employability. As is common, the issues we first witnessed in the school system are now being borne out in higher ed: Students often implicitly trust that AI will perform tasks better than they are able to. This means graduates are using AI to write CVs, cover letters and other digital documentation without first understanding why such documentation is needed. Although we are seeing a generally higher (albeit more generic) caliber of writing, when students are pressed to expand upon their answers, they struggle to do so. Overreliance on AI tools is deskilling students by preventing them from understanding the purpose of their writing, thereby creating a split between what a candidate looks like on paper and how they present in real life. Students can only mask a lack of skills for so long.

2. The Post-Pandemic Social Skills Deficit

The generation of students now arriving at university were in their early teens when the pandemic hit. This long-term disruption to schooling had a profound impact on social and emotional skills, and, crucially, learning loss also impacted students from disadvantaged backgrounds at a much higher rate. With these students now moving into college, many are turning to AI to try to ameliorate feelings of being underprepared.

    Such a skills gap is tangible when working with students. Those who already present high levels of critical thinking and independence can use AI tools in an agile manner, writing more effective prompts before tailoring and enhancing answers. Conversely, those who struggle with literacy are often unable to properly evaluate how appropriate the answers provided by AI are.

What we are seeing is high-performing students using AI to generate more effective results, outpacing their peers and further entrenching the divide. Without intervention, the schoolchildren who couldn’t answer comprehension questions such as “What does this word mean?” about their own AI-generated homework are set to become the graduates left marooned at interview, where they can no longer hide behind writing. The pandemic has already drawn economic battle lines for students in terms of learning loss, attainment and the very awarding of student grades—if we are not vigilant, inequitable AI use is set to become a further barrier to entry for those from disadvantaged backgrounds.

3. Business Pivots, Higher Ed Deliberates

Current graduates are entering a tough job market. Reports have shown both that graduate-level job postings are down and that employers are fatigued by high volumes of AI-written job applications. At the same time, employers are increasingly turning to AI to transform hiring processes. Students are keenly attuned to this, with many reporting low morale because their “dream role” is now one that AI will fulfill or one they expect to be replaced by AI in the near future.

    Across many institutions, higher education career advice and guidance is poorly equipped to deal with such changes, still often rooted in an outdated model that is focused on traditional job markets and the presumption that students will follow a “one degree, one career” trajectory, when the reality is most students do not follow linear career progression. Without swift and effective changes that respond to how AI is disrupting students’ career journeys, we are unable to make targeted interventions that reflect the job market and therefore make a meaningful impact.

    Nonetheless, such changes are where higher education career advice and guidance services can make the greatest impact. If we hope to continue leveling the playing field for students who face barriers to entry, we must tackle AI head-on by teaching students to use tools responsibly and critically, not in a general sense, but specifically to improve their career readiness.

    Equally, career plans could be forward-thinking and linked to the careers created by AI, using market data to focus on which industries will grow. By evaluating student need on our campuses and responding to the movements of the current job market, we can create tailored training that allows students to successfully transition from higher education into a graduate-level career.

    If we fail to achieve this and blindly accept platitudes around AI improving equity, we risk deepening structural imbalances among students that uphold long-standing issues in graduate outcomes.

    Sean Richardson is a former educator and now the employability resources manager at London South Bank University.

    Paul Redford is a former teacher, now working to equip young people with employability skills in television and media.

    Source link