Tag: Weigh

  • ‘A fair deal’ or a ‘surrender’? Stakeholders weigh in on Trump-UVA agreement

    In the hours and days following the University of Virginia’s deal with the U.S. Department of Justice, the state’s governor cheered the agreement, while some faculty members and Democratic lawmakers accused the public flagship of submitting to the Trump administration and enabling it to exert further pressure on other colleges.

    Under the four-page agreement, the DOJ will pause five investigations in exchange for UVA’s adoption of the agency’s July guidance against diversity, equity and inclusion efforts. The public research institution, which had made DEI work a tentpole of its institutional mission in recent years, will also provide the DOJ with quarterly reports demonstrating its compliance.

    The deal — the first the Trump administration has struck with a public college — could serve as a template moving forward, as the federal government takes other steps to exert control over the higher education sector.

    ‘A sad day for UVA’ 

    UVA Interim President Paul Mahoney, who signed the bargain with the Trump administration, said that it came about “after months of discussions with DOJ” and input from the university’s leadership, governing board and internal and external legal counsel.

    The deal, he said in his late Wednesday announcement, is “the best available path forward” for UVA.

    The university will review its practices and policies to make sure they comply with federal law, Mahoney said, adding that “some work remains to be done to satisfy fully the terms of this agreement.”

    “We will also redouble our commitment to the principles of academic freedom, ideological diversity, free expression, and the unyielding pursuit of ‘truth, wherever it may lead,’” he said, quoting UVA founder Thomas Jefferson.

    If UVA “completes its planned reforms prohibiting DEI” through Dec. 31, 2028, the DOJ will formally close its investigations, the agency said in a Wednesday press release.

    Much of Mahoney’s announcement focused on what the deal does not include: it doesn’t require the university to pay the federal government or submit to external monitoring, he noted. The deal also does not require UVA to admit wrongdoing, according to a university FAQ.

    But some faculty quickly voiced concerns.

    Kimberly Acquaviva, a nursing professor at UVA, shamed Mahoney and the university’s governing board “for trading UVA’s independence for federal favor.”

    “It’s a sad day for UVA,” she said on social media.

    Another UVA professor, Walter Heinecke, called the deal “a wolf in sheep’s clothing” that will “increase the likelihood that there’s a climate of fear.”

    “It saddles the next president with expectations of monitoring that are highly problematic,” Heinecke told WVIR. “Which will in turn affect the way that faculty, students, staff think about what they can and cannot do.”

    UVA did not respond to questions Friday.

    Lawmakers weigh in

    Reactions from prominent lawmakers in Virginia — a contentious purple state with an election next month that could alter party control — have fallen along party lines.

    Virginia Senate Majority Leader Scott Surovell, a Democrat, called the deal a “surrender” on UVA’s part that has “significant constitutional problems.”

    The agreement “represents a huge expansion of federal power that Republicans would never have tolerated in the past,” he said Wednesday. “We have the right to run our universities.”

    Republican Gov. Glenn Youngkin — who faces a fight with a Democrat-controlled Senate committee over his picks for UVA’s governing board — praised the agreement as “common sense and a fair deal” and said it embraces academic freedom and protects free speech.

    All UVA must do, he said in a social media post, is “fully comply with federal civil rights law.”

    Under the deal, the university will also operate under the DOJ’s wide-ranging DEI guidance. In addition to condemning race-focused scholarships and resources dedicated to specific racial or ethnic groups, the nine-page document warned colleges against using “facially neutral” criteria the agency deems to be proxies for federally protected characteristics, such as cultural competence. 

    Colleges or other institutions that violate the guidance, DOJ said, could lose federal funding.

  • Students Weigh In on AI-Assisted Job Searches

    Employers say they want students to have experience using artificial intelligence tools, but most students in the Class of 2025 are not using such tools for the job hunt, according to a new survey.

    The study, conducted by the National Association of Colleges and Employers, included data from 1,400 recent graduates.

    Students who do use AI tools for their job search most commonly apply them to writing cover letters (65 percent), preparing for interviews (64 percent) and tailoring their résumés to specific positions (62 percent). In an Oct. 14 webinar hosted by NACE, students explained the benefits of using AI when searching for career opportunities.

    Among student job seekers who don’t employ AI, nearly 30 percent of respondents said they had ethical concerns about using the tools, and 25 percent said they lacked the expertise to apply them to their job search. An additional 16 percent worried about an employer’s reaction to AI-assisted applications, and 15 percent expressed concern about personal data collection.

    “If you listen to the media hype, it’s that everybody’s using AI and all of these students who are graduating are flooding the market with applications because of AI, et cetera,” NACE CEO Shawn Van Derziel said during the webinar. “What we’re finding in our data is that’s just not the case.”

    About one in five employers use AI in recruiting efforts, according to a separate NACE study.

    Students say: Brandon Poplar, a senior at Delaware State University studying finance and prelaw, said during the webinar that he uses AI for internship searches.

    “It has been pretty successful for me; I’ve been able to use it to tailor my résumé, which I think is almost the cliché thing to do now,” Poplar said. “Even to respond to emails from employers, it’s allowed me to go through as many applications as I can and find things that fit my niche.”

    Through his AI-assisted searches, Poplar learned he’s interested in management consulting roles and then determined how to best align his cover letters to communicate that to an employer.

    Morgan Miles, a senior at Spelman College majoring in economics, said she used a large language model to create a résumé that fits an insurance role, despite not having experience in the insurance industry. “I ended up actually getting a full-time offer,” Miles said.

    She prefers to use an AI-powered chatbot rather than engage with career center staff because it’s convenient and provides her with a visual checklist of her next steps, she said, whether that’s prepping for interview questions or figuring out what skills she needs to add to application materials.

    Panelists at the webinar didn’t see using ChatGPT as “cheating” the system; rather, they said it requires human creativity and input. “It can be a tool to align with your values and what you’re marketing to the employers and still being yourself,” said Dandrea Oestricher, a recent graduate of the City College of New York.

    Maria Wroblewska, a junior at the University of California, Irvine, where she works as a career center intern, said she was shocked by how few students said they use AI. “I use it pretty much every time I search for a job,” to investigate the company, past internship offerings and application deadlines, she said.

    Other student trends: NACE leaders also shared results from the organization’s 2025 Student Survey, which included responses from 13,000 students across the U.S.

    The job market continues to present challenges for students, with the average senior submitting 30 job applications before landing a role, according to the survey. In recent years that number has skyrocketed, said Josh Kahn, associate director of research and public policy at NACE. “It was about 16 or 17, if I remember correctly, two years ago. That is quite large growth in just two years,” he said during the webinar.

    Students who met with an employer representative or attended a job fair were more likely to apply for additional jobs, but they were also more likely to report that the role they were hired into is related to their major.

    Students who used an AI search engine (approximately 15 percent of all respondents) were more likely to apply for jobs—averaging about 60 applications—and less likely to say the job they landed matched their major. “That was a little surprising,” Kahn said. “It does line up anecdotally with what we’re hearing about AI’s impact on the number of applications that employers are receiving.”

    Two in five students said they’d heard the term “skills-based hiring” and understood what it meant, while one-third had never heard the term and one-quarter weren’t sure.

    Student panelists at the webinar said they experienced skills-based hiring practices during their internship applications, when employers would instruct them to complete a work exercise to demonstrate technical skills.

    NACE’s survey respondents completed 1.26 internships on average and received 0.78 job offers. A majority of internships took place in person (79 percent) or in a hybrid format (16 percent). Almost two-thirds of interns were paid (62 percent), which is the highest rate NACE has seen in the past seven years, Kahn said. Seven in 10 students said they did not receive a job offer from their internship employer.

    Do you have a career-focused intervention that might help others promote student success? Tell us about it.

  • How Prospective Families Weigh Online and Hybrid College Options

    Nearly half of all students worldwide have engaged in online learning.

    Online and hybrid education have shifted from emergency responses during the COVID-19 pandemic to permanent, influential forces reshaping education from kindergarten through high school to higher education. Once seen as supplemental, these models now play a central role in how students, families, and institutions approach learning, access, and opportunity.

    Full online enrollment remains rare in grades K-12, with just 0.6% of U.S. public school students fully online. However, hybrid learning is widespread, with 63% of students using online tools daily (National Center for Education Statistics, 2023). Globally, nearly half of all students have engaged in online learning, fueling a K–12 online education market valued at more than 171 billion U.S. dollars (Devlin Peck, n.d.; Yellow Bus ABA, n.d.).

    In higher education, the shift is even more pronounced. By 2023, over half of U.S. college students had taken at least one online course, and over one-quarter were enrolled exclusively online (National Center for Education Statistics, 2023; BestColleges, 2023). Adult learners and graduate students have been especially drawn to online programs, attracted by the flexibility and accessibility they offer (Arizton Advisory & Intelligence, 2023).

    But the numbers alone do not tell the whole story. To understand the future of online and hybrid learning, we need to listen to families, not as bystanders, but as essential decision-makers, advocates, and partners in shaping students’ educational journeys.

    What families and students think, and why it matters

    Across education levels, families appreciate the flexibility of online and hybrid models but consistently voice concerns about academic rigor, social connection, and equitable access.

    In K–12, parents generally prefer in-person schooling but want schools to improve the quality of online options (Barnum, 2020; Dong, Cao, & Li, 2020; Garbe, Ogurlu, Logan, & Cook, 2020). Adult and international students in higher education often rely on online programs to balance work and family demands. However, they face barriers such as isolation, inconsistent internet access, and limited interaction with peers and faculty (Kibelloh & Bao, 2014).

    Research underscores that strong course design is essential for satisfaction and success (Babb, Stewart, & Johnson, 2010; Detyna & Koch, 2023) and that social connection is not a luxury but a critical factor in persistence and well-being (Tayebinik & Puteh, 2012). Equity gaps also loom large: students without access to reliable devices, broadband, or support networks face steeper challenges (Eduljee, Murphy, Emigh-Guy, & Croteau, 2023; Neece, McIntyre, & Fenning, 2020).

    Families’ pandemic experiences reinforce these themes. Many described overwhelming stress and inequities that left them skeptical of online learning without stronger support and communication (Dong et al., 2020; Garbe et al., 2020; Neece et al., 2020).

    Key findings: What families want, and what budget cuts threaten

    The RNL, Ardeo, and CampusESP (2025) Prospective Family Engagement Report surveyed 9,467 families of prospective college students, offering rare insight into how families view online and hybrid education not just in theory, but as a meaningful factor in enrollment decisions.

    1. Families are cautious about fully online. Only 11% said they would consider a fully online experience for their student. In contrast, about 60% were open to hybrid models, which they saw as the “best of both worlds,” combining affordability, flexibility, and connection.

    2. First-generation families are more open. Nearly one in five said they would consider fully online, and 60% were open to hybrid options. These pathways can be lifelines, but cuts to advising, technology, or aid risk undermining that promise.

    3. Income divides are stark. Families earning under $60,000 were twice as likely to express interest in fully online compared to higher-income families. Yet as state funding declines, public colleges may raise tuition or online fees, making even “affordable” pathways harder to access.

    4. Race and ethnicity matter. Black and Hispanic families showed greater openness to online and hybrid formats than Asian or White families. That opportunity will only expand if institutions sustain culturally responsive communication, peer representation, and targeted support.

    5. Generational and gender differences are shifting demand. Younger parents and female caregivers are more comfortable with online and hybrid learning. Demand will keep growing, but families may see online options as second-class without continued investments in quality and communication.

    6. Region matters, too. Families in the Great Lakes and Far West regions were more receptive to online learning, while New England families leaned more traditional. These cultural and infrastructural differences should shape institutional strategies.

    These findings show that online and hybrid education hold real promise, especially for families seeking flexibility, affordability, and access. But that promise rests on a fragile foundation. Budget cuts threaten the very investments that make these models credible: faculty development, instructional design, technology, and support services. Without them, families’ trust could erode.

    What this means for colleges: Practical implications

    The research points to clear takeaways for colleges and universities:

    • Flexibility matters, but only if paired with quality. Families want flexible options backed by evidence of rigor, outcomes, and strong faculty engagement.
    • Hybrid is a strength, not a compromise. Market it as a high-quality “best of both worlds,” not a fallback option.
    • Equity-focused support is critical. Expand device loan programs, connectivity grants, and first-generation mentoring to close gaps.
    • Culturally tailored communication builds trust. Engage families with inclusive outreach and visible peer representation.
    • Generational shifts mean rising demand. Younger parents are more open to online and hybrid; invest now to meet tomorrow’s expectations.
    • Regional strategy matters. Align program design and marketing with local cultures, broadband realities, and institutional density.

    Ultimately, this is about listening. For some families, online pathways may be the only way higher education is possible. For others, a hybrid model that blends connection with convenience is the right fit. Institutions that understand these diverse perspectives and invest in the structures that support them will be best positioned to earn families’ trust and help students thrive.

    For more insights, read the 2025 Prospective Family Engagement Report from RNL, CampusESP, and Ardeo.

  • University Leaders Weigh Changes to Research Funding Model

    After the National Institutes of Health tried earlier this year to cut funding for universities’ indirect research costs and set off alarm bells across higher education, 10 higher education associations decided to come up with their own model for research funding rather than have the government take the lead.

    Now, after just over six weeks of work, that group, known as the Joint Associations Group, is homing in on a plan to rework how the government funds research, and it wants feedback from the university research community before presenting a proposal to Congress and the Trump administration at the end of the month.

    “Unfortunately, something is going to change,” said Barbara Snyder, president of the Association of American Universities. “Either we will be part of it or it will be imposed upon us … Significant division in the research community is going to kill us.”

    Snyder and other JAG members said at a virtual town hall Tuesday that the current system for direct and indirect research funding costs has served the community well, but it isn’t transparent and leads to confusion about how the rates are calculated, among other challenges. AAU and other higher ed groups sued the NIH in February after the agency proposed capping indirect expenses for all institutions at 15 percent of the direct research costs—down from the average of 28 percent. (Historically, colleges negotiate their own reimbursement rates directly with the federal government.)

    The White House said the cap would make more money available for “legitimate scientific research,” but universities warned that the change would halt lifesaving research and lead to job losses, among other consequences. The NIH rate cap would mean a cut of $4 billion for university-based research.

    Court challenges have since halted the NIH plan, as well as similar caps proposed by two other federal agencies; meanwhile, the Department of Defense is working on its own plan related to indirect costs. Snyder said the lawsuits are about fiscal year 2025, while the JAG effort looks ahead to fiscal year 2026 and beyond.

    Over the years, Congress and federal agencies have sought to rethink the funding model but didn’t reach an agreement. In fact, after the first Trump administration proposed a 15 percent cap on indirect costs in 2017, Congress specifically prohibited such a move. But now that prohibition doesn’t seem likely to stick as lawmakers consider bills to fund the government for fiscal year 2026, so a new model is necessary. Adding to the pressure on universities, Trump has proposed significant cuts to research funding in his budget.

    JAG’s panel of experts presented two options to the university research community at a webinar last week and then answered questions at the town hall Tuesday. Colleges and universities have until June 22 to test the proposed models and provide feedback before JAG sends its final proposal to the government June 27, though any model will likely need additional work.

    “No one would choose to work at this rapid pace and rethink how to effectively, fairly and transparently cover these real and unavoidable costs,” said Matt Owens, president of the Council of Government Relations, at last week’s webinar. “But we are where we are, and it’s vital that we meet this moment so that we can emerge with an improved and sustainable indirect cost policy that will enable our country to continue leading the world in research and innovation.”

    Proposed Models

    Both versions of what JAG is calling the Fiscal Accountability in Research model, or FAIR, are geared toward offering more accountability and transparency about how federal research dollars are spent. JAG hopes that in the end, the new model will be simpler than the current one. The group also wants to replace terms like “indirect cost rate” and “overhead” with either “essential research support” or “general research operations,” in an effort to underscore that the money goes toward the real costs of research.

    “This will require a bit of a culture change in institutions, but we think the benefit of that far outweighs the downsides,” said Kelvin Droegemeier, a professor and special adviser to the chancellor for science and policy at the University of Illinois at Urbana-Champaign, who led the JAG effort, at the webinar.

    One model, which the group calls FAIR No. 1, would include costs related to managing the grant, general research operations and facilities as a fixed percentage of the total budget. The percentage would be based in part on the type of institution and research. This approach is designed to be simple and reasonable, according to the group’s presentation, but it’s more general, which makes it “difficult to account for the wide array of research frameworks that now exist.”

    The other model, FAIR No. 2, would more accurately reflect the actual costs of a project and make the structure for federal grants more like those from private foundations. Under this model, essential research support would be lumped into the project costs while funding for general research operations, such as payroll and procurement, would be a fixed percentage of the total budget. That change would likely increase the direct costs of the project.

    Droegemeier and other members of JAG’s expert panel noted that FAIR No. 2 would be a “significant departure” from the current approach, and universities would likely need more time to overhaul their processes for tracking costs. Still, the group said this model would better show what the money goes toward, addressing a key concern from Congress.

    Droegemeier described the two models as “bookends” and said the group would probably end up somewhere between the two.

    ‘In a Good Spot’

    At Tuesday’s town hall, attendees questioned whether Congress or the Trump administration would even consider JAG’s proposal and why any change was necessary.

    Droegemeier said he’s met with members of Congress who have endorsed their process, and he’s kept in touch with Trump administration officials about the group’s work. So far, he’s seen a positive response to the models, adding that officials at the Office of Management and Budget indicated that they weren’t “oceans apart.”

    “We’ve done everything possible to build goodwill and trust,” he said. “There’s a long road ahead of us, but I think we’re in a good spot.”

    Other speakers echoed that point, noting that Sen. Susan Collins, a Republican from Maine and chair of the powerful Appropriations Committee, publicly supported the models at a recent hearing. And NIH director Jay Bhattacharya called the proposals “quite promising” at the same hearing, STAT News reported.

    Additionally, the House’s appropriations bill for the Department of Defense calls on the agency to “work closely with the extramural research community to develop an optimized Facilities and Administrative cost reimbursement solution for all parties that ensures the nation remains a world leader in innovation.”

    Across the board, speakers at the town hall said they must act to have a say in discussions about the future of research funding.

    “The two models are a significant change,” said Deborah Altenburg, vice president for research policy and advocacy at the Association of Public and Land-grant Universities. “But all of our organizations are responding to a new political situation.”

  • Students learn the basics of AI as they weigh its use in their future careers

    This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.

    On a recent Thursday morning, Michael Taubman asked his class of seniors at North Star Academy’s Washington Park High School: “What do you think AI’s role should be in your future career?”

    “In school, like how we use AI as a tool and we don’t use it to cheat on our work … that’s how it should be, like an assistant,” said Amirah Falana, a 17-year-old interested in a career in real estate law.

    Fernando Infante, an aspiring software developer, agreed that AI should be a tool to “provide suggestions” and inform the work.

    “It’s like having AI as a partner rather than it doing the work,” said Infante during class.

    Falana and Infante are students in Taubman’s class called The Summit, a yearlong program offered to 93 seniors this year and expanding to juniors next year that also includes a 10-week AI course developed by Taubman and Stanford University.

    As part of the course, students use artificial intelligence tools – often viewed in a negative light due to privacy and other technical concerns – to explore their career interests and better understand how technology could shape the workforce. The class is also timely, as 92% of companies plan to invest in more AI over the next three years, according to a report by global consulting firm McKinsey and Company.

    The lessons provide students with hands-on exercises to better understand how AI works and how they can use it in their daily lives. They are also designed so teachers across subject areas can include them as part of their courses and help high school students earn a Google Career Certificate for AI Essentials, which introduces AI and teaches the basics of using AI tools.

    Students like Infante have used the AI and coding skills they learned in class to create their own apps, while others have used them to create school surveys and spark new thoughts about their future careers. Taubman says the goal is also to give students agency over AI so they can embrace technological changes and remain competitive in the workforce.

    “One of the key things for young people right now is to make sure they understand that this technology is not inevitable,” Taubman told Chalkbeat last month. “People made this, people are making decisions about it, and there are pros and cons like with everything people make and we should be talking about this.”

    Students need to know the basics of AI, experts say

    As Generation Z, those born between 1997 and 2012, graduate high school and enter a workforce where AI is new, many are wondering how the technology will be used and to what extent.

    Nearly half of Gen Z students polled by The Walton Family Foundation and Gallup said they use AI weekly, according to the newly released survey exploring how youth view AI. (The Walton Family Foundation is a supporter of Chalkbeat. See our funders list here.) The same poll found that over 4 in 10 Gen Z students believe they will need to know AI in their future careers, and over half believe schools should be required to teach them how to use it.

    This school year, Newark Public Schools students began using Khan Academy’s AI chatbot tutor called Khanmigo, which the district launched as a pilot program last year. Some Newark teachers reported that the tutoring tool was helpful in the classroom, but the district has not released data on whether it helped raise student performance and test scores. The district in 2024 also launched its multimillion-dollar project to install AI cameras across school buildings in an attempt to keep students safe.

    But more than just using AI in school, students want to feel prepared to use it after graduating high school. Nearly 3 in 4 college students said their colleges or universities should be preparing them for AI in the workplace, according to a survey from Inside Higher Ed and College Pulse’s Student Voice series.

    Many of the challenges of using AI in education center on the type of learning approach used, accuracy, and building trust with the technology, said Nhon Ma, CEO of Numerade – an online learning assistant that uses AI and educators to help students learn STEM concepts. But that’s why it’s important to immerse students in AI to help them understand the ways it could be used and when to spot issues, Ma added.

    “We want to prepare our youth for this competitive world stage, especially on the technological front so they can build their own competence and confidence in their future paths. That could potentially lead towards higher earnings for them too,” Ma said.

    For Infante, the senior in Taubman’s class, AI has helped spark a love for computer science and deepened his understanding of coding. He used it to create an app that tracks personal milestones and goals and awards users with badges once they reach them. As an aspiring software developer, he feels he has an advantage over other students because he’s learning about AI in high school.

    Taubman also says it’s important for students to understand how quickly the technology is advancing, particularly for students like Infante who are looking toward a career in technology.

    “I think it’s really important to help young people grapple with how this is new, but unlike other big new things, the pace is very fast, and the implications for career are almost immediate in a lot of cases,” Taubman added.

    Students learn that human emotions are important as AI grows

    It’s also important to remember the limitations of AI, Taubman said, noting that students need the basic understanding of how AI works in order to question it, identify any mistakes, and use it accordingly in their careers.

    “I don’t want students to lose out on an internship or job because someone else knows how to use AI better than they do, but what I really want is for students to get the internship or the job because they’re skillful with AI,” Taubman said.

    Through Taubman’s class, students are also identifying how AI increases the demand for skills that require human emotion, such as empathy and ethics.

    Daniel Akinyele, a 17-year-old senior, said he was interested in a career in industrial and organizational psychology, which focuses on human behavior in the workplace.

    During Taubman’s class, he used a custom AI tool on his laptop to explore different scenarios where he could use AI in his career. Many involved talking to someone about their feelings or listening to vocal cues that might indicate a person is sad or angry. Ultimately, psychology is a career about human connection and “that’s where I come into play,” Akinyele said.

    “I’m human, so I would understand how people are feeling, like the emotion that AI doesn’t see in people’s faces, I would see it and understand it,” Akinyele added.

    Falana, the aspiring real estate attorney, also used the custom AI tool to consider how much she should rely on AI when writing legal documents. Similar to writing essays in school, Falana said, professionals should use their original writing in their work, but AI could serve as a launching pad.

    “I feel like the legal field should definitely put regulations on AI use, like we shouldn’t be able to draw up our entire case using AI,” Falana said.

    During Taubman’s class, students also discussed fake images and videos created by AI. Infante, who wants to be a software developer, added that he plans to use AI regularly on the job but believes it should also be regulated to limit disinformation online.

    Taubman says it’s important for students to have a healthy level of skepticism when it comes to new technologies. He encourages students to think about how AI generates images, the larger questions around copyright infringement, and how the models are trained.

    “We really want them to feel like they have agency in this world, both their capacity to use these systems,” Taubman said, “but also to ask these broader questions about how they were designed.”

    Chalkbeat is a nonprofit news site covering educational change in public schools.


    Source link

  • Experts Weigh In on “Everyone” Cheating in College


    Is something in the water—or, more appropriately, in the algorithm? Cheating—while nothing new, even in the age of generative artificial intelligence—seems to be having a moment, from the New York magazine article about “everyone” ChatGPTing their way through college to Columbia University suspending a student who created an AI tool to cheat on “everything” and viral faculty social media posts like this one: “I just failed a student for submitting an AI-written research paper, and she sent me an obviously AI-written email apologizing, asking if there is anything she can do to improve her grade. We are through the looking-glass, folks.”

    It’s impossible to get a true read on the situation by virality alone, as the zeitgeist is self-amplifying. Case in point: The suspended Columbia student, Chungin “Roy” Lee, is a main character in the New York magazine piece. Student self-reports of AI use may also be unreliable: According to Educause’s recent Students and Technology Report, some 43 percent of students surveyed said they do not use AI in their coursework; 5 percent said they use AI to generate material that they edit before submitting, and just 1 percent said they submit generated material without editing it.

    There are certainly students who do not use generative AI and students who question faculty use of AI—and myriad ways that students can use generative AI to support their learning and not cheat. But the student data paints a different picture than the one presidents, provosts, deans and other senior leaders painted in a recent survey by the American Association of Colleges and Universities and Elon University: Some 59 percent said cheating has increased since generative AI tools have become widely available, with 21 percent noting a significant increase—and 54 percent do not think their institution’s faculty are effective in recognizing generative AI–created content.

    In Inside Higher Ed’s 2025 Survey of Campus Chief Technology/Information Officers, released earlier this month, no CTO said that generative AI has proven to be an extreme risk to academic integrity at their institution. But most—three in four—said that it has proven to be a moderate (59 percent) or significant (15 percent) risk. This is the first time the annual survey with Hanover Research asked how concerns about academic integrity have actually borne out: Last year, six in 10 CTOs expressed some degree of concern about the risk generative AI posed to academic integrity.

    Stephen Cicirelli, the lecturer of English at Saint Peter’s University whose “looking glass” post was liked 156,000 times in 24 hours last week, told Inside Higher Ed that cheating has “definitely” gotten more pervasive within the last semester. But whether it’s suddenly gotten worse or has been steadily growing since large language models were introduced to the masses in late 2022, one thing is clear: AI-assisted cheating is a problem, and it won’t get better on its own.

    So what can institutions do about it? Drawing on some additional insights from the CTO survey and advice from other experts, we’ve compiled a list of suggestions below. The expert insights, in particular, are varied. But a unifying theme is that cheating in the age of generative AI is as much a problem requiring intervention as it is a mirror—one reflecting larger challenges and opportunities within higher education.

    (Note: AI detection tools did not make this particular list. Even though they have fans among the faculty, who tend to point out that some tools are more accurate than others, such tools remain polarizing and not entirely foolproof. Similarly, banning generative AI in the classroom did not make the list, though this may still be a widespread practice: 52 percent of students in the Educause survey said that most or all of their instructors prohibit the use of AI.)

    Academic Integrity for Students

    The American Association of Colleges and Universities and Elon University this month released the 2025 Student Guide to Artificial Intelligence under a Creative Commons license. The guide covers AI ethics, academic integrity and AI, career plans for the AI age, and an AI toolbox. It encourages students to use AI responsibly, critically assess its influence and join conversations about its future. The guide’s seven core principles are:

    1. Know and follow your college’s rules
    2. Learn about AI
    3. Do the right thing
    4. Think beyond your major
    5. Commit to lifelong learning
    6. Prioritize privacy and security
    7. Cultivate your human abilities

    Connie Ledoux Book, president of Elon, told Inside Higher Ed that the university sought to make ethics a central part of the student guide, with campus AI integration discussions revealing student support for “open and transparent dialogue about the use of AI.” Students “also bear a great deal of responsibility,” she said. They “told us they don’t like it when their peers use AI to gain unfair advantages on assignments. They want faculty to be crystal clear in their syllabi about when and how AI tools can be used.”

    Now is a “defining moment for higher education leadership—not only to respond to AI, but to shape a future where academic integrity and technological innovation go hand in hand,” Book added. “Institutions must lead with clarity, consistency and care to prepare students for a world where ethical AI use is a professional expectation, not just a classroom rule.”

    Mirror Logic

    Lead from the top on AI. In Inside Higher Ed’s recent survey, just 11 percent of CTOs said their institution has a comprehensive AI strategy, and roughly one in three CTOs (35 percent) at least somewhat agreed that their institution is handling the rise of AI adeptly. The sample size for the survey is 108 CTOs—relatively small—but those who said their institution is handling the rise of AI adeptly were more likely than the group over all to say that senior leaders at their institution are engaged in AI discussions and that effective channels exist between IT and academic affairs for communication on AI policy and other issues (both 92 percent).

    Additionally, CTOs who said that generative AI had proven to be a low to nonexistent risk to academic integrity were more likely to report having some kind of institutionwide policy or policies governing the use of AI than were CTOs who reported a moderate or significant risk (81 percent versus 64 percent, respectively). Leading on AI can mean granting students institutional access to AI tools, the rollout of which often includes larger AI literacy efforts.

    (Re)define cheating. Lee Rainie, director of the Imagining the Digital Future Center at Elon, said, “The first thing to tackle is the very definition of cheating itself. What constitutes legitimate use of AI and what is out of bounds?” In the AAC&U and Elon survey that Rainie co-led, for example, “there was strong evidence that the definitional issues are not entirely resolved,” even among top academic administrators. Leaders didn’t always agree whether hypothetical scenarios described appropriate uses of AI or not: For one example—in which a student used AI to generate a detailed outline for a paper and then used the outline to write the paper—“the verdict was completely split,” Rainie said. Clearly, it’s “a perfect recipe for confusion and miscommunication.”

    Rainie’s additional action items, with implications for all areas of the institution:

    1. Create clear guidelines for appropriate and inappropriate use of AI throughout the university.
    2. Include in the academic code of conduct a “broad statement about the institution’s general position on AI and its place in teaching and learning,” allowing for a “spectrum” of faculty positions on AI.
    3. Promote faculty and student clarity as to the “rules of the road in assignments.”
    4. Establish “protocols of proof” that students can use to demonstrate they did the work.

    Rainie suggested that CTOs, in particular, might be useful regarding this last point, as such proof could include watermarking content, creating NFTs and more.

    Put it in the syllabus! (And in the institutional DNA.) Melik Khoury, president and CEO of Unity Environmental University in Maine, who’s publicly shared his thoughts on “leadership in an intelligent era of AI,” including how he uses generative AI, told Inside Higher Ed that “AI is not cheating. What is cheating is our unwillingness to rethink outdated assessment models while expecting students to operate in a completely transformed world. We are just beginning to tackle that ourselves, and it will take time. But at least we are starting from a position of ‘We need to adapt as an institution,’ and we are hiring learning designers to help our subject matter experts adapt to the future of learning.”

    As for students, Khoury said the university has been explicit “about what AI is capable of and what it doesn’t do as well or as reliably” and encourages them to recognize their “agency and responsibility.” Here’s an excerpt of language that Khoury said appears in every course syllabus:

    • “You are accountable for ensuring the accuracy of factual statements and citations produced by generative AI. Therefore, you should review and verify all such information prior to submitting any assignment.
    • “Remember that many assignments require you to use in-text citations to acknowledge the origin of ideas. It is your responsibility to include these citations and to verify their source and appropriateness.
    • “You are accountable for ensuring that all work submitted is free from plagiarism, including content generated with AI assistance.
    • “Do not list generative AI as a co-author of your work. You alone are responsible.”

    Additional policy language recommends that students:

    • Acknowledge use of generative AI for course submissions.
    • Disclose the full extent of how and where they used generative AI in the assignment.
    • Retain a complete transcript of generative AI usage (including source and date stamp).

    “We assume that students will use AI. We suggest constructive ways they might use it for certain tasks,” Khoury said. “But, significantly, we design tasks that cannot be satisfactorily completed without student engagement beyond producing a response or [just] finding the right answer—something that AI can do for them very easily.”

    “In tandem with a larger cultural shift around our ideas about education, we need major changes to the way we do college.”

    —Emily Pitts Donahoe, associate director of instructional support in the Center for Excellence in Teaching and Learning and lecturer of writing and rhetoric at the University of Mississippi

    Design courses with and for AI. Keith Quesenberry, professor of marketing at Messiah University in Pennsylvania, said he thinks less about cheating, which can create an “adversarial me-versus-them dynamic,” and more about pedagogy. This has meant wrestling with a common criticism of higher education—that it’s not preparing students for the world of work in the age of AI—and the reality that no one’s quite sure what that future will look like. Quesenberry said he ended up spending all of last summer trying to figure out how “a marketer should and shouldn’t use AI,” creating and testing frameworks, ultimately vetting his own courses’ assignments: “I added detailed instructions for how and how not to use AI specifically for that assignment’s tasks or requirements. I also explain why, such as considering whether marketing materials can be copyrighted for your company or client. I give them guidance on how to cite their AI use.” He also created a specialized chat bot to which students can upload approved resources to act as an AI tutor.

    Quesenberry also talks to students about learning with AI “from the perspective of obtaining a job.” That is, students need a foundation of disciplinary knowledge on which to create AI prompts and judge output. And they can’t rely on generative AI to speak or think for them during interviews, networking and with clients.

    There are “a lot of professors quietly working very hard to integrate AI into their courses and programs that benefit their disciplines and students,” he adds. One thing that would help them, in Quesenberry’s view? Faculty institutional access to the most advanced AI tools.

    Give faculty time and training. Tricia Bertram Gallant, director of the academic integrity office and Triton Testing Center at the University of California, San Diego, and co-author of the new book The Opposite of Cheating: Teaching for Integrity in the Age of AI (University of Oklahoma Press), said that cheating is part of human nature—and that faculty need time, training and support to “design educational environments that make cheating the exception and integrity the norm” in this new era of generative AI.

    Faculty “cannot be expected to rebuild the plane while flying it,” she said. “They need course release time to redesign that same course, or they need a summer stipend. They also need the help of those trained in pedagogy, assessment design and instructional design, as most faculty did not receive that training while completing their Ph.D.s.” Gallant also floated the idea of AI fellows, or disciplinary faculty peers who are trained on how to use generative AI in the classroom and then to “share, coach and mentor their peers.”

    Students, meanwhile, need training in AI literacy, “which includes how to determine if they’re using it ethically or unethically. Students are confused, and they’re also facing immense temptations and opportunities to cognitively offload to these tools,” Gallant added.

    Teach first-year students about AI literacy. Chris Ostro, an assistant teaching professor and instructional designer focused on AI at the University of Colorado at Boulder, offers professional development on his “mosaic approach” to writing in the classroom—which includes having students sign a standardized disclosure form about how and where they’ve used AI in their assignments. He told Inside Higher Ed that he’s redesigned his own first-year writing course to address AI literacy, but he is concerned about students across higher education who may never get such explicit instruction. For that reason, he thinks there should be mandatory first-year classes for all students about AI and ethics. “This could also serve as a level-setting opportunity,” he said, referring to “tech gaps,” or the effects of the larger digital divide on incoming students.

    Regarding student readiness, Ostro also said that most of the “unethical” AI use by students is “a form of self-treatment for the huge and pervasive learning deficits many students have from the pandemic.” One student he recently flagged for possible cheating, for example, had largely written an essay on her own but then ran it through a large language model, prompting it to make the paper more polished. This kind of use arguably reflects some students’ lack of confidence in their writing skills, not an outright desire to offload the difficult and necessary work of writing to think critically.

    Think about grading (and why students cheat in the first place). Emily Pitts Donahoe, associate director of instructional support in the Center for Excellence in Teaching and Learning and lecturer of writing and rhetoric at the University of Mississippi, co-wrote an essay two years ago with two students about why students cheat. They said much of it came down to an overemphasis on grades: “Students are more likely to engage in academic dishonesty when their focus, or the perceived focus of the class, is on grading.” The piece proposed the following solutions, inspired by the larger trend of ungrading:

    1. Allow students to reattempt or revise their work.
    2. Refocus on formative feedback to improve rather than summative feedback to evaluate.
    3. Incorporate self-assessment.

    Donahoe said last week, “I stand by every claim that we make in the 2023 piece—and it all feels heightened two years later.” The problems with AI misuse “have become more acute, and between this and the larger sociopolitical climate, instructors are reaching unsustainable levels of burnout. The actions we recommend at the end of the piece remain good starting points, but they are by no means solutions to the big, complex problem we’re facing.”

    Framing cheating as a structural issue, Donahoe said students have been “conditioned to see education as a transaction, a series of tokens to be exchanged for a credential, which can then be exchanged for a high-paying job—in an economy where such jobs are harder and harder to come by.” And it’s hard to fault students for that view, she continued, as they receive little messaging to the contrary.

    Like the problem, the solution set is structural, Donahoe explained: “In tandem with a larger cultural shift around our ideas about education, we need major changes to the way we do college. Smaller class sizes in which students and teachers can form real relationships; more time, training and support for instructors; fundamental changes to how we grade and how we think about grades; more public funding for education so that we can make these things happen.”

    With none of this apparently forthcoming, faculty can at least help reorient students’ ideas about school and try to “harness their motivation to learn.”


  • Lawyers in New Jersey School Segregation Case Want Appellate Court to Weigh in – The 74


    Attorneys representing a group of New Jersey parents and activist groups are asking a state appellate court to weigh in on a case that could reshape the state’s public education system.

    At the center of the fight is whether New Jersey schools are unconstitutionally segregated by race and socioeconomic status. A lower court judge in October 2023 acknowledged the state’s public schools are segregated by race and that the state must act, but also found that the plaintiffs had failed to prove the entire system is segregated across all its districts.

    The parents’ attorneys filed a motion last week with the state’s appellate division asking it to hear the case.

    “It is imperative that no more students be deprived of these rights by the trial court’s avoidance of the straightforward conclusion compelled by the facts and the law in this case — that the state defendants, who are legally obligated to take action to desegregate public schools regardless of the reasons for that segregation, have acted unconstitutionally by failing to do so,” the attorneys wrote in the filing.

    Gov. Phil Murphy and the state Department of Education have until April 28 to respond to the plaintiffs’ new filing. A spokesman for the Murphy administration declined to comment.

    News of the new filing was first reported by Chalkbeat Newark.

    The case dates to 2018, when the Latino Action Network, the NAACP New Jersey State Conference, and several other families and groups sued the state alleging New Jersey failed to address de facto segregation in public schools. The plaintiffs cited data showing that nearly half of all Black and Latino students in New Jersey attend schools that are more than 90% non-white, in districts that are often just blocks from predominantly white districts.

    In New Jersey, students typically attend schools in the municipality where they live. Plaintiffs argued that long-standing housing policies that produced segregated residential neighborhoods also produced segregated schools. New Jersey is the seventh-most segregated state for Black and Latino students, the plaintiffs say.

    In October 2023, after Superior Court Judge Robert Lougy issued his ruling acknowledging racial segregation in New Jersey schools but finding it was not widespread, both sides entered mediation talks in hopes of resolving the dispute more quickly than continued litigation would.

    Attorneys for the parties said in February that it’s unlikely continuing the talks would “be constructive.”

    The plaintiffs’ attorneys say the lower court’s October ruling should be reversed. They want a judge to review what they say are six errors in the 2023 order, such as the fact that Lougy did not identify a disputed fact.

    “Rather than reach the only logical conclusion that followed — that the state defendants violated plaintiffs’ constitutional rights — the trial court left the question of liability for another day,” the filing reads.

    If the appellate court denies the motion, the case would return to the trial court, or could be appealed to the state Supreme Court.

    New Jersey Monitor is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. New Jersey Monitor maintains editorial independence. Contact Editor Terrence T. McDonald for questions: [email protected].




  • Three States Weigh Changes to Presidential Search Processes


    Three states are considering changes to how public universities hire presidents via legislation to provide more transparency in executive searches, or, in the case of Utah, less.

    Florida, Utah and Washington are all weighing changes, driven by state legislators responding to recent presidential searches. In Florida, the move comes after the state, known for its broad open-records laws, revised presidential search processes in recent years in ways that narrowed transparency, a shift followed by an influx of hires connected to conservative politics.

    Evergreen State legislators have proposed the changes following closed-door presidential searches at both the University of Washington and Washington State University, which they argue lacked adequate transparency because finalists were not named during the process. (However, they’ve backed off an initial bill to require universities to name finalists.)

    In Utah, lawmakers have crafted legislation to limit information on presidential searches. Current state law requires public universities to release the names of three to five finalists for presidential positions, but under the proposal only a single finalist would be unveiled at the end of the search.

    The legislative proposals reflect a broader debate over how much transparency should be built into presidential searches and the politics of hiring processes.

    Florida’s Proposed Reversal

    The Florida Legislature passed a bill in 2022 that allowed institutions to keep the identity of applicants secret until a university identified three finalists. The change marked an about-face from prior practices, in which lists of applicants were released as part of presidential searches.

    Although the law passed in 2022 charged universities with naming three finalists, in practice institutions have often released the name of just one applicant at the end of the process. And since the passage of that legislation, Florida has tapped numerous Republican lawmakers to lead public universities, including former Nebraska U.S. senator Ben Sasse at the University of Florida, who stepped down after less than 18 months amid questions about his spending.

    Since Sasse’s exit, critics have alleged UF’s board missed or ignored multiple red flags.

    Governing boards have hired numerous other Republican former lawmakers to lead institutions since 2022. Recent hires include Adam Hasner at Florida Atlantic University, Jeanette Nuñez at Florida International University (who stepped down as lieutenant governor to take the job), Richard Corcoran at New College of Florida, Fred Hawkins at South Florida State College, Mel Ponder at Northwest Florida State College and Torey Alston at Broward College.

    In a statement on why she filed the bill to open search processes, Florida representative Michelle Salzman, a Republican, wrote that the legislation would ensure “our higher education institutions are governed in a transparent and ethical manner, with the best interests of our students and taxpayers as the guiding principle.”

    To Judith Wilde, a research professor at George Mason University who studies presidential searches and contracts, the bill seems like backlash to Republican governor Ron DeSantis, who many critics allege has used a heavy hand in installing GOP officials as college presidents.

    “They are definitely moving away from the more secretive and controlled processes that they’ve been under for the last few years. I’d say that is because so many people now are tired and upset with DeSantis putting in place his personal choices and how badly that has worked out,” Wilde said.

    Last year, the Florida Board of Governors also gave itself more authority over presidential searches, adopting a policy that requires its chair to approve a list of finalists before candidates are submitted to individual governing boards. That would go away under proposed legislation.

    The Florida Board of Governors did not respond to an inquiry about its position on the bill.

    Washington Backtracks

    Washington lawmakers initially proposed changes to how presidential searches are conducted with a bill that would require public universities to name “up to four priority candidates” for the job. But lawmakers backed off that idea, submitting a substitute bill that would expand voting rights for students and faculty on presidential searches, but not require finalists to be named.

    University of Washington officials had expressed concern about the initial proposed legislation, arguing that UW could lose highly qualified candidates if they can’t keep names confidential.

    “It’s important to understand that sitting presidents or chancellors participate in these processes at considerable professional risk. There may be reputational damage up to termination, even if their candidacy is unsuccessful, should their present employers learn that they are pursuing other employment,” Blaine Tamaki, chair of UW’s Board of Regents, wrote in a statement.

    A UW spokesperson also pointed to fallout in 2020 in the University of Alaska system when then-president Jim Johnsen stepped down after he emerged as the sole finalist to lead the University of Wisconsin system. Johnsen withdrew from the Wisconsin search after criticism that the process lacked transparency. He then resigned from the Alaska presidency mere weeks later.

    (Johnsen’s tenure at Alaska was heavily scrutinized while he was there, however, and many students and faculty members expressed relief that he planned to leave for another job, which Wilde suggested was more of a factor in his abrupt exit than his candidacy for the Wisconsin position.)

    Washington State University had also expressed concerns about the initial bill.

    “Specifically, we were apprehensive about losing strong candidates who would be unwilling to make their names public before a selection was announced,” a Washington State spokesperson wrote to Inside Higher Ed by email. “There is a very real concern for some candidates that they would lose their effectiveness at their home institutions if it became public that they were exploring employment opportunities elsewhere. This is particularly true for sitting presidents.”

    Washington State has expressed support for the new bill.

    Opacity in Utah?

    State Senator Chris Wilson, the Utah Republican who sponsored the bill to overhaul presidential searches, has argued the law needs to be changed so public universities don’t lose quality candidates who are unwilling to go through a process that exposes their identity.

    The Utah legislation seems at least partly inspired by Elizabeth Cantwell exiting the presidency at Utah State University last month to take the top job at Washington State.

    Wilson has pointed to Cantwell departing for Washington State as an example of why the bill is needed. Last month, in a House Education Committee meeting, Wilson stressed the need for confidentiality in searches and argued, “There’s no way the president of Utah State University would have applied for the presidency of Washington State if it wasn’t a private process.”

    Utah commissioner of higher education Geoff Landward has cast doubt on the notion that public universities in the state have lost applicants due to current processes.

    “I can confidently say that we have not had a single search wherein we were talking to very high-quality candidates who essentially said that they would be interested and willing to apply, were it not for the fact that the final three candidates would have to be public because that would put their current employment in peril unnecessarily,” Landward told lawmakers in February.

    But, he added, “This is a question of who did we not get to consider?”

    Wilde is skeptical of Utah’s proposal. She points to an example at Montana State University in 2019 when President Waded Cruzado informed the board that she was being recruited for another job. In response, the board gave Cruzado a $150,000 pay raise to entice her to stay—and it worked.

    “Just because they’re in the job market doesn’t necessarily mean that they want to leave the university,” Wilde said. “And if they’re doing a good job, make the effort to keep them.”
