Tag: Career

  • Teaching Alongside Generative AI for Student Success

    A growing share of colleges and universities are embedding artificial intelligence tools and AI literacy into the curriculum with the intent of aiding student success. A 2025 Inside Higher Ed survey of college provosts found that nearly 30 percent of respondents have reviewed curriculum to ensure that it will prepare students for AI in the workplace, and an additional 63 percent say they have plans to review curriculum for this purpose.

    Touro University in New York is one institution that’s incentivizing faculty to engage with AI tools, including embedding simulations into academic programs.

    In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with Shlomo Argamon, associate provost for artificial intelligence at Touro, to discuss the university’s policy for AI in the classroom, the need for faculty and staff development around AI, and the risks of gamifying education.

    An edited version of the podcast appears below.

    Q: How are you all at Touro thinking about AI? Where is AI integrated into your campus?

    A: When we talk about the campus of Touro, we actually have 18 or 19 different campuses around the country and a couple even internationally. So we’re a very large and very diverse organization, which does affect how we think about AI and about the governance and development of our programs.

    That said, we think about AI primarily as a new kind of interactive technology, best seen as assistive to human endeavors. We want to teach our students both how to use AI effectively in what they do and how to understand and properly mitigate the risks of using AI improperly, but above all, to always think about AI in a human context.

    When we think about integrating AI for projects, initiatives, organizations, what have you, we need to first think about the human processes that are going to be supported by AI and then how AI can best support those processes while mitigating the inevitable risks. That’s really our guiding philosophy, and that’s true in all the ways we’re teaching students about AI, whether we’re teaching students specifically, deeply technical [subjects], preparing them for AI-centric careers or preparing them to use AI in whatever other careers they may pursue.

    Q: When it comes to teaching about AI, what is the commitment you all make to students? Is it something you see as a competency that all students need to gain or something that is decided by the faculty?

    A: We are implementing a combination—a top-down and a bottom-up approach.

    One thing that is very clear is that every discipline, and in fact every course and faculty member, will have different needs and different constraints, as well as AI competencies that are relevant to that particular field and that particular topic. We also believe that nobody yet knows the right way to teach about AI, to implement AI or to develop AI competencies in students.

    We need to encourage and incentivize all our faculty to be as creative as possible in thinking about the right ways to teach their students about AI, how to use it, how not to use it, etc.

    So No. 1 is, we’re encouraging all of our faculty at all levels to be thinking and developing their own ideas about how to do this. That said, we also believe very firmly that all students, all of our graduates, need to have certain fundamental competencies in the area of AI. And the way that we’re doing this is by integrating AI throughout our general education curriculum for undergraduates.

    Ultimately, we believe that most, if not all, of our general education courses will include some sort of module about AI, teaching students the AI competencies relevant to the particular topics they’re learning, whether it’s writing, reading skills, presentations, math, science or history, with the different kinds of cognition and skills that you learn in different fields. The aim is to identify the AI competencies relevant to each and to have students learn those.

    So No. 1, they’re not learning it all at once. And also, very importantly, it’s not isolated from the topics and disciplines they’re learning but integrated within them, so that they see it as … part of writing is knowing how to use AI in writing and also knowing how not to. Part of learning history is knowing how to use AI for historical research and reasoning and knowing how not to use it, and so on. So we’re integrating that within our general education curriculum.

    Beyond that, we also have specific courses in various AI skills, both at the undergraduate [and] at the graduate level, many of which are designed for nontechnical students to help them learn the skills that they need.

    Q: Because Touro is such a large university and it’s got graduate programs, online programs, undergraduate programs, I was really surprised that there is an institutional AI policy.

    A lot of colleges and universities have really grappled with, how do we institutionalize our approach to AI? And some leaders have kind of opted out of the conversation and said, “We’re going to leave it to the faculty.” I wonder if we could talk about the AI policy development and what role you played in that process, and how that’s the overarching, guiding vision when it comes to thinking about students using and engaging with AI?

    A: That’s a question we have struggled with; as you mentioned, all academic leaders struggle with this very question.

    Our approach is to create policy at the institutional level that provides only the necessary guardrails and guidance, which then enables each of our schools, departments and individual faculty members to implement the correct solutions for their particular areas. Within this guidance and these guardrails, it’s done safely, and we know that it’s going, over all, in a positive and, to some extent, institutionally consistent direction.

    In addition, one of the main functions of my office is to provide support to the schools, departments and especially the faculty members to make this transition and to develop what they need.

    It’s an enormous burden on faculty members, not just to add AI content to their classes, if they do so, but to shift the way that we teach, the way that we do assessments, even the way that we relate to our students.

    It’s a process to develop resources and ways of doing this. I and the people who work in our office have regular office hours to talk to faculty and work with them. One of the most important things that we do, and we spend a lot of time and effort on this, is training for our faculty and staff on AI: on using AI, on teaching about AI, on the risks of AI, on mitigating those risks, on how to think about AI, all of these things. It all comes down to our faculty and staff: they are the university, they’re the ones who are going to make all of this a success, and it’s up to us to give them the tools that they need.

    I would say that while on many questions there are no right or wrong answers, only different perspectives and different opinions, I think there is one right answer to “What does a university need to do institutionally to ensure success at dealing with the challenge of AI?” It’s to support and train the faculty and staff, who are the ones who will make whatever the university does a success or a failure.

    Q: Speaking of faculty, there was a university faculty innovation grant program that sponsored faculty to take on projects using AI in the classroom. Can you talk a little bit about that and how that’s been working on campus?

    A: We have an external donor who donated funds so that we were able to award nearly 100 faculty innovation challenge grants for developing methods of integrating AI into teaching.

    Faculty members applied and did development work over the summer, and they’re now implementing their projects in their fall courses. We’re going through the initial set of faculty reports on those projects, and we have projects from all over the university, in all different disciplines, taking many different approaches to using AI.

    At the beginning of next spring, we’re going to have a conference workshop to bring everybody together so we can share all of the different ways that people try to do this. Some experiments, I’m sure, will not have worked, but that’s also incredibly important information, because what we’re seeking to do [is], we’re seeking to help our students, but we’re also seeking to learn what works, what doesn’t work and how to move forward.

    Again, this goes back to our philosophy that we want to unleash the expertise, intelligence and creativity of our faculty—not top down, to say, “We have an AI initiative. This is what you need to be doing”—but instead, “Here’s something new. We’ll give you the tools, we’ll give you the support. We’ll give you the funding to make something happen, make interesting things happen, make good things for your students happen, and then let’s talk about it and see how it worked, and keep learning and keep growing.”

    Q: I was looking at the list of faculty innovation grants, and I saw that there were a few other simulations. There was one for educators helping with classroom simulations. There was one with patient interactions for medical training. It seems like there’s a lot of different AI simulations happening in different courses. I wonder if we can talk about the use of AI for experiential learning and why that’s such a benefit to students.

    A: Ever since there’s been education, there’s been this kind of distinction between book learning and real-world learning, experiential learning and so forth. There have always been those who have questioned the value of a college education because you’re just learning what’s in the books and you don’t really know how things really work, and that criticism has some validity.

    But what we’re trying to do, and what AI allows us to do, is give us and our students more and more varied experiences of the kinds of things they’re trying to learn, let them practice what they’re doing and then get feedback on a much broader level than we could before. Certainly, whenever you had a course in, say, public speaking, students would get up, do some public speaking, get feedback and proceed. Now with AI, students can practice in their dorm rooms over and over again and get direct feedback. That feedback and those experiences can then be made available to the faculty member, who can give the students more direct, more human, concentrated or expert feedback on their performance, and it just scales.

    In the medical field, this is hugely important. There’s a long-standing institution in medical education called the standardized patient. Traditionally it’s a human actor who learns to act as a patient; they’re given a profile of what disorders they’re supposed to have and how they’re supposed to act, and then students can practice, whether it’s diagnostic skills or questions of patient care and bedside manner, and get expert feedback.

    We now have, to a large extent, AI systems that can do this, whether it’s an interactive text-based simulation or a voice-based one. We also have robotic mannequins that students can work with, AI-powered, with AI handling the conversation. Students can do physical exams on mannequins that simulate different kinds of conditions, and again, this makes it possible to really scale up this kind of experiential learning. Another kind of AI that has been found useful in a number of our programs, particularly in our business program, is a system that watches people give presentations and gives real-time feedback, and that works quite well.

    Q: These are interesting initiatives, because they cut out the middleman of needing a third party, or maybe a peer, to help the student practice the experience. But in some ways, does it gamify things too much? Is it too much like a video game for students? How have you found that these are realistic enough to prepare students?

    A: That is indeed a risk, and one that we need to watch. As with nearly everything we’re doing, there are risks that need to be managed and cannot be solved once and for all. We need to be constantly alert, watching for these risks and ensuring that we don’t overstep one boundary or another.

    When you talk about the gamification, or the video game nature of this, the artificial nature of it, there are really two pieces to it. One piece is the fact that there is no mannequin that exists, at least today, that can really simulate what it’s like to examine a human being and how the human being might react.

    AI chatbots, as good as they are, will not, now or in the foreseeable future, be able to simulate human interactions entirely accurately. So there’s always going to be a gap. What we need to do is what we do with other kinds of education: you read a book, and the book is not going to be perfect; your understanding of the book is not going to be perfect. There has to be an iterative process of learning. We have to have more realistic simulations and different kinds of simulations, so that students can, in a sense, mentally triangulate their different experiences to learn to do things better. That’s one piece of it.

    The other piece, when you say gamification, there’s the risk that it turns into “I’m trying to do something to stimulate getting the reward or the response here or there.” And there’s a small but, I think, growing research literature on gamification of education, where if you gamify a little bit too much, it becomes more like a slot machine, and you’re learning to maneuver the machine to give you the dopamine hits or whatever, rather than really learning the content of what you’re doing. The only solution to that is for us to always be aware of what we’re doing and how it’s affecting our students and to adjust what we’re doing to avoid this risk.

    This goes back to one of the key points: Our whole philosophy of this is to always look at the technology and the tools, whether AI or anything else, as embedded within a larger human context. The key here is understanding when we implement some educational experience for students, whether it involves AI or technology or not, it’s always creating incentives for the students to behave in a certain way. What are those incentives, and are those incentives aligned with the educational objectives that we have for the students? That’s the question that we always need to be asking ourselves and also observing, because with AI, we don’t entirely know what those incentives are until we see what happens. So we’re constantly learning and trying to figure this out as we go.

    If I could just comment on that peer-to-peer simulation: Medical students poking each other, or social work students interviewing each other for an exam, has another important learning component, because the student who is being operated upon is learning what it’s like to be in the other person’s shoes, what it’s like to be the patient, to be the object of investigation by the professional. Empathy is incredibly important, and understanding what it’s like for patients helps students, if done properly, to do the work better and to have the appropriate sort of relationship with their patients.

    Q: You also mentioned these simulations give the faculty insight into how the student is performing. I wonder if we can talk about that; how is that real-time feedback helpful, not only for the student but for the professor?

    A: One thing that needs to be said is that it’s often very difficult to understand where all of your students are in the learning process and what, specifically, they need. If we so choose, we can be deluged by data that may confuse more than enlighten.

    That said, the data that come out of these systems can definitely be quite useful. One example: there are writing assistance programs, Grammarly and its ilk, that can provide the exact provenance of a writing assignment to the faculty member, showing exactly how something was composed. Which parts did the student write first? Which parts second? Maybe they outlined it, then revised this and changed that, then cut and pasted from somewhere else and edited.

    All of that gives the faculty member much more detailed information about the student’s process, which can enable much more precise and useful feedback on the student’s own learning. What do they perhaps need to be doing differently? What are they doing well? And so forth. You’re not just looking at a final paper, or even a couple of drafts, and trying to infer what the student was doing so that you can give them feedback; you can actually see that, more or less in real time.

    That’s the sort of thing where the data can be very useful. And again, I apologize if I sound like a broken record: it all goes back to the human aspect of this, to using data that helps the faculty member see the individual student, with their own individual ways of thinking, behaving and incorporating knowledge, and to relate to them more as an individual.

    Briefly and parenthetically, one of the great hopes we have for integrating AI into the educational process is that AI can help take away many of the bureaucratic and other burdens placed on faculty, freeing them and enabling them in different ways to enhance their human relationship with their students, so that we can get back to the core of education, which really, I believe, is the transfer of knowledge and understanding through a human relationship between teacher and student.

    It’s not what might be termed the “jug metaphor” for education, where I, the faculty member, have a jug full of knowledge, and I’m going to pour it into your brain, but rather, I’m going to develop a relationship with you, and through this relationship, you are going to be transformed, in some sense.

    Q: This could be a whole other podcast topic, but I want to touch on this briefly. There is a risk sometimes when students are using AI-powered tools and faculty are using AI-powered tools that it is the AI engaging with itself and not necessarily the faculty with the students. When you talk about allowing AI to lift administrative burdens or ensure that faculty can connect with students, how can we make sure that it’s not robot to robot but really person to person?

    A: That’s a huge and a very important topic, and one which I wish that I had a straightforward and direct and simple answer for. This is one of those risks that has to be mitigated and managed actively and continually.

    One of the things that we emphasize in all our trainings for faculty and staff, and in all our educational modules for students about AI, is the importance of the AI assisting you, rather than you assisting the AI. If the AI produces some content for you, it has to be within a process in which you’re not just reviewing it for correctness but producing the content yourself, with the AI helping you to do so in some sense.

    That’s a little bit vague, because it plays out differently in different situations. For faculty members producing a syllabus or using AI to produce other content for their courses, it means making sure that it’s content they are producing with AI. The same goes for students using AI.

    For example, our institutional AI policy on academic honesty and integrity is, I believe, groundbreaking. Our default policy, for courses that don’t have a specific policy regarding the use of AI in that course (by next spring, all courses must have one), is that students are allowed to use AI for a very wide variety of tasks on their assignments.

    You can’t use AI to simply do your assignment for you; that is forbidden. The key is that the work has to be the work of the student, but AI can be used to assist. We’ve established this as the default policy, and faculty, department chairs and deans have wide latitude to define more or less restrictive policies with specific carve-outs, simply because every field is different and the needs are different. But the default, and the basic attitude, is that AI is a tool: you need to learn to use it well and responsibly, whatever you do.

    Q: I wanted to talk about the future of AI at the university. Are there any new initiatives you should tell our listeners about? How are you all thinking about continuing to develop AI as a teaching and learning tool?

    A: It’s hard for me to talk about specific initiatives, because we believe that AI, within higher education particularly but in general as well, is fundamentally a start-up economy, in the sense that nobody, and I mean nobody, knows what to do with it, how to deal with it, how it works and how it doesn’t.

    Therefore, our attitude is that we want to run as many experiments as we can and try as many different things as we can: different ways of teaching students, different ways of using AI to teach, whether through simulations, content creation or some sort of AI teaching assistant working with faculty members, or faculty members coming up with very creative assignments that enable students to learn the subject matter more deeply, with AI assisting them to do very difficult tasks, perhaps, or tasks that require great creativity.

    The sky is the limit, and we want all of our faculty to experiment and develop. We’re seeking to create that within the institution, and Touro is a wonderful institution for it, because we already have the basic institutional culture for this: an entrepreneurial culture within the university. So the university as a whole is an entrepreneurial ecosystem for experimenting and developing ways of teaching about, with and through AI.

  • Writing Classes Are About Writing, Not AI-Aided Production

    I had more important things to do.

    The assignment was dumb and seemed pointless.

    I don’t care about this class.

    I had too much stuff to do and it was just easier to check something off the list.

    I had to work.

    I didn’t understand the assignment.

    Everyone else is using it and they’re doing fine.

    I was pretty sure [the LLM] would do a better job than me.

  • Northwestern, Cornell Still Working to Unfreeze Federal Funds

    Thanks to a series of settlements and court orders, some universities that had their grants frozen by the Trump administration earlier this year have seen that funding restored.

    But others are still trying to unfreeze the grants and learn more about why they were suspended in the first place.

    Since March, the Trump administration has said that it put nearly $6 billion on hold at nine universities. Three universities—Columbia, Penn and Brown—cut deals with the administration to restore the funding, while the University of California, Los Angeles, and Harvard got the money back via court orders. The fate of the remaining four freezes—at Duke, Cornell, Northwestern and Princeton Universities—remains uncertain.

    Princeton has seen about half of its frozen grants restored, President Christopher Eisgruber told the alumni magazine in late August. Roughly $200 million was put on hold initially.

    Eisgruber said Princeton never learned why the funds were frozen, beyond media reports that connected it to concerns over antisemitism on campus. A Princeton spokesperson confirmed the magazine’s report but declined to share more details about the status of the remaining grants.

    At Northwestern, the Trump administration reportedly froze about $790 million in early April, though officials said at the time they never received formal notification about why the funds were put on hold. Since then, Northwestern officials have said they are working to restore the grants—a process that apparently hasn’t gone smoothly.

    Northwestern University interim president Henry Bienen told The Daily Northwestern in an Oct. 17 interview that “a negotiation really requires two parties, at least, and at the present time, there’s not been anybody on the other end of the line.”

    As the freeze persists, Northwestern has said it will continue to support researchers’ “essential funding needs” at least through the end of the calendar year. Bienen told the student newspaper that supporting the research costs $30 million to $40 million a month.

    The university has laid off more than 400 employees and instituted other measures to cut costs, though officials said those moves were driven by more than just the funding freeze.

    Cornell University is also in talks with the administration to find a solution to the freeze. However, President Michael Kotlikoff recently shared new information about the impact of the freeze that calls into question the Trump administration’s figures.

    Trump officials told media outlets in April that they froze more than $1 billion at Cornell. But Kotlikoff said last week in his State of the University address that Cornell is actually facing about $250 million in canceled or unpaid research funds. (The university’s research expenditures totaled $1.6 billion in the 2023–24 academic year.)

    Like Northwestern and Princeton, Cornell hasn’t received a formal letter about the freeze, though media reports suggested that the administration froze the grants “because of concerns around antisemitism following pro-Palestinian activities on campus beginning in fall of 2023,” Kotlikoff said.

    Following news stories about the freeze, Kotlikoff said the university “started receiving stop-work orders ‘by direction of the White House’: halting research on everything from better tests for tick-borne diseases, to pediatric heart assist pumps, to ultrafast lasers for national defense, to AI optimization for blood transfusion delivery. At the same time, many other research grants, while not officially canceled, stopped being paid.” (About $74 million of the $250 million is in unpaid bills, he said.)

    Kotlikoff added that Cornell has been talking with the federal government for six months “to identify their concerns, provide evidence to address them, and return to a productive partnership.” In August, Bloomberg reported that the White House wanted to reach a $100 million settlement with Cornell.

    But Kotlikoff also criticized the administration for not using established legal processes to investigate potential civil rights violations, echoing a point experts have made for months.

    “I want to be clear that there are established procedures in place for the government to handle such concerns,” he said in his State of the University address. “Accusations of discrimination should be supported by, and adjudicated on the basis of, facts. This has not happened.”

    Kotlikoff, who was appointed president in March, made clear in his address to the Board of Trustees and university alumni that Cornell won’t agree to give up control of admissions or curricular decisions, among other things.

    “We will not agree to allow the government to dictate our institution’s policies, or how to enforce them,” he said. “And we will never abandon our commitment to be an institution where any person can find instruction in any study.”

    The administration has also said it froze about $108 million at Duke University, but neither Duke nor the National Institutes of Health responded to Inside Higher Ed’s request for an update.

  • Denied Emerita, Reid Named “Honorary Alum” at New College

    Amy Reid, a former professor of French at New College of Florida, was granted “honorary alumni” status by the New College Alumni Association Board of Directors in a unanimous vote nearly three weeks after she was denied emerita status by college president Richard Corcoran.

    “I was honored when my colleagues nominated me for emerita status and when the New College Alumni Association adopted me as one of their own, in recognition of my long teaching career and my vocal advocacy for the College, its academic program, and for the position of gender studies in the liberal arts,” Reid said in a statement to Inside Higher Ed. “New College students have made their mark because they are fiercely independent and courageous learners. I’ll try to live up to their standards. To the Novo community: Honor & Respect.”

    The honorary designation, rarely bestowed, gives Reid the same “rights and privileges” as other New College alumni, including access to alumni events, according to the alumni association’s motion. Reid retired in August after teaching at New College for more than 30 years and now serves as interim director of PEN America’s Freedom to Learn program.

    Reid was also the founder of the now-defunct gender studies program at New College, which the then–newly appointed conservative board eliminated in 2023. The college was mired in controversy again the following summer when officials tossed books from its former Gender and Diversity Center in the trash.

    Despite what alumni association governance committee chair Chris Van Dyk described as “overwhelming recommendation” for emerita status, including from New College provost David Rohrbacher and leaders in the Division of Humanities, Corcoran denied Reid the emerita title because of her outspoken faculty advocacy and criticism of conservative leadership at New College.

    “Although I recognize Professor Reid’s contributions to New College in teaching and scholarship, I cannot concur with the Division and Provost that she be honored with the title of emeritus,” Corcoran wrote in an email to Rohrbacher. “When I became president with a mandate for change from the Board of Trustees, there was need for reasoned and respectful exchange between the faculty and administration. Regrettably, Professor Reid was one of the leading voices of hyperbolic alarmism and needless obstruction. In her letter of resignation, Professor Reid wrote that ‘the New College where I once taught no longer exists.’ She need not be burdened by further association with it.”

    After the former faculty representative to the New College Board of Trustees quit in protest, Reid was elected to fill the role in 2023. She and student representative Grace Keenan were the only two board members to vote against Corcoran’s appointment as permanent president, Florida Politics reported.

    Emeritus status is largely symbolic, but it does usually come with some concrete perks, including the continued use of institutional email accounts, library and athletic facilities access, and sometimes free campus parking.

  • Higher Ed Lobbying Drops in Third Quarter

    Higher Ed Lobbying Drops in Third Quarter

    Beleaguered by the Trump administration’s efforts to reshape higher education to align with conservative policy priorities, major universities continue to spend heavily on lobbying efforts to protect their interests.

    While lobbying expenses over all have boomed during 2025 compared to last year, spending fell in the third quarter, according to an Inside Higher Ed analysis of major research universities.

    Members of the Association of American Universities spent less in the third quarter of 2025 than in either of the first two quarters, racking up more than $8.6 million in lobbying costs, compared to $9 million in the first quarter and more than $10 million in Q2.

    AAU’s member institutions have already spent more than $27.8 million combined on lobbying this year.

    Top Spenders

    Among individual AAU members, Johns Hopkins University spent the most on lobbying in the third quarter, shelling out $390,000. JHU spent $170,000 in the first quarter and $380,000 in Q2, for a total of more than $940,000 so far this year.

    JHU’s lobbying disclosure form shows the private university in Baltimore engaged Congress on multiple issues, including the Trump administration’s One Big Beautiful Bill Act, student loans and psychedelic research.

    “We continue to advocate for our research mission through all appropriate channels,” a Johns Hopkins University spokesperson wrote in an emailed statement to Inside Higher Ed.

    Others that invested heavily in lobbying include Yale University, which spent $370,000 in the third quarter, and its Ivy League counterpart the University of Pennsylvania, which spent $360,000. The University of Washington was the top-spending public institution at $310,000, while Columbia University rounded out the top five with $290,000 in lobbying expenses for Q3.

    “Communicating the impact of Columbia’s researchers, scientists, scholars, and clinicians to policymakers in Washington, New York, and locally is vital, and we utilize a combination of in-house and outside professionals to ensure our message reaches key stakeholders, including our New York delegation,” a Columbia spokesperson wrote in an email to Inside Higher Ed.

    In addition to research funding and the One Big Beautiful Bill Act, common areas of focus noted in lobbying disclosure forms include appropriations, student visas and immigration, among other concerns that college officials have raised in private conversations with lawmakers on Capitol Hill.

    Including their third-quarter expenditures, several of the institutions above are among the top spenders for the year. Northwestern leads AAU members in lobbying expenses at $1.1 million, followed by the University of Washington at $1 million, JHU and Yale at $940,000, and Cornell at $914,000.

    Many universities dialed back lobbying expenses in the third quarter, some by significant amounts. Emory University, for example, spent $500,000 on lobbying in the second quarter but only $185,000 in Q3. Emory has spent $855,000 on lobbying in 2025.

    Though still among the top-spending AAU members, Cornell pulled back on lobbying, which fell to $240,000 in Q3 compared to $444,000 in the second quarter.

    Northwestern has cut spending in each successive quarter. The private university spent $607,000 on federal lobbying in Q1, the most of any university in any quarter this year. But that number fell to $306,000 in the second quarter and $230,000 in the third.

    Outliers

    Some universities outside the AAU also spent heavily on lobbying in the third quarter.

    The University of Phoenix, for example, spent $480,000 on federal lobbying efforts. Phoenix has spent consistently across all three quarters, totaling $1.4 million in lobbying expenditures in 2025. That appears to make the for-profit institution the top individual spender across the sector this year.

    Lobbying disclosure forms show Phoenix engaged on legislation, including the One Big Beautiful Bill Act and a bill related to student veteran benefits, but also on broad public policy issues.

    Phoenix officials declined to comment.

    Northeastern University is another top spender that falls outside of AAU membership. The university has spent $270,000 in each quarter, totaling $810,000 in 2025 lobbying expenditures.


  • The Case Against AI Disclosure Statements (opinion)

    The Case Against AI Disclosure Statements (opinion)

    I used to require that my students submit AI disclosure statements any time they used generative AI on an assignment. I won’t be doing that anymore.

    From the beginning of our current AI-saturated moment, I leaned into ChatGPT, not away, and was an early adopter of AI in my college composition classes. My early adoption of AI hinged on the need for transparency and openness. Students had to disclose to me when and how they were using AI. I still fervently believe in those values, but I no longer believe that required disclosure statements help us achieve them.

    Look. I get it. Moving away from AI disclosure statements is antithetical to many of higher ed’s current best practices for responsible AI usage. But I started questioning the wisdom of the disclosure statement in spring 2024, when I noticed a problem. Students in my composition courses were turning in work that was obviously created with the assistance of AI, but they failed to proffer the required disclosure statements. I was puzzled and frustrated. I thought to myself, “I allow them to use AI; I encourage them to experiment with it; all I ask is that they tell me they’re using AI. So, why the silence?” Chatting with colleagues in my department who have similar AI-permissive attitudes and disclosure requirements, I found they were experiencing similar problems. Even when we were telling our students that AI usage was OK, students still didn’t want to fess up.

    Fess up. Confess. That’s the problem.

    Mandatory disclosure statements feel an awful lot like a confession or admission of guilt right now. And given the culture of suspicion and shame that dominates so much of the AI discourse in higher ed at the moment, I can’t blame students for being reluctant to disclose their usage. Even in a class with a professor who allows and encourages AI use, students can’t escape the broader messaging that AI use should be illicit and clandestine.

    AI disclosure statements have become a weird kind of performative confession: an apology performed for the professor, marking the honest students with a “scarlet AI,” while the less scrupulous students escape undetected (or maybe suspected, but not found guilty).

    As well intentioned as mandatory AI disclosure statements are, they have backfired on us. Instead of promoting transparency and honesty, they further stigmatize the exploration of ethical, responsible and creative AI usage and shift our pedagogy toward more surveillance and suspicion. I suggest that it is more productive to assume some level of AI usage as a matter of course, and, in response, adjust our methods of assessment and evaluation while simultaneously working toward normalizing the usage of AI tools in our own work.

    Studies show that AI disclosure carries risks both in and out of the classroom. One study published in May reports that any kind of disclosure, both voluntary and mandatory, in a wide variety of contexts resulted in decreased trust in the person using AI. This remained true even when study participants had prior knowledge of an individual’s AI usage, meaning, the authors write, “The observed effect can be attributed primarily to the act of disclosure rather than to the mere fact of AI usage.”

    Another recent article points to the gap between the values of honesty and equity when it comes to mandatory AI disclosure: People won’t feel safe disclosing AI usage if there’s an underlying or perceived lack of trust and respect.

    Some who hold unfavorable attitudes toward AI will point to these findings as proof that students should just avoid AI usage altogether. But that doesn’t strike me as realistic. Anti-AI bias will only drive student AI usage further underground and lead to fewer opportunities for honest dialogue. It also discourages the kind of AI literacy employers are starting to expect and require.

    Mandatory AI disclosure for students isn’t conducive to authentic reflection but is instead a kind of virtue signaling that chills the honest conversation we should want to have with our students. Coercion only breeds silence and secrecy.

    Mandatory AI disclosure also does nothing to curb or reduce the worst features of badly written AI papers, including the vague, robotic tone; the excess of filler language; and, their most egregious hallmark, the fabricated sources and quotes.

    Rather than demanding students confess their AI crimes to us through mandatory disclosure statements, I advocate both a shift in perspective and a shift of assignments. We need to move from viewing students’ AI assistance as a special exception warranting reactionary surveillance to accepting and normalizing AI usage as a now commonplace feature of our students’ education.

    That shift does not mean we should allow and accept any and all student AI usage. We shouldn’t resign ourselves to reading AI slop that a student generates in an attempt to avoid learning. When confronted with a badly written AI paper that sounds nothing like the student who submitted it, the focus shouldn’t be on whether the student used AI but on why it’s not good writing and why it fails to satisfy the assignment requirements. It should also go without saying that fake sources and quotes, regardless of whether they are of human or AI origin, should be called out as fabrications that won’t be tolerated.

    We have to build assignments and evaluation criteria that disincentivize the kinds of unskilled AI usage that circumvent learning. We have to teach students basic AI literacy and ethics. We have to build and foster learning environments that value transparency and honesty. But real transparency and honesty require safety and trust before they can flourish.

    We can start to build such a learning environment by working to normalize AI usage with our students. Some ideas that spring to mind include:

    • Telling students when and how you use AI in your own work, including both successes and failures in AI usage.
    • Offering clear explanations to students about how they could use AI productively at different points in your class and why they might not want to use AI at other points. (Danny Liu’s Menus model is an excellent example of this strategy.)
    • Adding an assignment such as an AI usage and reflection journal, which offers students a low-stakes opportunity to experiment with AI and reflect upon the experience.
    • Adding an opportunity for students to present to the class on at least one cool, weird or useful thing that they did with AI (maybe even encouraging them to share their AI failures, as well).

    The point of these examples is that we are inviting students into the messy, exciting and scary moment we all find ourselves in. They shift the focus away from coerced confession and toward a welcoming invitation to join in and share the wisdom, experience and expertise students accumulate as we all adjust to the age of AI.

    Julie McCown is an associate professor of English at Southern Utah University. She is working on a book about how embracing AI disruption leads to more engaging and meaningful learning for students and faculty.


  • Improving Community College Transfer in California

    Improving Community College Transfer in California

    California has established significant goals for postsecondary attainment, with the stated aim of having 70 percent of working-age adults hold a credential of value by 2035. To meet this goal, the state has invested time and resources into the community college system and upward transfer processes, seeking to create affordable and accessible pathways in and through higher education.

    A recently published report by the Public Policy Institute of California Higher Education Center found that a large share of community college students are applying to and enrolling in state universities to complete a bachelor’s degree, but equity gaps persist among certain demographic groups.

    The data highlights the importance of focusing on early benchmarks of academic progress—including credit completion rates, GPA and the stated goal of transfer—to help students succeed in making the transition to a four-year university. The report also underscores that some transfer students are willing to pay more and travel farther to attend a more selective institution.

    The background: California’s public higher education system is the largest and most diverse in the country, the report authors note. The California Community College system includes 116 institutions enrolling over 2.1 million students, and the California State University system consists of 22 institutions educating nearly half a million students. Within the state, the CSU system is the top destination for upward transfer, with 58 percent of community college students going on to enroll at a CSU campus.

    Over the past decade, the two college systems have partnered to streamline transfer opportunities. One innovation is the associate degree for transfer (A.D.T.), a group of 40 academic pathways that guarantee admission to students who complete 60 credits toward a bachelor’s degree in a specific major. Another is the CSU Transfer Planner, which provides insights for students to navigate transferable credits, degree programs and campus requirements for transfer.

    The report looks at student demographic information, academic progress and participation in transfer pathways such as A.D.T. to identify success indicators in the transfer pipeline.

    Methodology

    Researchers analyzed data from the CSU Application and Admission Dashboard and longitudinal student-level data from fall 2018 to fall 2023.

    In the sample, 48 percent of transfer applicants were Latino, 26 percent white, 15 percent Asian and 4.5 percent Black. A majority were 24 years old or younger, and 75 percent received a California Promise Grant or a Pell Grant while in community college.

    The data: The average student spends nine semesters at a community college before applying to a CSU institution, researchers found.

    Students are required to complete 60 credits to transfer with junior-level standing, but the median student completed 71.5 credits. Only half of applicants had earned an A.D.T. before applying, and 22 percent earned a local associate degree, meaning about 30 percent of students applied for transfer without a credential.

    Researchers noted that students who made significant progress in their first year of community college were more likely to transfer. Those who successfully completed transfer-level math in their first year applied to CSU after seven terms on average, whereas students who didn’t applied after 10 community college terms.

    Students who were 25 or older, Black or financial aid recipients were less likely to meet early milestones and therefore less likely to transfer. Conversely, students with high GPAs were more likely to transfer.

    The data also indicated a gap between students eligible for admission at a CSU and those who actually applied. One in five students who completed an A.D.T. never applied to CSU despite having guaranteed admission. Of those, 43 percent enrolled at a different university, many in the University of California system.

    In total, 87 percent of A.D.T. recipients declared a transfer goal while at community college, but approximately 20 percent of them didn’t continue on to a bachelor’s degree program.

    A majority (92 percent) of all transfers were eventually admitted to at least one CSU, and 63 percent of all transfers enrolled. Three in 10 applied more than once, and almost half of them (47 percent) had their application denied the first time.

    “It is possible that these students were initially rejected from the campus of their choice (or to all campuses), took more community college classes, and then gained admission,” researchers wrote. On the flip side, a large share of those whose transfer applications were rejected applied only once (88 percent), and to only one campus (61 percent).

    Admissions data also revealed the importance of academic benchmarks early in the student’s community college career. Admission rates for students who took transfer-level math or English in their first year were higher compared to their peers who did not; similarly, students who earned 24 transferable credits were more likely to gain admission to a CSU. Unsurprisingly, students who stated a transfer goal, completed the A.D.T. or had a GPA of 3.25 or higher also had high admittance rates.

    One trend researchers noted is that students who were admitted to a CSU but chose to enroll at a different institution were more likely to select a college that was farther away or more expensive, indicating that cost and proximity are not deciding factors. Transfers also enrolled at more selective colleges compared to their peers who opted to enroll at CSU, though some students selected universities with lower graduation rates than CSU.

    Over all, transfer students had high graduation rates. Among the incoming fall 2020 cohort, 76 percent graduated with their bachelor’s degree in four years, and 69 percent completed it in three years. About 19 percent of students left the CSU system without graduating three years after enrolling, and these students were more likely to be Black, Latino, male or older or have financial need.

    Recommendations: Based on their findings, researchers identified three opportunities for improvement:

    1. Invest in the student’s first year. Interventions including dual enrollment, corequisite English and math courses, proactive advising, and flexible scheduling can promote early momentum and academic success for community college students.
    2. Collect additional data on enrollment decisions. While system data showed that some students opt out of a four-year degree program, researchers emphasized the need for student voices to understand why those admitted would not enroll at CSU. Researchers also noted a need for campus-specific data, “because there is high variation across individual CSUs in both acceptance and enrollment rates.”
    3. Create space at selective campuses and in high-demand majors. “Some of the students who were never admitted to CSU were competitive applicants, but they applied to the most in-demand campuses,” the authors wrote. To increase capacity for these students, researchers suggest flexible course scheduling options, co-locating campuses or expanding online degree programs.


  • Director of Online Program Development at UVA

    Director of Online Program Development at UVA

    The origins of “Featured Gigs” trace back to the first post in the series with Kemi Jona, vice provost for online education and digital innovation at UVA. While I had the idea for the series, it was Kemi who ultimately came up with most of the language for the four questions we use to explore opportunities at the intersection of learning, technology and organizational change. Today, Kemi answers questions about the role of director of online program development.

    Q: What is the university’s mandate behind this role? How does it help align with and advance the university’s strategic priorities?

    A: The 2030 Plan calls on the university to expand the reach of its educational programs—both in person and online—and to make UVA more accessible, including to learners across and beyond the Commonwealth. The University of Virginia’s Office of the Vice Provost for Online Education and Digital Innovation is a key part of advancing this charge on behalf of the university, helping our schools and institutes design, deliver and scale high-quality online and hybrid programs that extend UVA’s reach and impact.

    The director of online program development plays a central role in advancing UVA’s online education goals. The role is ideal for someone who thrives at the intersection of strategy, innovation and execution. The director will not only guide program development but also help UVA build the internal capacity and frameworks needed to sustain this growth long-term. This is a high-impact, high-visibility position that will help shape the next chapter of online and hybrid learning at UVA and potentially serve as a model for the sector.

    Q: Where does the role sit within the university structure? How will the person in this role engage with other units and leaders across campus?

    A: This role sits within the provost’s office and reports directly to the vice provost for online education and digital innovation. The director will guide UVA schools and institutes through the planning, launch and evaluation of new online and hybrid programs, serving as a trusted partner to deans, associate deans, program directors and faculty.

    This individual will bring structure and strategy to UVA’s online growth, helping schools scope opportunities, assess market demand, support business case development and build the readiness needed for sustained success. The role requires exceptional communication, diplomacy and systems-level thinking to align multiple stakeholders around a shared vision.

    Q: What would success look like in one year? Three years? Beyond?

    A: In service of the vision articulated in the 2030 Plan and aligned to the strategic goals of our partner schools and institutes, UVA is undertaking ambitious growth in its online and hybrid portfolio. In the first year, success means ensuring active projects move from planning to launch with clarity and momentum, establishing shared frameworks, timelines and accountability across partners.

    Within three years, success will be measured not only in the number of successful program launches but also in the maturity of UVA’s internal systems, talent and decision-making processes that enable continued agility and innovation.

    Longer term, the director will help institutionalize a robust, repeatable, data-informed model for program development so UVA’s schools can innovate faster and with greater confidence, while ensuring that all programs uphold UVA’s reputation for academic excellence.

    Q: What kinds of future roles would someone who took this position be prepared for?

    A: Because this individual will be deeply engaged in all aspects of online program design, development and launch, he or she will gain substantial experience working with deans, faculty and other senior leaders. This experience would help set up future leadership roles in online education and digital innovation or in the private sector.

    This role offers a rare opportunity to operate at the heart of institutional transformation—building systems and partnerships that inform how UVA advances its mission as we begin our third century as a leading public institution. The experience will prepare the director for senior university leadership roles in strategy, academic innovation or digital transformation. It will equip them with the cross-sector perspective and executive acumen valued by both higher education and mission-driven organizations beyond academia.

    Please get in touch if you are conducting a job search at the intersection of learning, technology, and organizational change. If your gig is a good fit, featuring your gig on Featured Gigs is free.


  • How Colleges Use Anti-Elitist and Elite-Adjacent Campaigns

    How Colleges Use Anti-Elitist and Elite-Adjacent Campaigns


    Two university campaigns hit the national spotlight in recent weeks. Each tells a very different story about how colleges market themselves.

    Colorado Mesa University’s new Featherstone University spoof takes aim at elite school stereotypes, ending with the line “We care about who you are, not who you know.”

    Days later, The Wall Street Journal profiled High Point University in a turnaround story built on private wealth and exclusivity. Its campus features etiquette lessons, manicured gardens and an airplane cabin for networking drills. HPU prepares students for a world where who you know still matters.

    In an industry criticized for sameness, both CMU and HPU stand out as strategic outliers.

    Trust, Value and the Split in Demand

    Public trust in higher education is fragile. Concerns over cost, access and free speech have left families asking if it is worth it. Against this backdrop, two playbooks are emerging: anti-elitist authenticity and elite-adjacent experience.

    Playbook A: CMU’s Skepticism as Fuel

    Colorado Mesa University’s “Welcome to Featherstone” flips elite-school marketing on its head. The parody ends with a challenge: “We don’t care about who you know. We care about you.”

    For a public university serving rural, first-generation, working-class students, the message fits. CMU has built its brand on affordability, access and trust by cutting tuition, growing CMU Tech and guaranteeing free tuition for Colorado families earning $70,000 or less.

    This isn’t simply mocking the elite; it’s segmentation. CMU speaks to families who see higher education as a bridge, not a birthright. In a sea of interchangeable ads, it uses satire to say, “We hear your skepticism—and we’re still here for you.”

    A Take From Rural America

    CMU’s approach hit a nerve, but it also hit a truth.

    I was born in East Detroit, then raised in Richmond, Mich., a farming town of 4,000. When my parents learned our local high school wasn’t accredited, they sent my brothers and me to school an hour away. At that time, only 32 percent of the local high school graduates pursued college. I still remember junior high classmates missing school to plant and harvest corn and soybeans.

    For rural communities like these, college can feel distant—financially and culturally. CMU’s campaign speaks to them with rare honesty.

    Playbook B: High Point’s Experience as Advantage

    If CMU sells authenticity, High Point sells aspiration. Its campus hums with classical music and fountains, lined with rocking chairs and gardens designed for conversation. Students dine in on-campus restaurants that double as lessons in professional etiquette, and housing options range from traditional dorms to $40,000 tiny homes.

    President Nido Qubein calls it preparation, not pampering: “Half of Wall Street sends their kids here.” The model caters to families who can pay full price and want an environment that mirrors the careers their children expect to enter.

    It’s not subtle, but it shows the university understands its target audience. In an uncertain marketing environment, HPU is selling a vision of success that feels polished, predictable and safe.

    What the Models Reveal

    CMU and HPU reveal opposite, equally intentional strategies. CMU doubled down on affordability with its 2024 CMU Promise Tour, which reached 22 rural and urban communities, boosting first-year enrollment by 25 percent. HPU, meanwhile, courts families buying access and advantage through concierge-level amenities.

    CMU uses satire to mock exclusivity; HPU leans into luxury to promise it. Both know exactly whom they’re speaking to.

    Leadership Takeaways

    In a landscape of sameness and skepticism, higher ed leaders should ask, “What do we stand for—and how do we prove it?”

    Is it belonging and mobility like CMU, or exclusivity and polish like HPU? Either can work if it’s backed by programs, outcomes and transparency. Whatever your promise, ensure the experience delivers it.

    Both institutions have likely alienated some audiences, but they’ve connected deeply with their own. That’s the point of strategic marketing. Their playbooks, while different, seem to be working for Colorado Mesa and High Point, which both had record enrollments in fall 2025 amid national headlines warning of a demographic cliff.

    Beyond the Marketing

    Beyond the spotlight, both universities must prove results. Time and measurement will tell if they are delivering on access and affordability, or on postgraduate success and networks.

    Authenticity carries risk, as organizational psychologist Adam Grant recently noted in a New York Times op-ed, but when outcomes match promises, both models can be legitimate. Hide results or exaggerate benefits and either fails the test of ethics and equity.

    In a nation this diverse, there is no single market for higher ed—there are many markets. And in a landscape this stratified, the unforgivable sin isn’t satire or spectacle; it’s sameness without substance.

    Maria Kuntz is director of content marketing strategy and communications at the University of Colorado–Boulder. She leads content strategy for advancement, oversees the award-winning Coloradan alumni magazine and writes about storytelling, leadership and trust in higher education.


  • AAUP President Exacerbated “Organizational Antisemitism”

    AAUP President Exacerbated “Organizational Antisemitism”


    In a letter to American Federation of Teachers president Randi Weingarten, Sen. Bill Cassidy, the Louisiana Republican who chairs the education committee, accused American Association of University Professors president and AFT vice president Todd Wolfson of promoting “organizational antisemitism” within the AAUP. 

    Cassidy cited an August Inside Higher Ed interview with Wolfson in which the union leader stood against sending weapons to Israel, accused the Trump administration of weaponizing antisemitism for political gains and advocated for the Jerusalem Declaration on Antisemitism, a definition of antisemitism that does not include anti-Zionism.

    Cassidy also referenced a statement from Wolfson calling Vice President JD Vance a fascist as well as a March letter to the AAUP from the Anti-Defamation League and Academic Engagement Network that said “the AAUP [is] being perceived as increasingly moving in a virulently anti-Israel direction, and as a result, growing insensitive and even hostile to the concerns of its Jewish and Zionist members.”

    “In the six months since he received this warning from one of the nation’s leading organizations dedicated to fighting antisemitism [ADL], Dr. Wolfson has not only failed to address these concerns but has exacerbated them,” Cassidy wrote. “Jewish faculty members deserve to carry out their work free from discrimination. As an association with a national presence, it is concerning that AFT has not only failed to help solve this problem but has made it worse by allowing Dr. Wolfson to continue to serve in a leadership role.”

    The AAUP is an affiliate of the AFT, one of the largest unions nationwide for K–12 and higher education professionals. The two became formally affiliated in 2022 and share some leadership, including Wolfson.

    Wolfson replied to Cassidy’s letter in a statement to Inside Higher Ed Monday.

    “It appears Senator Cassidy and his GOP colleagues are furious that seven universities have rejected Trump’s absurd Higher Ed Loyalty Oath. Rather than reckon with their failed attempt to strong-arm higher education, they’ve chosen to complain to our national affiliate, AFT, because AAUP dared to hold a webinar,” Wolfson wrote, referring to an AAUP webinar called “Scholasticide in Palestine” that Cassidy referenced in the letter. “I would respectfully suggest they spend less time trying to undermine my constitutional rights and more time focusing on what Americans actually care about—like reopening the government, lowering healthcare costs, and addressing the cost-of-living crisis.”

    Cassidy wants Weingarten to tell him by Nov. 6 how AFT is addressing the concerns raised by the ADL and to share more details about how she’s working with the AAUP to ensure Jewish members aren’t experiencing antisemitism. He also asked Weingarten whether AFT publicly condemns Wolfson’s remarks.
