Tag: News

  • Higher Ed Institutions Raise Concerns About H-1B Visa Fee

    A number of higher education institutions and the associations that represent them are asking to be exempted from the new $100,000 H-1B visa application fee, saying the prohibitive cost could be detrimental to the recruitment and retention of international faculty, researchers and staff members.

    In a letter to the Department of Homeland Security last week, the American Council on Education argued that such individuals “contribute to groundbreaking research, provide medical services to underserved and vulnerable populations … and enable language study, all of which are vital to U.S. national interests.” Without them, ACE and 31 co-signers said, key jobs in high-demand sectors such as health care, information technology, education and finance will likely go unfilled. 

    The letter came just days after U.S. Citizenship and Immigration Services launched a new online payment website and provided an updated statement on policies surrounding the fee. USCIS clarified that the fee will apply to any new H-1B petitions filed on or after Sept. 21, and it must be paid before the petition is filed.

    The update also referenced possible “exception[s] from the fee” but said those exceptions would only be granted in an “extraordinarily rare circumstance where the Secretary has determined that a particular alien worker’s presence in the United States as an H-1B worker is in the national interest.”

    ACE said that H-1B visa recipients in higher education certainly meet those standards, citing data from the College and University Professional Association for Human Resources that shows that over 70 percent of international employees at colleges and universities hold tenure-track or tenured positions. The top five disciplines they work in are business, engineering, health professions, computer science and physical sciences.

    “H-1B visa holders working for institutions of higher education are doing work that is crucial to the U.S. economy and national security,” the letter reads.

    Despite the clarification provided by USCIS, ACE still had several remaining questions about the fee. These included whether the $100,000 would be refunded if a petition was denied and whether individuals seeking a “change of status” from an H-1B to an F-1 or J-1 would still be required to pay the fee.

    At least two lawsuits have been filed against DHS concerning these visa fees. Neither has received a ruling so far.

  • Teaching Alongside Generative AI for Student Success

    A growing share of colleges and universities are embedding artificial intelligence tools and AI literacy into the curriculum with the intent of aiding student success. A 2025 Inside Higher Ed survey of college provosts found that nearly 30 percent of respondents have reviewed curriculum to ensure that it will prepare students for AI in the workplace, and an additional 63 percent say they have plans to review curriculum for this purpose.

    Touro University in New York is one institution that’s incentivizing faculty to engage with AI tools, including embedding simulations into academic programs.

    In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with Shlomo Argamon, associate provost for artificial intelligence at Touro, to discuss the university policy for AI in the classroom, the need for faculty and staff development around AI, and the risks of gamification of education.

    An edited version of the podcast appears below.

    Q: How are you all at Touro thinking about AI? Where is AI integrated into your campus?

    Shlomo Argamon, associate provost for artificial intelligence at Touro University

    A: When we talk about the campus of Touro, we actually have 18 or 19 different campuses around the country and a couple even internationally. So we’re a very large and very diverse organization, which does affect how we think about AI and how we think about issues of the governance and development of our programs.

    That said, we think about AI primarily as a new kind of interactive technology, which is best seen as assistive to human endeavors. We want to teach our students both how to use AI effectively in what they do, how to understand and properly mitigate and deal with the risks of using AI improperly, but above all, to always think about AI in a human context.

    When we think about integrating AI for projects, initiatives, organizations, what have you, we need to first think about the human processes that are going to be supported by AI and then how AI can best support those processes while mitigating the inevitable risks. That’s really our guiding philosophy, and that’s true in all the ways we’re teaching students about AI, whether we’re teaching students specifically, deeply technical [subjects], preparing them for AI-centric careers or preparing them to use AI in whatever other careers they may pursue.

    Q: When it comes to teaching about AI, what is the commitment you all make to students? Is it something you see as a competency that all students need to gain or something that is decided by the faculty?

    A: We are implementing a combination of top-down and bottom-up approaches.

    One thing that is very clear is that every discipline, and in fact, every course and faculty member, will have different needs and different constraints, as well as competencies around AI that are relevant to that particular field, to that particular topic. We also believe that nobody knows the right way to teach about AI, to implement AI or to develop AI competencies in students.

    We need to encourage and incentivize all our faculty to be as creative as possible in thinking about the right ways to teach their students about AI, how to use it, how not to use it, etc.

    So No. 1 is, we’re encouraging all of our faculty at all levels to be thinking and developing their own ideas about how to do this. That said, we also believe very firmly that all students, all of our graduates, need to have certain fundamental competencies in the area of AI. And the way that we’re doing this is by integrating AI throughout our general education curriculum for undergraduates.

    Ultimately, we believe that most, if not all, of our general education courses will include some sort of module about AI, teaching students specifically about the AI-relevant competencies for the particular topics that they’re learning, whether it’s writing, reading skills, presentations, math, science or history: the different kinds of cognition and skills that you learn in different fields. The aim is to identify the AI competencies relevant to each of those areas and have students learn them there.

    So No. 1, they’re learning it not all at once. And also, very importantly, it’s not isolated from the topics, from the disciplines that they’re learning, but it’s integrated within them so that they see it as … part of writing is knowing how to use AI in writing and also knowing how not to. Part of learning history is knowing how to use AI for historical research and reasoning and knowing how not to use it, etc. So we’re integrating that within our general education curriculum.

    Beyond that, we also have specific courses in various AI skills, both at the undergraduate [and] at the graduate level, many of which are designed for nontechnical students to help them learn the skills that they need.

    Q: Because Touro is such a large university and it’s got graduate programs, online programs, undergraduate programs, I was really surprised that there is an institutional AI policy.

    A lot of colleges and universities have really grappled with, how do we institutionalize our approach to AI? And some leaders have kind of opted out of the conversation and said, “We’re going to leave it to the faculty.” I wonder if we could talk about the AI policy development and what role you played in that process, and how that’s the overarching, guiding vision when it comes to thinking about students using and engaging with AI?

    A: That’s a question we have struggled with, as all academic leaders do, as you mentioned.

    Our approach is to create policy at the institutional level that provides only the necessary guardrails and guidance, which then enables each of our schools, departments and individual faculty members to implement the right solutions for their particular areas. Within this guidance and these guardrails, the work is done safely, and we know that it’s going, over all, in a positive and institutionally consistent direction.

    In addition, one of the main functions of my office is to provide support to the schools, departments and especially the faculty members to make this transition and to develop what they need.

    It’s an enormous burden on faculty members: not just to add AI content to their classes, if they do so, but to shift the way that we teach, the way that we do assessments, even the way that we relate to our students.

    It’s a process to develop resources, to develop ways of doing this. I and the people who work in our office have regular office hours to talk to faculty and work with them. One of the most important things that we do, and we spend a lot of time and effort on this, is training for our faculty and staff on AI: on using AI, on teaching about AI, on the risks of AI and mitigating those risks, on how to think about AI. It all comes down to our faculty and staff: they are the university, they’re the ones who are going to make all of this a success, and it’s up to us to give them the tools that they need to make it a success.

    I would say that while many questions have no right or wrong answers, only different perspectives and different opinions, I think there is one right answer to “What does a university need to do institutionally to ensure success at dealing with the challenge of AI?” It’s to support and train the faculty and staff, who are the ones who are going to make whatever the university does a success or a failure.

    Q: Speaking of faculty, there was a university faculty innovation grant program that sponsored faculty to take on projects using AI in the classroom. Can you talk a little bit about that and how that’s been working on campus?

    A: We have an external donor who donated funds so that we were able to award nearly 100 faculty innovation challenge grants for developing methods of integrating AI into teaching.

    Faculty members applied and did development work over the summer, and they’re now implementing their projects in their fall courses. We’re currently going through the initial set of faculty reports on their projects, and we have projects from all over the university, in all different disciplines, taking many different approaches to using AI.

    At the beginning of next spring, we’re going to have a conference workshop to bring everybody together so we can share all of the different ways that people try to do this. Some experiments, I’m sure, will not have worked, but that’s also incredibly important information, because what we’re seeking to do [is], we’re seeking to help our students, but we’re also seeking to learn what works, what doesn’t work and how to move forward.

    Again, this goes back to our philosophy that we want to unleash the expertise, intelligence and creativity of our faculty—not top down to say, “We have an AI initiative. This is what you need to be doing”—but, instead, “Here’s something new. We’ll give you the tools, we’ll give you the support. We’ll give you the funding to make something happen, make interesting things happen, make good things for your students happen, and then let’s talk about it and see how it worked, and keep learning and keep growing.”

    Q: I was looking at the list of faculty innovation grants, and I saw that there were a few other simulations. There was one for educators helping with classroom simulations. There was one with patient interactions for medical training. It seems like there’s a lot of different AI simulations happening in different courses. I wonder if we can talk about the use of AI for experiential learning and why that’s such a benefit to students.

    A: Ever since there’s been education, there’s been this kind of distinction between book learning and real-world learning, experiential learning and so forth. There have always been those who have questioned the value of a college education because you’re just learning what’s in the books and you don’t really know how things really work, and that criticism has some validity.

    But what we’re trying to do, and what AI allows us and our students to do, is to have more, and more varied, experiences of the kinds of things they’re trying to learn, to practice what they’re doing, and then to get feedback on a much broader level than we could before. Certainly, whenever you had a course in, say, public speaking, students would get up, do some public speaking, get feedback and proceed. Now with AI, students can practice in their dorm rooms over and over again and get direct feedback; that feedback and those experiences can then be made available to the faculty member, who can give the students more direct, more human, more concentrated or more expert feedback on their performance, and it just scales.

    In the medical field, this is where it’s hugely, hugely important. There’s a long-standing institution in medical education called the standardized patient. Traditionally it’s a human actor who learns to act as a patient; they’re given a profile of what disorders they’re supposed to have and how they’re supposed to act, and then students can practice, whether it’s diagnostic skills or questions of patient care and bedside manner, and get expert feedback.

    We now have, to a large extent, AI systems that can do this, whether it’s an interactive text-based simulation or a voice-based simulation. We also have AI-powered robotic mannequins that students can work with, with the AI carrying the conversation; students can then do physical exams on mannequins that simulate different kinds of conditions. Again, this gives the possibility of really scaling up this kind of experiential learning. Another kind of AI that has been found useful in a number of our programs, particularly in our business program, is a system that watches people give presentations and gives real-time feedback, and that works quite well.

    Q: These are interesting initiatives, because it cuts out the middleman of needing a third party or maybe a peer to help the student practice the experience. But in some ways, does it gamify it too much? Is it too much like video games for students? How have you found that these are realistic enough to prepare students?

    A: That is indeed a risk, and one that we need to watch. As in nearly everything that we’re doing, there are risks that need to be managed and cannot be solved. We need to be constantly alert and watching for these risks and ensuring that we don’t overstep one boundary or another.

    When you talk about the gamification, or the video game nature of this, the artificial nature of it, there are really two pieces to it. One piece is the fact that there is no mannequin that exists, at least today, that can really simulate what it’s like to examine a human being and how the human being might react.

    AI chatbots, as good as they are, will not be able to simulate human interactions quite accurately, now or in the foreseeable future, at least. So there’s always going to be a gap. As with other kinds of education, where the book you read is not going to be perfect and your understanding of the book is not going to be perfect, there has to be an iterative process of learning. We have to have more realistic simulations, different kinds of simulations, so the students can, in a sense, mentally triangulate their different experiences to learn to do things better. That’s one piece of it.

    The other piece, when you say gamification, there’s the risk that it turns into “I’m trying to do something to stimulate getting the reward or the response here or there.” And there’s a small but, I think, growing research literature on gamification of education, where if you gamify a little bit too much, it becomes more like a slot machine, and you’re learning to maneuver the machine to give you the dopamine hits or whatever, rather than really learning the content of what you’re doing. The only solution to that is for us to always be aware of what we’re doing and how it’s affecting our students and to adjust what we’re doing to avoid this risk.

    This goes back to one of the key points: Our whole philosophy of this is to always look at the technology and the tools, whether AI or anything else, as embedded within a larger human context. The key here is understanding when we implement some educational experience for students, whether it involves AI or technology or not, it’s always creating incentives for the students to behave in a certain way. What are those incentives, and are those incentives aligned with the educational objectives that we have for the students? That’s the question that we always need to be asking ourselves and also observing, because with AI, we don’t entirely know what those incentives are until we see what happens. So we’re constantly learning and trying to figure this out as we go.

    If I could just comment on that peer-to-peer simulation: Medical students poking each other, or social work students interviewing each other for a social work exam, has another important learning component, because the student being examined learns what it’s like to be in the other person’s shoes: what it’s like to be the patient, to be the object of investigation by the professional. Empathy is incredibly important, and understanding what it’s like for them helps students, if done properly, to do it better and to have the appropriate sort of relationship with their patients.

    Q: You also mentioned these simulations give the faculty insight into how the student is performing. I wonder if we can talk about that; how is that real-time feedback helpful, not only for the student but for the professor?

    A: Now, one thing that needs to be said is that it’s very difficult, often, to understand where all of your students are in the learning process, what specifically they need. We can be deluged by data, if we so choose, that may confuse more than enlighten.

    That said, the data that come out of these systems can definitely be quite useful. One example is there are some writing assistance programs, Grammarly and their ilk, that can provide the exact provenance of writing assignments to the faculty, so it can show the faculty exactly how something was composed. Which parts did they write first? Which parts did they write second? Maybe they outlined it, then they revised this and they changed this, and then they cut and pasted it from somewhere else and then edited.

    All of those kinds of things give the faculty member much more detailed information about the student’s process, which can enable the faculty to give the students much more precise and useful feedback on their own learning. What do they perhaps need to be doing differently? What are they doing well? And so forth. Because then you’re not just looking at a final paper, or even at a couple of drafts, and trying to infer what the student was doing so that you can give them feedback; you can actually see that more or less in real time.

    That’s the sort of thing where the data can be very useful. And again, I apologize if I sound like a broken record, but it all goes back to the human aspect of this: using data that helps the faculty member see the individual student, with their own individual ways of thinking, behaving and incorporating knowledge, and to relate to them more as an individual.

    Briefly and parenthetically, one of the great hopes that we have for integrating AI into the educational process is that AI can help take away many of the bureaucratic and other burdens that faculty carry, and free and enable them in different ways to enhance their human relationship with their students, so that we can get back to the core of education, which really, I believe, is the transfer of knowledge and understanding through a human relationship between teacher and student.

    It’s not what might be termed the “jug metaphor” for education, where I, the faculty member, have a jug full of knowledge, and I’m going to pour it into your brain, but rather, I’m going to develop a relationship with you, and through this relationship, you are going to be transformed, in some sense.

    Q: This could be a whole other podcast topic, but I want to touch on this briefly. There is a risk sometimes when students are using AI-powered tools and faculty are using AI-powered tools that it is the AI engaging with itself and not necessarily the faculty with the students. When you talk about allowing AI to lift administrative burdens or ensure that faculty can connect with students, how can we make sure that it’s not robot to robot but really person to person?

    A: That’s a huge and a very important topic, and one which I wish that I had a straightforward and direct and simple answer for. This is one of those risks that has to be mitigated and managed actively and continually.

    One of the things that we emphasize in all our trainings for faculty and staff, and in all our educational modules for students about AI, is the importance of the AI assisting you, rather than you assisting the AI. If the AI produces some content for you, it has to be within a process in which you’re not just reviewing it for correctness but are producing the content yourself, with the AI helping you to do so in some sense.

    That’s a little bit vague, because it plays out differently in different situations. For faculty members producing a syllabus or using AI to produce other content for their courses, it means making sure it’s content that they are producing with AI’s help. Same thing for students using AI.

    For example, our institutional AI policy on academic honesty and integrity is, I believe, groundbreaking. Our default policy for courses that don’t have a specific policy regarding the use of AI in that course—by next spring, all courses must have a specific policy—is that students are allowed to use AI for a very wide variety of tasks on their assignments.

    You can’t use AI to simply do your assignment for you. That is forbidden. The key is that the work has to be the work of the student, but AI can be used to assist. This is the default policy; faculty, department chairs and deans have wide latitude to define more or less restrictive policies with specific carve-outs, simply because every field is different and the needs are different. But the default and the basic attitude is, AI is a tool. You need to learn to use it well and responsibly, whatever you do.

    Q: I wanted to talk about the future of AI at the university. Are there any new initiatives you should tell our listeners about? How are you all thinking about continuing to develop AI as a teaching and learning tool?

    A: It’s hard for me to talk about specific initiatives, because we believe that AI, in higher education particularly but in general as well, is fundamentally a start-up economy, in the sense that nobody, and I mean nobody, knows what to do with it, how to deal with it, how it works and how it doesn’t work.

    Therefore, our attitude is that we want to run as many experiments as we can and try as many different things as we can: different ways of teaching students, different ways of using AI to teach, whether it’s through simulations, content creation or some sort of AI teaching assistant working with faculty members, or faculty members coming up with very creative assignments that enable students to learn the subject matter more deeply by having AI assist them with very difficult tasks, or tasks that require great creativity.

    The sky is the limit, and we want all of our faculty to experiment and develop. We’re seeking to create that within the institution, and Touro is a wonderful place for it, because we already have an entrepreneurial culture within the university. So the university as a whole is an entrepreneurial ecosystem for experimenting and developing ways of teaching about, with and through AI.

  • Writing Classes Are About Writing, Not AI-Aided Production

    I had more important things to do.

    The assignment was dumb and seemed pointless.

    I don’t care about this class.

    I had too much stuff to do and it was just easier to check something off the list.

    I had to work.

    I didn’t understand the assignment.

    Everyone else is using it and they’re doing fine.

    I was pretty sure [the LLM] would do a better job than me.

  • Northwestern, Cornell Still Working to Unfreeze Federal Funds

    Thanks to a series of settlements and court orders, some universities that had their grants frozen by the Trump administration earlier this year have seen that funding restored.

    But others are still trying to unfreeze the grants and learn more about why they were suspended in the first place.

    Since March, the Trump administration has said that it put nearly $6 billion on hold at nine universities. Three universities—Columbia, Penn and Brown—cut deals with the administration to restore the funding, while the University of California, Los Angeles, and Harvard got the money back via court orders. The fate of the remaining four freezes—at Duke, Cornell, Northwestern and Princeton Universities—remains uncertain.

    Princeton has seen about half of its frozen grants restored, President Christopher Eisgruber told the alumni magazine in late August. Roughly $200 million was put on hold initially.

    Eisgruber said Princeton never learned why the funds were frozen, beyond media reports that connected it to concerns over antisemitism on campus. A Princeton spokesperson confirmed the magazine’s report but declined to share more details about the status of the remaining grants.

    At Northwestern, the Trump administration reportedly froze about $790 million in early April, though officials said at the time they never received formal notification about why the funds were put on hold. Since then, Northwestern officials have said they are working to restore the grants—a process that apparently hasn’t gone smoothly.

    Northwestern University interim president Henry Bienen told The Daily Northwestern in an Oct. 17 interview that “a negotiation really requires two parties, at least, and at the present time, there’s not been anybody on the other end of the line.”

    As the freeze persists, Northwestern has said it will continue to support researchers’ “essential funding needs” at least through the end of the calendar year. Bienen told the student newspaper that supporting the research costs $30 million to $40 million a month.

    The university has laid off more than 400 employees and instituted other measures to cut costs, though officials said those moves were driven by more than just the funding freeze.

    Cornell University is also in talks with the administration to find a solution to the freeze. However, President Michael Kotlikoff recently shared new information about the impact of the freeze that calls into question the Trump administration’s figures.

    Trump officials told media outlets in April that they froze more than $1 billion at Cornell. But Kotlikoff said last week in his State of the University address that Cornell is actually facing about $250 million in canceled or unpaid research funds. (The university’s research expenditures totaled $1.6 billion in the 2023–24 academic year.)

    Like Northwestern and Princeton, Cornell hasn’t received a formal letter about the freeze, though media reports suggested that the administration froze the grants “because of concerns around antisemitism following pro-Palestinian activities on campus beginning in fall of 2023,” Kotlikoff said.

    Following news stories about the freeze, Kotlikoff said the university “started receiving stop-work orders ‘by direction of the White House’: halting research on everything from better tests for tick-borne diseases, to pediatric heart assist pumps, to ultrafast lasers for national defense, to AI optimization for blood transfusion delivery. At the same time, many other research grants, while not officially canceled, stopped being paid.” (About $74 million of the $250 million is in unpaid bills, he said.)

    Kotlikoff added that Cornell has been talking with the federal government for six months “to identify their concerns, provide evidence to address them, and return to a productive partnership.” In August, Bloomberg reported that the White House wanted to reach a $100 million settlement with Cornell.

    But Kotlikoff also criticized the administration for not using established legal processes to investigate potential civil rights violations, echoing a point experts have made for months.

    “I want to be clear that there are established procedures in place for the government to handle such concerns,” he said in his State of the University address. “Accusations of discrimination should be supported by, and adjudicated on the basis of, facts. This has not happened.”

    Kotlikoff, who was appointed president in March, made clear in his address to the Board of Trustees and university alumni that Cornell won’t agree to give up control of admissions or curricular decisions, among other things.

    “We will not agree to allow the government to dictate our institution’s policies, or how to enforce them,” he said. “And we will never abandon our commitment to be an institution where any person can find instruction in any study.”

    The administration has also said it froze about $108 million at Duke University, but neither Duke nor the National Institutes of Health responded to Inside Higher Ed’s request for an update.

  • Denied Emerita, Reid Named “Honorary Alum” at New College

    Amy Reid, a former professor of French at New College of Florida, was granted “honorary alumni” status by the New College Alumni Association Board of Directors in a unanimous vote nearly three weeks after she was denied emerita status by college president Richard Corcoran.

    “I was honored when my colleagues nominated me for emerita status and when the New College Alumni Association adopted me as one of their own, in recognition of my long teaching career and my vocal advocacy for the College, its academic program, and for the position of gender studies in the liberal arts,” Reid said in a statement to Inside Higher Ed. “New College students have made their mark because they are fiercely independent and courageous learners. I’ll try to live up to their standards. To the Novo community: Honor & Respect.”

    The honorary designation, rarely bestowed, gives Reid the same “rights and privileges” as other New College alumni, including access to alumni events, according to the alumni association’s motion. Reid retired in August after teaching at New College for more than 30 years and now serves as interim director of PEN America’s Freedom to Learn program.

    Reid was also the founder of the now-defunct gender studies program at New College, which the then–newly appointed conservative board eliminated in 2023. The college was mired in controversy again the following summer when officials tossed books from its former Gender and Diversity Center in the trash.

    Despite what alumni association governance committee chair Chris Van Dyk described as “overwhelming recommendation” for emerita status, including from New College provost David Rohrbacher and leaders in the Division of Humanities, Corcoran denied Reid the emerita title because of her outspoken faculty advocacy and criticism of conservative leadership at New College.

    “Although I recognize Professor Reid’s contributions to New College in teaching and scholarship, I cannot concur with the Division and Provost that she be honored with the title of emeritus,” Corcoran wrote in an email to Rohrbacher. “When I became president with a mandate for change from the Board of Trustees, there was need for reasoned and respectful exchange between the faculty and administration. Regrettably, Professor Reid was one of the leading voices of hyperbolic alarmism and needless obstruction. In her letter of resignation, Professor Reid wrote that ‘the New College where I once taught no longer exists.’ She need not be burdened by further association with it.”

    After the former faculty representative to the New College Board of Trustees quit in protest, Reid was elected to fill the role in 2023. She and student representative Grace Keenan were the only two board members to vote against Corcoran’s appointment as permanent president, Florida Politics reported.

    Emeritus status is largely symbolic, but it does usually come with some concrete perks, including the continued use of institutional email accounts, library and athletic facilities access, and sometimes free campus parking.

  • Higher Ed Lobbying Drops in Third Quarter

    Beleaguered by the Trump administration’s efforts to reshape higher education to align with conservative policy priorities, major universities continue to spend heavily on lobbying efforts to protect their interests.

    While lobbying expenses over all have boomed during 2025 compared to last year, spending fell in the third quarter, according to an Inside Higher Ed analysis of major research universities.

    Members of the Association of American Universities spent less in the third quarter of 2025 than in either of the first two quarters, racking up more than $8.6 million in lobbying costs, compared to $9 million in the first quarter and more than $10 million in Q2.

    AAU’s member institutions have already spent more than $27.8 million combined on lobbying this year.

    Top Spenders

    Among individual AAU members, Johns Hopkins University spent the most on lobbying in the third quarter, shelling out $390,000. JHU spent $170,000 in the first quarter and $380,000 in Q2, for a total of more than $940,000 so far this year.

    JHU’s lobbying disclosure form shows the private university in Baltimore engaged Congress on multiple issues, including the Trump administration’s One Big Beautiful Bill Act, student loans and psychedelic research.

    “We continue to advocate for our research mission through all appropriate channels,” a Johns Hopkins University spokesperson wrote in an emailed statement to Inside Higher Ed.

    Others that invested heavily in lobbying include Yale University, which spent $370,000 in the third quarter, and its Ivy League counterpart the University of Pennsylvania, which spent $360,000. The University of Washington was the top-spending public institution at $310,000, while Columbia University rounded out the top five with $290,000 in lobbying expenses for Q3.

    “Communicating the impact of Columbia’s researchers, scientists, scholars, and clinicians to policymakers in Washington, New York, and locally is vital, and we utilize a combination of in-house and outside professionals to ensure our message reaches key stakeholders, including our New York delegation,” a Columbia spokesperson wrote in an email to Inside Higher Ed.

    In addition to research funding and the One Big Beautiful Bill Act, common areas of focus noted in lobbying disclosure forms include appropriations, student visas and immigration, among other concerns that college officials have raised in private conversations with lawmakers on Capitol Hill.

    Including their third-quarter expenditures, several of the institutions above are among the top spenders for the year. Northwestern leads AAU members in lobbying expenses at $1.1 million, followed by the University of Washington at $1 million, JHU and Yale at $940,000, and Cornell at $914,000.

    Many universities dialed back lobbying expenses in the third quarter, some by significant amounts. Emory University, for example, spent $500,000 on lobbying in the second quarter but only $185,000 in Q3. Emory has spent $855,000 on lobbying in 2025.

    Though still among the top-spending AAU members, Cornell pulled back on lobbying, which fell to $240,000 in Q3 compared to $444,000 in the second quarter.

    Northwestern has cut spending in each successive quarter. The private university spent $607,000 on federal lobbying in Q1, the most of any university in any quarter this year. But that number fell to $306,000 in the second quarter and $230,000 more recently.

    Outliers

    Some universities outside the AAU also spent heavily on lobbying in the third quarter.

    The University of Phoenix, for example, spent $480,000 on federal lobbying efforts. Phoenix has spent consistently across all three quarters, totaling $1.4 million in lobbying expenditures in 2025. That appears to make the for-profit institution the top individual spender across the sector this year.

    Lobbying disclosure forms show Phoenix engaged on legislation, including the One Big Beautiful Bill Act and a bill related to student veteran benefits, but also on broad public policy issues.

    Phoenix officials declined to comment.

    Northeastern University is another top spender that falls outside of AAU membership. The university has spent $270,000 in each quarter, totaling $810,000 in 2025 lobbying expenditures.

  • Why busy educators need AI with guardrails

    In the growing conversation around AI in education, speed and efficiency often take center stage, but that focus can tempt busy educators to use what’s fast rather than what’s best. To truly serve teachers–and above all, students–AI must be built with intention and clear constraints that prioritize instructional quality, ensuring efficiency never comes at the expense of what learners need most.

    AI doesn’t inherently understand fairness, instructional nuance, or educational standards. It mirrors its training and guidance, usually as a capable generalist rather than a specialist. Without deliberate design, AI can produce content that’s misaligned or confusing. In education, fairness means an assessment measures only the intended skill and does so comparably for students from different backgrounds, languages, and abilities–without hidden barriers unrelated to what’s being assessed. Effective AI systems in schools need embedded controls to avoid construct-irrelevant content: elements that distract from what’s actually being measured.

    For example, a math question shouldn’t hinge on dense prose, niche sports knowledge, or culturally specific idioms unless those are part of the goal; visuals shouldn’t rely on low-contrast colors that are hard to see; audio shouldn’t assume a single accent; and timing shouldn’t penalize students if speed isn’t the construct.

    To improve fairness and accuracy in assessments:

    • Avoid construct-irrelevant content: Ensure test questions focus only on the skills and knowledge being assessed.
    • Use AI tools with built-in fairness controls: Generic AI models may not inherently understand fairness; choose tools designed specifically for educational contexts.
    • Train AI on expert-authored content: AI is only as fair and accurate as the data and expertise it’s trained on. Use models built with input from experienced educators and psychometricians.

    These subtleties matter. General-purpose AI tools, left untuned, often miss them.

    The risk of relying on convenience

    Educators face immense time pressures. It’s tempting to use AI to quickly generate assessments or learning materials. But speed can obscure deeper issues. A question might look fine on the surface but fail to meet cognitive complexity standards or align with curriculum goals. These aren’t always easy problems to spot, but they can impact student learning.

    To choose the right AI tools:

    • Select domain-specific AI over general models: Tools tailored for education are more likely to produce pedagogically sound and standards-aligned content that empowers students to succeed. In a 2024 University of Pennsylvania study, students using a customized AI tutor scored 127 percent higher on practice problems than those without.
    • Be cautious with out-of-the-box AI: Without expertise, educators may struggle to critique or validate AI-generated content, risking poor-quality assessments.
    • Understand the limitations of general AI: While capable of generating content, general models may lack depth in educational theory and assessment design.

    General AI tools can get you 60 percent of the way there. But that last 40 percent is the part that ensures quality, fairness, and educational value. This requires expertise to get right. That’s where structured, guided AI becomes essential.

    Building AI that thinks like an educator

    Developing AI for education requires close collaboration with psychometricians and subject matter experts to shape how the system behaves. This helps ensure it produces content that’s not just technically correct, but pedagogically sound.

    To ensure quality in AI-generated content:

    • Involve experts in the development process: Psychometricians and educators should review AI outputs to ensure alignment with learning goals and standards.
    • Use manual review cycles: Unlike benchmark-driven models, educational AI requires human evaluation to validate quality and relevance.
    • Focus on cognitive complexity: Design assessments with varied difficulty levels and ensure they measure intended constructs.

    This process is iterative and manual. It’s grounded in real-world educational standards, not just benchmark scores.

    Personalization needs structure

    AI’s ability to personalize learning is promising. But without structure, personalization can lead students off track. AI might guide learners toward content that’s irrelevant or misaligned with their goals. That’s why personalization must be paired with oversight and intentional design.

    To harness personalization responsibly:

    • Let experts set goals and guardrails: Define standards, scope and sequence, and success criteria; AI adapts within those boundaries.
    • Use AI for diagnostics and drafting, not decisions: Have it flag gaps, suggest resources, and generate practice, while educators curate and approve.
    • Preserve curricular coherence: Keep prerequisites, spacing, and transfer in view so learners don’t drift into content that’s engaging but misaligned.
    • Support educator literacy in AI: Professional development is key to helping teachers use AI effectively and responsibly.

    It’s not enough to adapt–the adaptation must be meaningful and educationally coherent.

    AI can accelerate content creation and internal workflows. But speed alone isn’t a virtue. Without scrutiny, fast outputs can compromise quality.

    To maintain efficiency and innovation:

    • Use AI to streamline internal processes: Beyond student-facing tools, AI can help educators and institutions build resources faster and more efficiently.
    • Maintain high standards despite automation: Even as AI accelerates content creation, human oversight is essential to uphold educational quality.

    Responsible use of AI requires processes that ensure every AI-generated item is part of a system designed to uphold educational integrity.

    An effective approach to AI in education is driven by concern–not fear, but responsibility. Educators are doing their best under challenging conditions, and the goal should be building AI tools that support their work.

    When frameworks and safeguards are built in, what reaches students is more likely to be accurate, fair, and aligned with learning goals.

    In education, trust is foundational. And trust in AI starts with thoughtful design, expert oversight, and a deep respect for the work educators do every day.

  • Director of Online Program Development at UVA

    The origins of “Featured Gigs” trace back to the first post in the series with Kemi Jona, vice provost for online education and digital innovation at UVA. While I had the idea for the series, it was Kemi who ultimately came up with most of the language for the four questions we use to explore opportunities at the intersection of learning, technology and organizational change. Today, Kemi answers questions about the role of director of online program development.

    Q: What is the university’s mandate behind this role? How does it help align with and advance the university’s strategic priorities?

    A: The 2030 Plan calls on the university to expand the reach of its educational programs—both in person and online—and to make UVA more accessible, including to learners across and beyond the Commonwealth. The University of Virginia’s Office of the Vice Provost for Online Education and Digital Innovation is a key part of advancing this charge on behalf of the university, helping our schools and institutes design, deliver and scale high-quality online and hybrid programs that extend UVA’s reach and impact.

    The director of online program development plays a central role in advancing UVA’s online education goals. The role is ideal for someone who thrives at the intersection of strategy, innovation and execution. The director will not only guide program development but also help UVA build the internal capacity and frameworks needed to sustain this growth long-term. This is a high-impact, high-visibility position that will help shape the next chapter of online and hybrid learning at UVA and potentially serve as a model for the sector.

    Q: Where does the role sit within the university structure? How will the person in this role engage with other units and leaders across campus?

    A: This role sits within the provost’s office and reports directly to the vice provost for online education and digital innovation. The director will guide UVA schools and institutes through the planning, launch and evaluation of new online and hybrid programs, serving as a trusted partner to deans, associate deans, program directors and faculty.

    This individual will bring structure and strategy to UVA’s online growth, helping schools scope opportunities, assess market demand, support business case development and build the readiness needed for sustained success. The role requires exceptional communication, diplomacy and systems-level thinking to align multiple stakeholders around a shared vision.

    Q: What would success look like in one year? Three years? Beyond?

    A: In service of the vision articulated in the 2030 Plan and aligned to the strategic goals of our partner schools and institutes, UVA is undertaking ambitious growth in its online and hybrid portfolio. In the first year, success means ensuring active projects move from planning to launch with clarity and momentum, establishing shared frameworks, timelines and accountability across partners.

    Within three years, success will be measured not only in the number of successful program launches but also in the maturity of UVA’s internal systems, talent and decision-making processes that enable continued agility and innovation.

    Longer term, the director will help institutionalize a robust, repeatable, data-informed model for program development so UVA’s schools can innovate faster and with greater confidence, while ensuring that all programs uphold UVA’s reputation for academic excellence.

    Q: What kinds of future roles would someone who took this position be prepared for?

    A: Because this individual will be deeply engaged in all aspects of online program design, development and launch, he or she will gain substantial experience working with deans, faculty and other senior leaders. This experience would help set up future leadership roles in online education and digital innovation or in the private sector.

    This role offers a rare opportunity to operate at the heart of institutional transformation—building systems and partnerships that inform how UVA advances its mission as we begin our third century as a leading public institution. The experience will prepare the director for senior university leadership roles in strategy, academic innovation or digital transformation. It will equip them with the cross-sector perspective and executive acumen valued by both higher education and mission-driven organizations beyond academia.

    Please get in touch if you are conducting a job search at the intersection of learning, technology and organizational change. If your gig is a good fit, featuring it on Featured Gigs is free.

  • How Colleges Use Anti-Elitist and Elite-Adjacent Campaigns

    Two university campaigns hit the national spotlight in recent weeks. Each tells a very different story about how colleges market themselves.

    Colorado Mesa University’s new Featherstone University spoof takes aim at elite school stereotypes, ending with the line “We care about who you are, not who you know.”

    Days later, The Wall Street Journal profiled High Point University in a turnaround story built on private wealth and exclusivity. Its campus features etiquette lessons, manicured gardens and an airplane cabin for networking drills. HPU prepares students for a world where who you know still matters.

    In an industry criticized for sameness, both CMU and HPU stand out as strategic outliers.

    Trust, Value and the Split in Demand

    Public trust in higher education is fragile. Concerns over cost, access and free speech have left families asking if it is worth it. Against this backdrop, two playbooks are emerging: anti-elitist authenticity and elite-adjacent experience.

    Playbook A: CMU’s Skepticism as Fuel

    Colorado Mesa University’s “Welcome to Featherstone” flips elite-school marketing on its head. The parody ends with a challenge: “We don’t care about who you know. We care about you.”

    For a public university serving rural, first-generation, working-class students, the message fits. CMU has built its brand on affordability, access and trust by cutting tuition, growing CMU Tech and guaranteeing free tuition for Colorado families earning $70,000 or less.

    This isn’t simply mocking the elite; it’s segmentation. CMU speaks to families who see higher education as a bridge, not a birthright. In a sea of interchangeable ads, it uses satire to say, “We hear your skepticism—and we’re still here for you.”

    A Take From Rural America

    CMU’s approach hit a nerve, but it also hit a truth.

    I was born in East Detroit, then raised in Richmond, Mich., a farming town of 4,000. When my parents learned our local high school wasn’t accredited, they sent my brothers and me to school an hour away. At that time, only 32 percent of the local high school graduates pursued college. I still remember junior high classmates missing school to plant and harvest corn and soybeans.

    For rural communities like these, college can feel distant—financially and culturally. CMU’s campaign speaks to them with rare honesty.

    Playbook B: High Point’s Experience as Advantage

    If CMU sells authenticity, High Point sells aspiration. Its campus hums with classical music and fountains, lined with rocking chairs and gardens designed for conversation. Students dine in on-campus restaurants that double as lessons in professional etiquette, and housing options range from traditional dorms to $40,000 tiny homes.

    President Nido Qubein calls it preparation, not pampering: “Half of Wall Street sends their kids here.” The model caters to families who can pay full price and want an environment that mirrors the careers their children expect to enter.

    It’s not subtle, but it shows the university understands its target audience. In an uncertain marketing environment, HPU is selling a vision of success that feels polished, predictable and safe.

    What the Models Reveal

    CMU and HPU reveal opposite, equally intentional strategies. CMU doubled down on affordability with its 2024 CMU Promise Tour, which reached 22 rural and urban communities, boosting first-year enrollment by 25 percent. HPU, meanwhile, courts families buying access and advantage through concierge-level amenities.

    CMU uses satire to mock exclusivity; HPU leans into luxury to promise it. Both know exactly whom they’re speaking to.

    Leadership Takeaways

    In a landscape of sameness and skepticism, higher ed leaders should ask, “What do we stand for—and how do we prove it?”

    Is it belonging and mobility like CMU, or exclusivity and polish like HPU? Either can work if it’s backed by programs, outcomes and transparency. Whatever your promise, ensure the experience delivers it.

    Both institutions have likely alienated some audiences, but they’ve connected deeply with their own. That’s the point of strategic marketing. Their playbooks, while different, seem to be working for Colorado Mesa and High Point, which both had record enrollments in fall 2025 amid national headlines warning of a demographic cliff.

    Beyond the Marketing

    Beyond the spotlight, both universities must prove results. Time and measurement will tell if they are delivering on access and affordability, or on postgraduate success and networks.

    Authenticity carries risk, as organizational psychologist Adam Grant recently noted in a New York Times op-ed, but when outcomes match promises, both models can be legitimate. Hide results or exaggerate benefits and either fails the test of ethics and equity.

    In a nation this diverse, there is no single market for higher ed—there are many markets. And in a landscape this stratified, the unforgivable sin isn’t satire or spectacle; it’s sameness without substance.

    Maria Kuntz is director of content marketing strategy and communications at the University of Colorado–Boulder. She leads content strategy for advancement, oversees the award-winning Coloradan alumni magazine and writes about storytelling, leadership and trust in higher education.

  • AAUP President Exacerbated “Organizational Antisemitism”

    In a letter to American Federation of Teachers president Randi Weingarten, Sen. Bill Cassidy, the Louisiana Republican who chairs the education committee, accused American Association of University Professors president and AFT vice president Todd Wolfson of promoting “organizational antisemitism” within the AAUP. 

    Cassidy cited an August Inside Higher Ed interview with Wolfson in which the union leader stood against sending weapons to Israel, accused the Trump administration of weaponizing antisemitism for political gains and advocated for the Jerusalem Declaration on Antisemitism, a definition of antisemitism that does not include anti-Zionism.

    Cassidy also referenced a statement from Wolfson calling Vice President JD Vance a fascist as well as a March letter to the AAUP from the Anti-Defamation League and Academic Engagement Network that said “the AAUP [is] being perceived as increasingly moving in a virulently anti-Israel direction, and as a result, growing insensitive and even hostile to the concerns of its Jewish and Zionist members.”

    “In the six months since he received this warning from one of the nation’s leading organizations dedicated to fighting antisemitism [ADL], Dr. Wolfson has not only failed to address these concerns but has exacerbated them,” Cassidy wrote. “Jewish faculty members deserve to carry out their work free from discrimination. As an association with a national presence, it is concerning that AFT has not only failed to help solve this problem but has made it worse by allowing Dr. Wolfson to continue to serve in a leadership role.”

    The AAUP is an affiliate of the AFT, one of the largest unions nationwide for K–12 and higher education professionals. The two became formally affiliated in 2022 and share some leadership, including Wolfson.

    Wolfson replied to Cassidy’s letter in a statement to Inside Higher Ed Monday.

    “It appears Senator Cassidy and his GOP colleagues are furious that seven universities have rejected Trump’s absurd Higher Ed Loyalty Oath. Rather than reckon with their failed attempt to strong-arm higher education, they’ve chosen to complain to our national affiliate, AFT, because AAUP dared to hold a webinar,” Wolfson wrote, referring to an AAUP webinar called “Scholasticide in Palestine” that Cassidy referenced in the letter. “I would respectfully suggest they spend less time trying to undermine my constitutional rights and more time focusing on what Americans actually care about—like reopening the government, lowering healthcare costs, and addressing the cost-of-living crisis.”

    Cassidy wants Weingarten to tell him by Nov. 6 how AFT is addressing the concerns raised by the ADL and to share more details about how she’s working with the AAUP to ensure Jewish members aren’t experiencing antisemitism. He also asked Weingarten whether AFT publicly condemns Wolfson’s remarks.
