Shouldn’t College Be for Learning?

Communicate How Your Campus Connects Education to Careers

In a long, passionate, well-reasoned, thoroughly evidenced cri de coeur published at Current Affairs, San Francisco State professor Ronald Purser declares, “AI Is Destroying the University and Learning Itself.”

That attention-grabbing headline is a bit misleading, because as Purser makes clear in the article, it is not “AI” itself that is destroying these things. The source of the problem is human beings, primarily the human beings in charge of universities who have looked at the offerings from tech companies and, failing to recognize the vampire prepared to drain their institutions of their life force, have not only invited it across the threshold but declared it their new bosom buddy.

Dartmouth College recently announced a deal with Anthropic/Amazon Web Services that Dartmouth president Sian Beilock declared “is more than a collaboration.” The promises are familiar, using AI “to augment—not replace—student learning,” as though this is something we know how to do, and as though it is best explored en masse across all aspects of the university simultaneously rather than through careful experimentation. I think I understand some of the motivation behind these kinds of deals, a desire to seize some sense of agency in uncertain times, but the idea that even an institution as august as Dartmouth, with its long history in the development of artificial intelligence, will be a “collaborator” with these two entities is wishful thinking, IMO.

Purser’s piece details much of what I’ve heard in my travels from institution to institution to speak and consult on these issues. There is a lot of well-earned angst out there, particularly in places where administrations have made bets that look like a Texas Hold’em player pushing all in on a pair of eights. No consultation, no collaboration, no vision beyond vague promises of future abundance. A recent AAUP report stemming from a survey of 500 of its members shows that one of the chief fears of faculty is being sidelined entirely as administrations strike these deals.

This uninvited guest has thrown much of what we would consider the core purpose of the university in doubt. As Purser says, “Students use AI to write papers, professors use AI to grade them, degrees become meaningless, and tech companies make fortunes. Welcome to the death of higher education.”

While Purser’s account is accurate to a degree, I also want to say that it is not complete. As I wrote a couple of months ago, there are also great signs of progress in terms of addressing the challenges of the moment. The kind of administrative and institutional carelessness that Purser documents is not universal, and even under those conditions, faculty and students are finding ways to do meaningful work. Many people are successfully addressing what I’ve long believed is the core problem, the “transactional model” of schooling that actively dissuades students from taking the risks required for learning and personal development.

One of the most frequent observations I’ve made in doing this work is that many, perhaps even most, students have no real enthusiasm for an AI-mediated future where their thoughts and experiences are secondary to the outputs of an LLM. The fact that they find those outputs useful in school contexts is the problem.

I was greatly cheered by this account from Matt Dinan, who details how he built the experiences of his course from root pedagogical values in a way that clearly signals to students the importance of doing the work for themselves, the value of their own thoughts, and the sincere belief that taking a risk to learn is worth doing and well supported.

What we see is that success comes from giving instructors the freedom to work the problem under conditions that allow it to be solved. Note that this does not necessarily require a rejection of AI. There’s plenty of room for those more interested in AI to explore its integration, but it does mean doing more than signaling to faculty and students, “You’re going to use AI and you’re going to like it.”

Much of what Purser describes is not only the imposition of AI, but the imposition of AI in a system that has been worn down through austerity measures over many decades, leaving it vulnerable to what is nothing more than an ideology promising increased efficiency and lower cost while still allowing the institutions to collect tuition revenue. This thinking reduces the “value proposition” of higher ed to its credentialing purpose.

I know that the popular image of colleges and universities is that they are slow to change, but I have actually been surprised at the speed at which many institutions are making this AI future bet, particularly when we don’t know what future we’re betting on.

Applying the tech ethos of “move fast and break things” to education has gained some traction because there is evidence to point toward and say, “This thing is already broken, so what do we have to lose?”

We could lose a lot—and lose it forever.

I remain open to the idea that generative AI and whatever comes after it can have positive effects on higher education, but I am increasingly convinced that when it comes to the experiences of learning, we know very little about how this should be done. As Justin Reich wrote recently at The Chronicle, “stop pretending you know how to teach AI.”

We shouldn’t abandon the things we do know how to teach (like writing) while we experiment with this new technology. We shouldn’t dodge the structural barriers that Ronald Purser outlines in his piece, hoping for an AI savior around the corner. This isn’t what students want, it isn’t what students need, and it isn’t a way to secure an ongoing value proposition for higher education.
