
  • The Hidden Curriculum of Student Conduct Proceedings


    For first-generation students, the hidden curriculum—the unstated norms, policies and expectations students need to know in higher education—can be a barrier to participating in high-impact practices, leaving them in the dark about how to thrive in college.

    But new research aims to identify the lesser-known policies that disadvantage first-generation students and to make them more accessible. During a panel presentation at NASPA’s Student Success in Higher Education conference in June, Kristin Ridge, associate dean of students and community standards at the University of Rhode Island, discussed her doctoral research on first-generation students and how they interact with the student handbook and conduct spaces on campus.

    What’s the need: First-generation students make up 54 percent of all undergraduates in the U.S., or about 8.2 million students. But only one in four first-generation students graduates with a college degree, compared to nearly 60 percent of continuing-generation students.

    First-generation students are often diverse in their racial and ethnic backgrounds and come with a variety of strengths, which academic Tara Yosso describes as the cultural wealth model. But in some areas, including higher ed’s bureaucratic processes, first-gen students can lack family support and guidance to navigate certain situations, Ridge said. Her personal experience as a first-generation learner and a conduct officer pushed her to research the issue.

    “It really came to a head when I was dealing with two students who had a similar circumstance, and I felt like one had a better grasp of what was going on than the other one, and that was something that didn’t sit right with me,” Ridge said. “I felt like the behavior should be what I am addressing and what the students are learning from, not their previous family of origin or lived experience.”

    Conduct systems are complicated because they require a fluency to navigate the bureaucracy, Ridge said. Student handbooks are often written like legal documents, but the goal of disciplinary proceedings is for students to learn from their behavior. “If a student doesn’t understand the process or the process isn’t accessible to them, there are very real consequences that can interrupt their educational journey,” she added.

    Some states require conduct sanctions to be placed on a student’s transcript or a dean’s report for transfer application. These sanctions can result in debt, stranded credits or underemployment if students are unable to transfer or earn a degree.

    “Sometimes [continuing-generation] students who have parents or supporters can better understand what the implications of a sanction would be,” Ridge says. “Students who don’t have that extra informed support to lean on may unwittingly end up with a sanction that has more long-term impact than they realize.”

    First-generation students may also experience survivor’s or breakaway guilt for having made it to college, which can result in them being less likely to turn to their families for help if they break the student code of conduct or fear they will be expelled for their actions, Ridge said.

    Therefore, colleges and universities should seek to create environments that ensure all students are aware of conduct procedures, the content of the student handbook and how to receive support and advocacy from both the institution and their communities, Ridge said.

    Creating solutions: Some key questions conduct staff members can ask themselves, Ridge said, include:

    • Is the handbook easy to access, or is it hidden behind a login or pass code? If students or their family members or supporters have to navigate additional steps to read the student handbook, it limits transparency and opportunities for support.
    • Is content available in plain English or as an FAQ page? While institutions must outline some expectations in specific language for legal reasons, ensuring all students understand the processes increases transparency. “I like to say I want [students] to learn from the process, not feel like the process happened to them,” Ridge said.
    • Is the handbook available in other languages? Depending on the student population, offering the handbook in additional languages can address equity concerns about which families can support their students. Hispanic-serving institutions, for example, should offer the handbook in Spanish, Ridge said.
    • Who is advocating for students’ rights in conduct conversations? Some institutions offer students a conduct adviser, which Ridge says should be an opt-in rather than opt-out policy.
    • Is conduct addressed early in the student experience? Conduct is not a fun office; “no one’s going to put us on a parade float,” Ridge joked. That’s why it’s vital to ensure that students receive relevant information when they transition into the institution, such as during orientation. “My goal is for them to feel that they are holding accountability for their choices, that they understand and learn from the sanctions or the consequences, but I don’t want them to be stressed about the process,” Ridge said. Partnering with campus offices, such as TRIO or Disability Services, can also ensure all students are aware of conduct staff and the office is seen less as punitive.



  • Essay on Faculty Engagement and Web Accessibility (opinion)


    Inaccessible PDFs are a stubborn problem. How can we marshal the energy within our institutions to make digital course materials more accessible—one PDF, one class, one instructor at a time?

    Like many public higher education institutions, William & Mary is working to come into compliance with the Web Content Accessibility Guidelines by April 2026. These guidelines aim to ensure digital content is accessible for people who rely on screen readers and require that content be machine-readable.

    Amid a flurry of other broad institutional efforts to comply with the federal deadline, my colleague—coordinator of instruction for libraries Liz Bellamy—and I agreed to lead a series of workshops designed to help instructors improve the accessibility of their digital course materials. We’ve learned a lot along the way that we hope can be instructive to other institutions engaged in this important work.

    What We’ve Tried

    Our first big hurdle wasn’t technical—it was cultural, structural and organizational. At the same time various groups across campus were addressing digital accessibility, William & Mary had just moved our learning management system from Blackboard Learn to Blackboard Ultra, we were beginning the rollout of new campuswide enterprise software for several major institutional areas, the institution achieved R-1 status and everyone had so many questions about generative AI. Put plainly, instructors were overwhelmed, and inaccessible PDFs were only one of many competing priorities vying for their attention.

    To tackle the issue, a group of institutional leaders launched the “Strive for 85” campaign, encouraging instructors to raise their scores in Blackboard Ally, which provides automated feedback to instructors on the accessibility of their course materials, to 85 percent or higher. The idea was simple—make most course content accessible, starting with the most common problem: PDFs that are not machine-readable.

    We kicked things off at our August 2024 “Ready, Set, Teach!” event, offering workshops and consultations. Instructors learned how to find and use their Ally reports, scan and convert PDFs, and apply practical strategies to improve digital content accessibility. In the year that followed, we tried everything we could think of to keep the momentum going and move the needle on our institutional Ally score above the baseline. Despite our best efforts, some approaches fell flat:

    • Let’s try online workshops! Low engagement.
    • What about in-person sessions? Low attendance.
    • But what if we feed them lunch? Low attendance, now with a fridge full of leftovers.
    • OK, what if we reach out to department chairs and ask to speak in their department meetings? It turns out department meeting agendas are already pretty full; response rates were … low (n = 1).

    The truth is, instructors are busy. Accessibility often feels like one more thing on an already full plate. So far, our greatest success stories have come from one-on-one conversations and by identifying departmental champions—instructors who will model and advocate for accessible practices with discipline-specific solutions. (Consider the linguistics professor seeking an accurate 3-D model of the larynx collaborating with a health sciences colleague, who provided access to an interactive model from an online medical textbook—enhancing accessibility for students learning about speech production.)

    But these approaches require time and people power we don’t always have. Despite the challenges we’ve faced with scaling our efforts, when success happens, it can feel a little magical, like the time at the end of one of our highly attended workshops (n = 2) when a previously skeptical instructor reflected, “So, it sounds like accessibility is about more than students with disabilities. This can also help my other students.”

    What We’ve Learned

    Two ingredients seem essential:

    1. Activation energy: Instructors need a compelling reason to act, but they also need a small step to get started; otherwise, the work can feel overwhelming.

    Sometimes this comes in the form of an individual student disclosing their need for accessible content. But often, college students (especially first year or first generation) don’t disclose disabilities or feel empowered to advocate for themselves. For some instructors, seeing their score in Ally is enough of a motivation—they’re high achievers, and they don’t want a “low grade” on anything linked to their name. More often, though, we’ve seen instructors engage in this work because a colleague or department chair tells them they need to. Leveraging positive peer pressure, coupled with quick practical solutions to improve accessibility, seems to be an effective approach.

    2. Point-of-need support: Help must be timely, relevant and easy to access.

    When instructors feel overwhelmed by the mountain of accessibility recommendations in their Ally reports, they are often hesitant to even get started. We’ve found that personal conversations about student engagement and course content or design often provide an opening to talk about accessibility. And once the door is open, instructors are often very receptive to hearing about a few small changes they can make to improve the accessibility of their course content.

    Where Things Stand

    Now for the reality check. So far, our institutional Ally score has been fairly stagnant; we haven’t reached the 85 percent goal we set for ourselves. And even for seasoned educational developers, it can be discouraging to see so little change after so much effort. But new tools offer hope. Ally recently announced planned updates to allow professors to remediate previously inaccessible PDFs directly in Blackboard without having to navigate to another platform. If reliable, this could make remediation more manageable, providing a solution at the point of need and lowering the activation energy required to solve the problem.

    We’re also considering:

    • Running focus groups to better understand what motivates instructors to engage in this work.
    • Exploring pop-up notifications with accessibility tips and reminders when instructors log in to Blackboard, to raise awareness and make the most of point-of-need supports.
    • Defining “reasonable measures” for compliance, especially for disciplines with unique content needs (e.g., organic chemistry, modern languages and linguistics).

    Leading With Empathy

    One unintended consequence we’ve seen: Some instructors are choosing to stop uploading digital content altogether. Faced with the complexity of digital accessibility requirements, they’re opting out rather than adapting. Although this could help our institutional compliance score, it’s often a net loss for students and for learning, so we want to find a path forward that doesn’t force instructors to make this kind of choice.

    Accessibility is about equity, but it’s also about empathy. As we move toward 2026, we need to support—not scare—instructors into compliance. Every step we make toward increased accessibility helps our students. Every instructor champion working with their peers to find context-specific solutions helps further our institutional goals. Progress over perfection might be the only sustainable path forward.


  • Keep in Mind That AI Is Multimodal Now


    Remember in late 2022 when ChatGPT arrived on the international scene and you communicated with AI through a simple chatbot interface? It was remarkable that you could type in relatively short prompts and it would instantly type back directly to you—a machine with communication capability!

    For most of us, this remains the most common daily mode of accessing and utilizing AI. Many of us are using AI only as a replacement for Google Search. In fact, Google’s AI Overviews, announced last year for a significant portion of users and search queries, are now a standard feature. They appear at the top of the results page and invite you to follow up with a deeper dive before you reach the traditional list of links; as of mid-June 2025, they are a common sight at the top of search results pages. Yet the whole world of communication is open now for most of the frontier models of AI—and with the new communication modes comes a whole world of possibilities.

    In order to more fully utilize the remarkable range of capabilities of AI today, we need to become comfortable with the many input and output modes that are available. From audio, voice, images and stunning video to long, formally formatted documents, spreadsheets, computer code, databases and more, the potential to input and output material goes beyond what most of us take for granted. That is not to mention the emerging potential of embodied AI, which includes all of these capabilities in a humanoid form, as discussed in this column two weeks ago.

    So, what can AI do with images and videos? Of course, you can import images as still photographs and instruct AI to edit the photos, adding or deleting objects within the image. Many apps do this exceptionally well. This does raise questions about deepfakes, images that can be shared as if they were real, when actually they are altered by AI in an attempt to mislead the public. Most such images do carry a watermark that indicates the image was generated or altered by AI. However, there are watermark removers that will wash away those well-intended alerts.

    One example of using the image capability of AI is in the app PictureThis, which describes itself as a “botanist in your pocket.” As one would expect, you can upload a picture from your smartphone and it will identify the plant. It will also provide a diagnosis of any conditions or diseases that it can determine through the image, offer care suggestions such as optimal lighting and watering, point out toxicity to humans and pets, and provide tips on how to help your plant thrive. In education, we can utilize AI to provide these kinds of services to learners who simply take a snapshot of their work.

    We can build upon the PictureThis example to create a kind of “professor in your pocket” that offers enhanced responses to images that might include, for example, an attempt to solve a mathematical problem, a chemistry formula or an outline for an essay. The student may simply take a smartphone photo or screenshot of their work and share it with the app, which will respond with what may be right and wrong in the work as well as ideas for further research and helpful context.

    Many of us are in positions where we need to construct spreadsheets, PowerPoint presentations and more formal reports with cover pages, tables of contents, citations and references. AI stands ready to convert data, text and free-form writing into perfectly formatted final products. Use the upload icon that is commonly located near the prompt window in ChatGPT, Gemini, Claude or other leading models to upload your material for analysis or formatting. Gemini, a Google product, has direct connections with Google apps.

    Many of these features are available on the free tier of the products. Most major AI companies offer a subscription tier for around $20 per month that provides limited access to higher levels of their products. In addition, there are business, enterprise, cloud and API tiers that serve organizations and developers. As a senior fellow conducting research, I maintain a couple of subscriptions that enable me to move seamlessly through my work process: from ideation to creation of content, from content creation to enhancement of the research with creative concepts and, finally, to development of a formal final report.

    Using the pro versions gives access to deep research tools in most cases. This mode provides far more “thinking” by the AI tool, which can provide more extensive web-based research, generate novel ideas and pursue alternative approaches with extensive documentation, analysis and graphical output in the form of tables, spreadsheets and charts. Using a combination of these approaches, one can assemble a thoughtful deep dive into a current or emerging topic.

    AI can also provide effective “brainstorming” that integrates deep insights into the topics being explored. One currently free tool is Stanford University’s Storm, a research prototype that supports interactive research and creative analyses. Storm assists with article creation and development and offers an intriguing roundtable conversation that enables several virtual and human participants to join in the brainstorming from distant locations.

    This has tremendous potential for sparking interactive debates and discussions among learners that can include AI-generated participants. I encourage faculty to consider using this tool as a developmental activity for learners to probe deeply into topics in your discipline as well as to provide experience in collaborative virtual discussions that presage experiences they may encounter when they enter or advance in the workforce.

    In general, we are underutilizing not only the analytical and composition capabilities of AI, but also the wealth of multimode capabilities of these tools. Depending upon your needs, we have both input and output capabilities in audio, video, images, spreadsheets, coding, graphics and multimedia combinations. The key to most effectively developing skill in the use of these tools is to incorporate their time-saving and illustrative capabilities into your daily work.

    So, if you are writing a paper and have some data to include, try out an AI app to generate a spreadsheet and choose the best chart to clarify and emphasize trends. If you need a modest app to perform a repetitive function for yourself or for others, for example, calculating the mean, mode and standard deviation of a data set, you can describe the inputs and outputs to AI and prompt it to create the code for you. Perhaps you want a short video clip that simulates how a new process might work; AI can create one from a description of the scene that you provide. If you want a logo for a prospective project, initiative or other activity, AI will give you a variety of custom-created logos. In all cases, you can ask for revisions and alterations. Think of AI as a dedicated assistant with multimedia skills who is eager to help you with these tasks. If you are not sure how to get started, of course, just ask AI.
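    To make the mean/mode/standard-deviation example concrete, here is a rough sketch of the kind of small helper script an AI assistant might produce from such a prompt. The function name and sample data are hypothetical; Python's standard `statistics` module does the actual math.

    ```python
    from statistics import mean, mode, stdev

    def summarize(values):
        """Return the mean, mode and sample standard deviation of a list of numbers."""
        return {
            "mean": mean(values),
            "mode": mode(values),
            "stdev": stdev(values),
        }

    # Hypothetical sample data, e.g. a set of exam scores.
    scores = [88, 92, 75, 92, 81, 90]
    print(summarize(scores))
    ```

    Describing the inputs (a list of numbers) and the desired outputs (three named statistics) is usually enough for an AI tool to generate something along these lines, which you can then refine through follow-up prompts.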


  • Higher education leadership is at an inflection point – we must transform, or be transformed


    At a recent “fireside chat” at a sector event, after I had outlined to those present some details of the transformational journey the University of East London (UEL) has been on in the past six years, one of those attending said to me: “Until UEL has produced Nobel Prize winners, you can’t say it has transformed.”

    While I chose not to address the comment immediately – the sharp intake of breath and rebuttals that followed from other colleagues present seemed enough at the time – it has played on my mind since.

    It wasn’t so much the comment’s narrow mindedness that shocked, but the confidence with which it was delivered. Yet, looking at the ways in which we often celebrate and highlight sector success – through league tables, mission groups, or otherwise – it is little wonder my interlocutor felt so assured in his worldview.

    Value judgement

    This experience leads me to offer this provocation: as a sector, many of our metrics are failing us, and we must embrace the task of redefining value in 21st century higher education with increased seriousness.

    If you disagree, and feel that traditional proxies such as the number of Nobel Prizes awarded to an institution should continue to count as the bellwethers for quality, you may wish to pause and consider a few uncomfortable truths.

    Yes, the UK is a global leader in scientific excellence. But we are also among the worst in the OECD for translating that science into commercial or productivity gains. The UK is a leading global research hub, producing 57 per cent more academic publications than the US in per capita terms. Yet compared to the US, the UK lags significantly behind in development and scale-up metrics like business-funded R&D, patents, venture capital and unicorns.

    Universities have been strongly incentivised to increase research volume in recent years. But, as the outgoing chief executive of UKRI, Ottoline Leyser, recently posited to the Commons Science, Innovation and Technology Committee, do we need to address this relatively unstrategic expansion of research activity across a range of topics, detached from economic growth and national priorities? Our global rankings – built on proxies like Nobel Prizes – are celebrated, while our real-world economic outcomes stagnate. We excel in research, yet struggle in relevance. That disconnect comes at a cost.

    I recently contributed to a collection of essays on entrepreneurial university leadership, edited by Ceri Nursaw and published by HEPI – a collection that received a somewhat critical response in the pages of Research Professional, with the reviewer dismissing the notion of bold transformation on the basis that: “The avoidance of risk-taking is why universities have endured since the Middle Ages.”

    Yes. And the same mindset that preserved medieval institutions also kept them closed to women, divorced from industry, and indifferent to poverty for centuries. Longevity is not the same as leadership – and it’s time we stopped confusing the two. While we should all be rightfully proud of the great heritage of our sector, we’re at real risk of that pride choking progress at a critical inflection point.

    Lead or be led

    Universities UK chief executive Vivienne Stern’s recent keynote at the HEPI Annual Conference reminded us that higher education has evolved through tectonic shifts such as the industrial revolution’s technical institutes, the social revolution that admitted women, the 1960s “white heat” of technological change, and the rise of mass higher education.

    Now we are on the edge of the next seismic evolution. The question is: will the sector lead it, or be shaped by it? At the University of East London, we’ve chosen to lead by pressing ahead with a bold transformation built on a central premise that a careers-first approach can drive success in every part of the university – not on precedents that leave us scrambling for relevance in a changing world.

    With this momentum, we’ve achieved the UK’s fastest, most diversified, debt-free revenue growth. We’ve become an engine of inclusive enterprise, moving from 90th to 2nd in the UK for annual student start-ups in six years, with a more than 1,000 per cent increase in the survival of student-backed businesses. We’ve overseen a 25-point increase in positive graduate outcomes – the largest, fastest rise in graduate success – as well as ranking first in England for graduating students’ overall positivity. We use money like we use ideas: to close gaps, not widen them. To combat inequality, not entrench it.

    So, let me return to the Nobel Prize comment. The metrics that matter most to our economy and society, the achievements that tangibly improve lives, are not displayed in glass cabinets – rather those that matter most are felt every day by every member of our society. Recent polling shows what the public wants from growth: improved health and wellbeing, better education and skills, reduced trade barriers. Our government’s policy frameworks – from the industrial strategy to the AI strategy – depend on us as a sector to deliver those outcomes.

    Yet how well do our reputational rankings align with these national imperatives? How well does our regulatory framework reward the institutions that deliver on them? Are we optimising for prestige – or for purpose? We are living at a pivot point in history. The institutions that thrive through it will not be those that retreat into tradition. They will be those that rethink leadership, rewire purpose, and reinvent practice.

    Too much of higher education innovation is incremental; transformational innovation is rare. But it is happening – if we choose to see it, support it, and scale it. I urge others to join me in making the case for such a choice, because the next chapter of higher education will be written by those who act boldly now – or rewritten for those who don’t.


  • Higher Education Inquirer : IMPORTANT INFO for Sweet v Cardona (now Sweet v McMahon) CLASS


    Just dropping this IMPORTANT INFO from the DOE for Sweet v Cardona (now Sweet v McMahon) peeps who are CLASS – DECISION GROUPS and POST-CLASS.

    Edited To Add

    Decisions Class are streamlined R and R submissions.

    Post-class denials MUST ask the DOE for a reconsideration, which allows you to add additional evidence.

    Original Post:

    For REVISE and RESUBMIT (R and R) notices, the DOE is now saying that they WILL “disregard R and R” submissions if you EMAIL additional supporting documents or material. You CANNOT email the R and R back.

    You MUST submit a NEW BDTR APPLICATION and INCLUDE your previous BDTR application number, which can be found on the Denial letter.

    YOU HAVE 6 MONTHS TO RE-SUBMIT FROM THE RECEIPT OF THE R AND R NOTICE (here: https://studentaid.gov/borrower-defense/)

    The DOE states, “If you email supplemental information to the DOE or attempt to update your existing application, you will be treated as having failed to Revise and Resubmit”.

    ALSO, If you are still trying to add more evidence to your BDTR application this late in the game, you may want to wait for the decision letter to come out. We are reaching Group 5 Decision deadline, and Post-Class is 6 months after that. If you feel uneasy about your evidence, START collecting it now!

    Follow all DIRECTIONS on anything you get from the DOE relating to BDTR (except demanding payment, they can pound sand LOL).

    In Solidarity!!!


  • Higher Education Inquirer : College Financial Aid: How It Really Works


    Crucial Insights: Understanding College Financial Aid Dynamics

    (00:02:56) Variety of College Financial Assistance Options
    (00:05:18) Scholarships: Balancing Merit and Financial Need
    (00:10:00) Student Selection Strategies in College Admissions
    (00:21:40) Financial Aid Strategy at Competitive vs. Smaller Schools
    (00:26:29) Major-based Financial Aid Allocation in Colleges


  • The Quick Convo All Writing Teams Should Have (opinion)


    Scenario 1: You’re part of a cross-disciplinary group of faculty members working on the new general education requirement. By the end of the semester, your group has to produce a report for your institution’s administration. As you start to generate content, one member’s primary contributions focus on editing for style and mechanics, while the other members are focused on coming to an agreement on the content and recommendations.

    Scenario 2: When you’re at the stage of drafting content for a grant, one member of a writing team uses strikethrough to delete a large chunk of text, with no annotation or explanation for the decision. The writing stops as individual participants angrily back channel.

    Scenario 3: A team of colleagues decides to draft a vision statement for their unit on campus. They come to the process assuming that everyone has a shared idea about the vision and mission of their department. But when they each contribute a section to the draft, it becomes clear that they are not, in fact, on the same page about how they imagine the future of their unit’s work.

    In the best-case scenario, we choose the people we write with: people whom we trust, who we know will pull their weight and might even be fun to work with. However, many writing situations are thrust upon us rather than carefully chosen. We have to complete a report, write an important email, articulate a new policy, compose and submit a grant proposal, author a shared memo, etc., with a bunch of folks we would likely not have chosen on our own.

    Further, teams of employees tasked with writing are rarely selected because of their ability to write well with others, and many don’t have the language to talk through their preferred composing practices. Across professional writing and within higher education, the inability to work collaboratively on a writing product is the cause of endless strife and inefficiency. How can we learn how to collaborate with people we don’t choose to write with?

    Instead of just jumping into the writing task, we argue for a quick conversation about writing before any team authorship even starts. If time is limited, this conversation doesn’t necessarily need to be more than 15 minutes (though devoting 30 minutes might be more effective) depending on the size of the writing team, but it will save you time—and, likely, frustration—in the long run.

    Drawing from knowledge in our discipline—writing studies—we offer the following strategies for a guided conversation before starting any joint writing project. The quick convo should serve to surface assumptions about each member’s beliefs about writing, articulate the project’s goal and genre, align expectations, and plan the logistics.

    Shouldn’t We Just Use AI for This Kind of Writing?

    As generative AI tools increasingly become integrated into the writing process, or even supplant parts of it, why should people write at all? Especially, why should we write together when people can be so troublesome?

    Because writing is thinking. Certainly, the final writing product matters—a lot—but the reason getting to the product can be so hard is that writing requires critical thinking around project alignment. Asking AI to do the writing skips the hard planning, thinking and drafting work that will make the action/project/product that the writing addresses more successful.

    Further, we do more than just complete a product/document when we write (either alone or together)—we surface shared assumptions, we come together through conversation and we build relationships. A final written product that has a real audience and purpose can be a powerful way to build community, and not just in the sense that it might make writers feel good. An engaged community is important, not just for faculty and staff happiness, but for productivity, for effective project completion and for long-term institutional stability.

    Set the Relational Vibe

    To get the conversation started, talk to each other: Do real introductions in which participants talk about how they write and what works for them. Talk to yourself: Do a personal gut check, acknowledging any feelings/biases about group members, and commit to being aware of how these personal relationships/feelings might influence how you perceive and accept their contributions. Ideas about authorship, ownership and credit, including emotional investments in one’s own words, are all factors in how people approach writing with others.

    Articulate the Project Purpose and Genre

    Get on the same page about what the writing should do (purpose) and what form it should take (genre). Often the initial purpose of a writing project is simply that you’ve been assigned a task—students may find it funny that so much faculty and staff writing at the university is essentially homework! Just like our students, we have to go beyond the bare minimum of meeting a requirement to find out why that writing product matters, what it responds to and what we want it to accomplish. To help the group come to agreement about form and writing conventions, find some effective examples of the type of project you’re trying to write and talk through what you like about each one.

    Align Your Approach

    Work to establish a sense of shared authorship—a “we” approach to the work. This is not easy, but it’s important to the success of the product and for the sake of your sanity. Confront style differences and try to come to agreement about not making changes to each other’s writing that don’t necessarily improve the content. There’s always that one person who wants to add “nevertheless” for every transition or write “next” instead of “then”—make peace with not being too picky. Or, agree to let AI come in at the end and talk about the proofreading recommendations from the nonperson writer.

    This raises another question: With people increasingly integrating ChatGPT and its ilk into their processes (and Word/Google documents offering AI-assisted authorship tools), how comfortable is each member of the writing team with integrating AI-generated text into a final product?

    Plan the Logistics

    Where will collaboration occur? In person, online? Synchronously or asynchronously? In a Google doc, on Zoom, in the office, in a coffee shop? Technologies and timing both influence process, and writers might have different ideas about how and when to write (ideas that might vary based on the tools that your team is going to use).

    When will collaboration occur? Set deadlines and agree to stick with them. Be transparent about expectations from and for each member.

    How will collaboration occur? In smaller groups/pairs, all together, or completely individually? How will issues be discussed and resolved?

    Finally, Some Recommendations on What Not to Do

    Don’t:

    • Just divvy up the jobs and call it a day. This will often result in a disconnected, confusing and lower-quality final product.
    • Take on everything because you’re the only one who can do it. This is almost never true and is a missed opportunity to build capacity among colleagues. Developing new skills is an investment.
    • Overextend yourself and then resent your colleagues. This is a surefire path to burnout.
    • Sit back and let other folks take over. Don’t be that person.

    Source link

  • AI, Irreality and the Liberal Educational Project (opinion)

    AI, Irreality and the Liberal Educational Project (opinion)

    I work at Marquette University. As a Roman Catholic, Jesuit university, we’re called to be an academic community that, as Pope John Paul II wrote, “scrutinize[s] reality with the methods proper to each academic discipline.” That’s a tall order, and I remain in the academy, for all its problems, because I find that job description to be the best one on offer, particularly as we have the honor of practicing this scrutinizing along with ever-renewing groups of students.

    This bedrock assumption of what a university is continues to give me hope for the liberal educational project, despite the ongoing neoliberalization of higher education and some administrators’ and educators’ willingness either to ignore or to uncritically celebrate the generative software (commonly referred to as “generative artificial intelligence”) explosion of the last two years.

    In the time since my last essay in Inside Higher Ed, and as Marquette’s director of academic integrity, I’ve had plenty of time to think about this and to observe praxis. In contrast to the earlier essay, which was more philosophical, let’s get more practical here about how access to generative software is impacting higher education and our students and what we might do differently.

    At the academic integrity office, we recently had a case in which a student “found an academic article” by prompting ChatGPT to find one for them. The chat bot obeyed, as mechanisms do, and generated a couple pages of text with a title. This was not from any actual example of academic writing but instead was a statistically probable string of text having no basis in the real world of knowledge and experience. The student made a short summary of that text and submitted it. They were, in the end, not found in violation of Marquette’s honor code, since what they submitted was not plagiarized. It was a complex situation to analyze and interpret, done by thoughtful people who care about the integrity of our academic community: The system works.

    In some ways, though, such activity is more concerning than plagiarism, for, at least when students plagiarize, they tend to know the ways they are contravening social and professional codes of conduct—the formalizations of our principles of working together honestly. In this case, the student didn’t see the difference between a peer-reviewed essay published by an academic journal and a string of probabilistically generated text in a chat bot’s dialogue box. To not see that difference—or to not care about it—is more disconcerting to me than straightforward breaches of an honor code, however harmful and sad such breaches are.

    I already hear folks saying: “That’s why we need AI literacy!” We do need to educate our students (and our colleagues) on what generative software is and is not. But that’s not enough. Because one also needs to want to understand and, as is central to the Ignatian Pedagogical Paradigm that we draw upon at Marquette, one must understand in context.

    Another case this spring term involved a student whom I had spent several months last fall teaching in a writing course that took “critical AI” as its subject matter. Yet this spring term the student still used a chat bot to “find a quote in a YouTube video” for an assignment and then commented briefly on that quote. The problem was that the quote used in the assignment does not appear in the selected video. It was a simulacrum of a quote; it was a string of probabilistically generated text, which is all generative software can produce. It did not accurately reflect reality, and the student did not cite the chat bot they’d copied and pasted from, so they were found in violation of the honor code.

    Another student last term in the Critical AI class prompted Microsoft Copilot to give them quotations from an essay, which it mechanically and probabilistically did. They proceeded to base their three-page argument on these quotations, none of which said anything like what the author in question actually said (not even the same topic); their argument was based in irreality. We cannot scrutinize reality together if we cannot see reality. And many of our students (and colleagues) are, at least at times, not seeing reality right now. They’re seeing probabilistic text as “good enough” as, or conflated with, reality.

    Let me point more precisely to the problem I’m trying to put my finger on. The student who had a chat bot “find” a quote from a video sent an email to me, which I take to be completely in earnest and much of which I appreciated. They ended the email by letting me know that they still think that “AI” is a really powerful and helpful tool, especially as it “continues to improve.” The cognitive dissonance between the situation and the student’s assertion took me aback.

    Again: the problem with the “We just need AI literacy” argument. People tend not to learn what they do not want to learn. If our students (and people generally) do not particularly want to do work, and they have been conditioned by the use of computing and their society’s habits to see computing as an intrinsic good, “AI” must be a powerful and helpful tool. It must be able to do all the things that all the rich and powerful people say it does. It must not need discipline or critical acumen to employ, because it will “supercharge” your productivity or give you “10x efficiency” (whatever that actually means). And if that’s the case, all these educators telling you not to offload your cognition must be behind the curve, or reactionaries. At the moment, we can teach at least some people all about “AI literacy” and it will not matter, because such knowledge refuses to jibe with the mythology concerning digital technology so pervasive in our society right now.

    If we still believe in the value of humanistic, liberal education, we cannot be quiet about these larger social systems and problems that shape our pupils, our selves and our institutions. We cannot be quiet about these limits of vision and questioning. Because not only do universities exist for the scrutinizing of reality with the various methods of the disciplines as noted at the outset of this essay, but liberal education also assumes a view of the human person that does not see education as instrumental but as formative.

    The long tradition of liberal education, for all its complicity in social stratification down the centuries, assumes that our highest calling is not to make money, to live in comfort, to be entertained. (All three are all right in their place, though we must be aware of how our moneymaking, comfort and entertainment derive from the exploitation of the most vulnerable humans and the other creatures with whom we share the earth, and how they impact our own spiritual health.)

    We are called to growth and wisdom, to caring for the common good of the societies in which we live—which at this juncture certainly involves caring for our common home, the Earth, and the other creatures living with us on it. As Antiqua et nova, the note released from the Vatican’s Dicastery for Culture and Education earlier this year (cited commendingly by secular ed-tech critics like Audrey Watters) reiterates, education plays its role in this by contributing “to the person’s holistic formation in its various aspects (intellectual, cultural, spiritual, etc.) … in keeping with the nature and dignity of the human person.”

    These objectives of education are not being served by students using generative software to satisfy their instructors’ prompts. And no amount of “literacy” is going to ameliorate the situation on its own. People have to want to change, or to see through the neoliberal, machine-obsessed myth, for literacy to matter.

    I do believe that the students I’ve referred to are generally striving for the good as they know how. On a practical level, I am confident they’ll go on to lead modestly successful lives as our society defines that term with regard to material well-being. I assume their motivation is not to cause harm or dupe their instructors; they’re taking part in “hustle” culture, “doing school” and possibly overwhelmed by all their commitments. Even if all this is indeed the case, liberal education calls us to more, and it’s the role of instructors and administrators to invite our students into that larger vision again and again.

    If we refuse to give up on humanistic, liberal education, then what do we do? The answer is becoming clearer by the day, with plenty of folks all over the internet weighing in, though it is one many of us do not really want to hear. Because at least one major part of the answer is that we need to make an education genuinely oriented toward our students. A human-scale education, not an industrial-scale education (let’s recall over and over that computers are industrial technology). The grand irony of the generative software moment for education in neoliberal, late-capitalist society is that it is revealing so many of the limits we’ve been putting on education in the first place.

    If we can’t “AI literacy” our educational problems away, we have to change our pedagogy. We have to change the ways we interact with our students inside the classroom and out: to cultivate personal relationships with them whenever possible, to model the intellectual life as something that is indeed lived out with the whole person in a many-partied dialogue stretching over millennia, decidedly not as the mere ability to move information around. This is not a time for dismay or defeat but an incitement to do the experimenting, questioning, joyful intellectual work many of us have likely wanted to do all along but have not had a reason to go off script for.

    This probably means getting creative. Part of getting creative in our day probably means de-computing (as Dan McQuillan at the University of London labels it). To de-compute is to ask ourselves—given our ambient maximalist computing habits of the last couple decades—what is of value in this situation? What is important here? And then: Does a computer add value to this that it is not detracting from in some other way? Computers may help educators collect assignments neatly and read them clearly, but if that convenience is outweighed by constantly having to wonder if a student has simply copied and pasted or patch-written text with generative software, is the value of the convenience worth the problems?

    Likewise, getting creative in our day probably means looking at the forms of our assessments. If the highly structured student essay makes it easier for instructors to assess because of its regularity and predictability, yet that very regularity and predictability make it a form that chat bots can produce fairly readily, well: 1) the value for assessing may not be worth the problems of teeing up chat bot–ifiable assignments and 2) maybe that wasn’t the best form for inviting genuinely insightful and exciting intellectual engagement with our disciplines’ materials in the first place.

    I’ve experimented with research journals rather than papers, with oral exams as structured conversations, with essays that focus intently on one detail of a text and do not need introductions and conclusions and that privilege the student’s own voice, and other in-person, handmade, leaving-the-classroom kinds of assessments over the last academic year. Not everything succeeded the way I wanted, but it was a lively, interactive year. A convivial year. A year in which mostly I did not have to worry about whether students were automating their educations.

    We have a chance as educators to rethink everything in light of what we want for our societies and for our students; let’s not miss it because it’s hard to redesign assignments and courses. (And it is hard.) Let’s experiment, for our own sakes and for our students’ sakes. Let’s experiment for the sakes of our institutions that, though they are often scoffed at in our popular discourse, I hope we believe in as vibrant communities in which we have the immense privilege of scrutinizing reality together.

    Jacob Riyeff is a teaching associate professor and director of academic integrity at Marquette University.

    Source link

  • The state of the UK higher education sector’s finances

    The state of the UK higher education sector’s finances

    • Jack Booth and Maike Halterbeck at London Economics take a closer look at the recently published HESA Finance data to investigate the financial state of UK higher education.
    • At 11am today, we will host a webinar to mark the launch of the Unite Students Applicant Index. You can register for a free place here.

    In recent years, financial pressures have mounted across the entirety of the UK higher education (HE) sector, and have left many institutions in an exceptionally vulnerable position. In England alone, 43% of institutions are expected to face a financial deficit for 2024-25, prompting the House of Commons Education Select Committee to announce an inquiry into university finances and insolvency plans. Wide-ranging cost-cutting measures and redundancies are taking place across the sector, and the first institution (to our knowledge) has recently received emergency (bailout) funding from its regulator.

    With the recent release of the full HESA Finance data for 2023-24, we now have an updated picture of the scale of the financial challenges facing higher education providers (HEPs). London Economics analysed HEPs’ financial data between 2018-19 and 2023-24 to better understand the current financial circumstances of the sector.
     
    While other recent analyses have focused on England only or covered different financial variables, here we include providers across the whole of the UK and focus on three core financial indicators.

    What does the analysis cover?

    Our analysis focuses on four broad clusters of HEPs, following the approach originally developed by Boliver (2015), which categorises a total of 126 providers according to differences in their research activity, teaching quality, economic resources, and other characteristics. Cluster 1 includes just two institutions: the University of Oxford and the University of Cambridge. Cluster 2 is composed mainly of other Russell Group universities and the majority of other pre-1992 institutions (totalling 39 institutions). Cluster 3 includes the remaining pre-1992 universities and most post-1992 institutions (67 institutions), and Cluster 4 consists of around a quarter of post-1992 universities (totalling 18 institutions). The latest HESA Finance data were, unfortunately, not available for 8 of these clustered institutions, meaning that our analysis covers 118 institutions in total.

    We focus on three key financial indicators (KFIs):

    1. Net cash inflow from operating activities after finance costs (NCIF). This measure provides a key indication of an institution’s financial health in relation to its day-to-day operations. Unlike the more common ‘surplus’/‘deficit’ measure, NCIF excludes non-cash items as well as financing-related income or expenditure.
    2. Net current assets (NCA), that is, ‘real’ reserves. This measure captures the value of current assets that can be turned into cash relatively quickly (i.e. in the short term, within 12 months), minus short-term liabilities.
    3. Liquidity days. This is based on the sum of NCA and NCIF, to evaluate whether institutions can cover operational shortfalls using their short-term resources. We then estimate the number of liquidity days each institution holds, defined as the number of days of average cash expenditure (excluding depreciation) that can be covered by these short-term resources. The Office for Students requires providers to maintain enough liquid funds to cover at least 30 days’ worth of expenditure (excluding depreciation).
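    As a rough illustration of the liquidity-days indicator described above, the following Python sketch computes it from invented figures. The function name, the 365-day divisor and the flooring at zero are our assumptions for illustration, not London Economics’ published methodology:

```python
# Rough sketch of the liquidity-days indicator described above.
# All figures are hypothetical; the 365-day divisor and the flooring
# at zero are assumptions, not the published methodology.

def liquidity_days(nca: float, ncif: float, annual_cash_expenditure: float) -> float:
    """Days of average cash expenditure (excluding depreciation) that can be
    covered by short-term resources (net current assets plus net cash inflow)."""
    daily_expenditure = annual_cash_expenditure / 365
    return max(0.0, (nca + ncif) / daily_expenditure)

# Illustrative provider: £30m net current assets, a £5m negative NCIF,
# and £180m of annual cash expenditure (excluding depreciation).
print(round(liquidity_days(30e6, -5e6, 180e6)))  # → 51

# A provider whose short-term liabilities exceed its liquid resources
# floors at zero liquidity days.
print(round(liquidity_days(1e6, -2e6, 100e6)))  # → 0
```

    Under this sketch, any provider whose figure falls below 30 would sit under the Office for Students’ liquidity expectation.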

    What are the key findings?

    The key findings from the analysis are as follows:

    • In terms of financial deficits (NCIF), 40% of HEPs included in the analysis (47) posted a negative NCIF in 2023-24.
    • The average surplus across the institutions analysed (in terms of NCIF as a percentage of income) declined from 6.1% in 2018-19 to just 0.5% in 2023-24.
    • In terms of financial assets/resilience (NCA), 55% of HEPs analysed (65) saw a reduction in their NCA (as a proportion of their income) in 2023-24 as compared to 2018-19.
    • The decline in NCA has been particularly large in recent years, with average NCA declining from 27.4% of income in 2021-22 to 20.0% in 2023-24.
    • In terms of liquidity days, 20% of HEPs (24) had less than 30 days of liquidity in 2023-24, including 17 providers that posted zero liquidity days.

    A challenging time for the sector

    The analysis shows that the financial position of UK higher education institutions is worsening, with all three indicators analysed (i.e. NCIF, NCA, and liquidity days) showing a decline in providers’ financial stability. Major challenges to the sector’s finances are set to continue, especially as the UK government is looking to further curb net migration through potential additional restrictions on international student visas. Therefore, the financial pressures on UK HE providers are expected to remain significant.

    Want to know more?

    Our more detailed analysis, including a number of charts and additional findings on each indicator by university ‘cluster’, can be found on our website.

    Source link

  • REF panels must reflect the diversity of the UK higher education sector

    REF panels must reflect the diversity of the UK higher education sector

    As the sector begins to prepare for REF 2029, with a greater emphasis on people, culture and environment and the breadth of forms of research and inclusive production, one critical issue demands renewed attention: the composition of the REF panels themselves. While much of the focus rightly centres on shaping fairer metrics and redefining engagement and impact, we should not overlook who is sitting at the table making the judgments.

    If the Research Excellence Framework is to command the trust of the full spectrum of UK higher education institutions, then its panels must reflect the diversity of that spectrum. That means ensuring meaningful representation from a wide range of universities, including Russell Group institutions, pre- and post-92s, specialist colleges, teaching-led universities, and those with strong regional or civic missions.

    Without diverse panel representation, there is a real risk that excellence will be defined too narrowly, inadvertently privileging certain types of research and institutional profiles over others.

    Broadening the lens

    Research excellence looks different in different contexts. A university with a strong regional engagement strategy might produce research that is deeply embedded in local communities, with impacts that are tangible but not easily measured by traditional academic metrics, but with clear international excellence. A specialist arts institution may demonstrate world-leading innovation through creative practice that doesn’t align neatly with standard research output categories.

    The RAND report looking at the impact of research through the lens of the REF 2021 impact cases rightly recognised the importance of “hyperlocality” – and we need to ensure that such research and impact are equally recognised in the forthcoming REF exercise.

    UK higher education institutions are incredibly diverse, with different institutions having distinct missions, research priorities, and challenges. REF panels that lack representation from the full spectrum of institutions risk bias toward certain types of research outputs or methodologies, particularly those dominant in elite institutions.

    Dominance of one type of institution on the panels could lead to an underappreciation of applied, practice-based, or interdisciplinary research, which is often produced by newer or specialist institutions.

    Fairness, credibility, and innovation

    Fair assessment depends not only on the criteria applied but also on the perspectives and experiences of those applying them. Including assessors from a wide range of institutional backgrounds helps surface blind spots and reduce unconscious bias. It also allows the panels to better understand and account for contextual factors, such as variations in institutional resources, missions, and community roles, when evaluating submissions.

    Diverse panels also enhance the credibility of the process. The REF is not just a technical exercise; it shapes funding, reputations, and careers. A panel that visibly includes internationally recognised experts from across the breadth of the sector helps ensure that all institutions – and their staff – feel seen, heard, and fairly treated, and that a rigorous assessment of UK’s research prowess is made across the diversity of research outputs whatever their form.

    Academic prestige and structural advantages (such as funding, legacy reputations, or networks) can skew assessment outcomes if not checked. Diversity helps counter bias that may favour research norms associated with more established, research-intensive institutions. Panel diversity encourages broader thinking about what constitutes excellence, helping to recognise high-quality work regardless of institutional setting.

    Plus there is the question of innovation. Fresh thinking often comes from the edges. A wider variety of voices on REF panels can challenge groupthink and encourage more inclusive and creative understandings of impact, quality, and engagement.

    A test of the sector’s commitment

    This isn’t about ticking boxes. True diversity means valuing the insights and expertise of panel members from all corners of the sector and ensuring they have the opportunity to shape outcomes, not just observe them. It also means recognising that institutional diversity intersects with other forms of diversity, including protected characteristics, professions and career stage, which must also be addressed.

    The REF is one of the most powerful instruments shaping UK research culture. Who gets to define excellence in the international context has a profound impact on what research is done, how it is valued, and who is supported to succeed. REF panels should reflect the diversity of UK HEIs to ensure fairness, credibility, and a comprehensive understanding of research excellence across all contexts.

    If REF 2029 is to live up to the sector’s ambitions for equity, inclusion, and innovation, then we must start with its panels. Without diverse panels, the REF risks perpetuating inequality and undervaluing the full range of scholarly contributions made across the sector, even as it evaluates universities on their own people, culture, and environment. The composition of those panels will be a litmus test for how seriously we take those commitments.

    Source link