Tag: Generative

  • Generative Engine Optimization & GEO Keywords

    Generative Engine Optimization & GEO Keywords

    Reading Time: 18 minutes

    Search behaviour among prospective students is evolving fast. Instead of scrolling through pages of search results, many now turn to AI-powered tools for instant, conversational answers. This shift has introduced a new layer to traditional SEO: Generative Engine Optimization (GEO).

    GEO focuses on optimizing content so that generative AI search engines like ChatGPT or Google’s AI Overview can find, interpret, and feature it in their responses. In essence, GEO ensures your institution’s information is selected, summarized, or referenced in AI-generated answers, rather than simply ranking in a list of links.

    Higher education marketers in Canada and beyond must pay attention to this trend. Recent global studies indicate that nearly two-thirds of prospective students use AI tools such as ChatGPT at some stage of their research process, with usage highest during early discovery and comparison phases. 

    These tools pull content from across the web and present synthesized answers, often eliminating the need for users to click. This “zero-click” trend reduces opportunities for organic traffic, raising the stakes for visibility within AI systems.

    This guide explores GEO’s role in education marketing, how it differs from traditional SEO, and why it matters for student recruitment in the age of AI. You’ll find practical guidance on aligning your content with generative AI, from keyword strategy to page prioritization. We’ll also look at how to measure GEO’s impact on inquiries and enrolment, and share examples from institutions leading the way.

    AI is rewriting how students discover institutions.

    Partner with HEM to stay visible in the age of generative search.

    What Is Generative Engine Optimization (GEO) in Higher Education Marketing?

    Generative Engine Optimization (GEO) is the practice of tailoring university content for AI-driven search tools like ChatGPT and Google’s AI Overview. Unlike traditional SEO, which targets search engine rankings, GEO focuses on making content readable, reliable, and retrievable by generative AI.

    In higher ed, this means structuring key program details, admissions information, and differentiators so that AI tools can easily surface and cite them in responses. GEO builds on classic SEO principles but adapts them for a zero-click, conversational environment, ensuring your institution appears in AI-generated answers to prospective student queries.

    How Is GEO Different from Traditional SEO for Universities and Colleges?

    While both SEO and GEO aim to make your institution’s content visible, their approaches diverge in method and target. Traditional SEO is designed for search engine rankings. GEO, on the other hand, prepares content for selection and citation by AI tools that deliver instant answers rather than search results.

    Let’s break it down.

    Search Results vs. AI Answers
    SEO optimizes for clicks on a search results page. GEO optimizes for inclusion in a conversational answer. Instead of showing up as a blue link, your institution may be quoted or named by the AI itself.

    Keyword Strategy
    SEO prioritizes high-volume keywords. GEO relies on semantic relevance. Instead of “MBA program Canada,” think “How long is the MBA at [University]?” or “What are the admission requirements?”

    Content Structure
    Traditional SEO values user navigation. GEO values clarity for AI parsing. Bullet points, Q&A formatting, and schema markup make it easier for AI to extract information. Summary boxes and tables work better than long paragraphs.

    Authority Signals
    E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) still matters. But for GEO, authority is inferred from citation style, accuracy, and consistency, not design or branding. Highlighting faculty credentials or linking to research enhances AI credibility scoring.

    Technical Approach
    Both SEO and GEO require clean, crawlable websites. But GEO adds machine-readable formatting. Schema.org markup, downloadable data files, and clean internal linking increase your chances of being selected by AI.

    Measuring Success
    SEO measures traffic, rankings, and form fills. GEO measures citations in AI responses, brand mentions, and voice assistant visibility. You might not get the click, but you still win visibility if the AI says your name.

    In practice, this means layering GEO on top of existing SEO. A strong program page might combine narrative storytelling with a quick facts section. An admissions page should include both persuasive copy and an FAQ schema.

    Bottom line: SEO helps you get found. GEO helps you get cited. And in the age of AI, both are essential to capturing attention at every stage of the student search journey.

    Why GEO Matters for Student Recruitment in the Age of AI Search

    Why is GEO important for student recruitment in the age of AI search? Generative AI search is already reshaping how prospective students discover, evaluate, and select postsecondary institutions. GEO (Generative Engine Optimization) equips institutions to remain visible and competitive in this changing environment. Here’s why it matters now more than ever:

    1. Widespread Adoption by Gen Z

    Today’s students are early adopters of generative AI. A 2024 global survey found that approximately 70% of prospective students have used AI tools like ChatGPT to search for information, and more than 60% report using chatbots during the early phases of their college research. This shift means fewer students are navigating university websites as a first step. 

    Instead, they’re posing detailed questions to AI, questions about programs, financial aid, campus life, and more. GEO ensures your institution’s information is accessible, machine-readable, and accurate in this discovery environment. Without it, you risk being excluded from the initial consideration set.

    2. The Rise of Zero-Click Search Behavior

    AI-generated responses often satisfy a query without requiring a website visit. This zero-click trend is accelerating, as nearly 60% of searches now end without a click. If a student asks, “What are the top universities in Canada for engineering?” and an AI tool responds with a synthesized answer that names three schools, those schools have won visibility without needing a traditional click-through. 

    GEO is your institution’s strategy for occupying that limited space in the answer. It’s how you shape perceptions in a search landscape where attention is won before a student reaches your homepage.

    3. AI Is Becoming a College Advisor

    Though current data shows AI has limited direct influence on final enrollment decisions, that influence is growing. As AI tools become more trusted, students will increasingly rely on them for shortlisting programs or comparing institutions. GEO ensures your content is part of those suggestions and comparisons. 

    For example, a prospective student might ask, “Which is better for computer science, [Competitor] or [Your University]?” Without well-structured, AI-optimized content, your institution may be left out or misrepresented. GEO levels the playing field, ensuring that when AI generates side-by-side evaluations, your offerings are accurate, current, and competitive.

    4. Fewer Chances to Impress

    Traditional SEO offered multiple entry points: page one, page two, featured snippets, and ads. AI-generated answers are far more concise, often limited to a single paragraph or a brief list of citations. That means your institution must compete for a narrower spotlight. 

    GEO increases your odds of selection by helping AI tools find and cite the most relevant, structured, and authoritative content. When students ask about tuition, deadlines, or international scholarships, you want the answer to come from your website, not a third-party aggregator or a competing institution.

    5. Boosting Brand Trust and Authority

    Being cited in AI responses lends credibility. Much like appearing at the top of Google results used to signal trustworthiness, consistent AI mentions confer authority. If ChatGPT, Google SGE, or Bing AI repeatedly reference your institution in educational queries, students begin to perceive your brand as reliable. 

    This builds long-term recognition, resulting in some students visiting your site simply because they’ve encountered your name often in AI responses. GEO helps position your institution as a trusted source across AI-driven search platforms, reinforcing brand equity and enhancing recruitment outcomes.

    In Summary

    GEO is rapidly becoming a critical component of modern higher education student recruitment marketing strategies. It ensures your institution is visible in the conversational, AI-driven search experiences that are now shaping student decisions. Just as universities once adjusted to mobile-first web browsing, they must now adapt to AI-first discovery. 

    GEO helps your institution appear in AI answers, influence prospective students early in their journey, and remain top of mind even when clicks don’t happen. For institutions navigating declining enrollments and intensifying competition, GEO is a forward-facing strategy that keeps you in the conversation and in the race for the next generation of learners.

    How Can a University Website Be Optimized for AI Tools like ChatGPT and Google AI Overviews?

    Optimizing a university website for generative AI search requires a blend of updated content strategy, technical precision, and practical SEO thinking. The goal is to ensure your institution’s content is not only findable but also understandable and usable by AI models such as ChatGPT or Google’s AI Overviews. Here are six key strategies to implement:

    1. Embrace a Question-First Content Strategy Using GEO Keywords

    Begin by identifying the natural-language queries prospective students are likely to ask. Instead of traditional keyword stuffing, build your content around direct, conversational questions with what we call “GEO keywords.” For example: “What is the tuition for [University]’s nursing program?”, “Does [University] require standardized tests?”, or “What scholarships are available for international students?”

    Structure content using Q&A formats, headings, and short paragraphs. Include these questions and their answers prominently on program, admissions, or financial aid pages. FAQ sections are particularly effective since AI tools are trained on question-based formats and favor content with semantic clarity.

    Audit your current site to uncover missing or buried answers. Use data from tools like Google Search Console or internal search analytics to surface frequent queries. Then, present responses in clear formats that both users and AI systems can digest.

    2. Create Clear, Canonical Fact Pages for Key Information

    AI tools rely on consistency. If your website offers multiple versions of key facts, such as tuition, deadlines, or admission requirements, AI may dismiss your content entirely. To avoid this, create canonical pages that serve as the single source of truth for essential topics.

    For example, maintain a central “Admissions Deadlines” page with clearly formatted lists or tables for each intake period. Similarly, your “Tuition and Fees” page should break down costs by program, year, and student type.

    Avoid duplicating this information across many pages in slightly different wording. Instead, link other content back to these canonical pages to reinforce credibility and reduce confusion for both users and AI. By prioritizing clarity, structure, and authority, your website becomes significantly more AI-compatible.

    3. Structure Your Content for AI (and Human) Readability

    Generative AI reads websites the way humans skim for quick answers, only faster and more literally. For your institution to show up in AI-generated results, your site must be structured clearly and logically. Here are five modern content strategies that improve readability for both users and machines:

    1. Put Important Information Up Front

    AI tools often extract the first one or two sentences from a page when forming answers. Lead with essential facts: program type, duration, location, or unique rankings. For example:
    A four-year BSc Nursing program ranked top 5 in Canada for clinical placements.

    Avoid burying key points deep in your content. Assume the AI won’t read past the opening paragraph, and prioritize clarity early.

    2. Use Headings, Lists, and Tables

    Break up long content blocks using headings (H2s and H3s), bullet points, and numbered lists. These structures improve scanning and help AI identify and categorize information correctly.

    Instead of a paragraph on how to apply, write:

    How to Apply:

    1. Submit your online application
    2. Pay the $100 application fee
    3. Upload transcripts and supporting documents

    For data or comparisons, use simple tables. A table of admissions stats or tuition breakdowns is easier for AI to interpret than buried prose.

    3. Standardize Terminology Across Your Site

    Inconsistent language can confuse both users and AI. Choose one label for each concept and use it site-wide. For example, if your deadline page says “Application Deadline,” don’t refer to it elsewhere as “Closing Date” or “Due Date.”

    Uniform terminology supports clearer AI parsing and reinforces credibility.

    4. Implement Schema Markup

    Schema markup is structured metadata added to your HTML that explicitly communicates the purpose of your content. It is critical for making content machine-readable.

    Use JSON-LD and schema types like:

    • FAQPage for question-answer sections
    • EducationalOccupationalProgram for program details
    • Organization for your institution’s info
    • Event for admissions deadlines or open houses

    Google and other AI systems rely heavily on this data. Schema also helps with traditional SEO by enabling rich snippets in search results.
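    To illustrate, here is a minimal sketch of what an FAQPage object might contain. Python is used purely for convenience to assemble and serialize the JSON-LD; the question, answer text, and requirements are placeholder examples, and the printed output would be embedded in the page inside a <script type="application/ld+json"> tag.

    ```python
    import json

    # Minimal FAQPage JSON-LD sketch. The question and answer below are
    # illustrative placeholders, not real admissions requirements.
    faq_schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": "What are the admission requirements for the BSc Nursing program?",
                "acceptedAnswer": {
                    "@type": "Answer",
                    "text": "Applicants need a 75% high school average, including "
                            "Biology and Chemistry, and must submit two references.",
                },
            }
        ],
    }

    # Embed this output in the page's HTML inside a JSON-LD script tag.
    print(json.dumps(faq_schema, indent=2))
    ```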

    5. Offer Machine-Readable Data Files

    Forward-looking universities are experimenting with downloadable data files (JSON, CSV) that list key facts, such as program offerings or tuition. These can be made available through a hidden “data hub” on your site.

    AI systems may ingest this structured content directly, improving the likelihood of accurate citations. For example, the University of Florida’s digital team reported that their structured content significantly improved the accuracy of Google AI Overviews summarizing their programs.
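    As a rough illustration, a downloadable facts file might look like the sketch below. Python is used only to generate the JSON; every figure, program name, and file path is a hypothetical placeholder rather than real institutional data.

    ```python
    import json

    # Illustrative "data hub" file: key institutional facts in machine-readable
    # form. All values below are hypothetical placeholders.
    program_facts = {
        "institution": "Example University",
        "last_updated": "2025-09",
        "programs": [
            {
                "name": "Bachelor of Computer Science",
                "duration_years": 4,
                "co_op_available": True,
                "annual_tuition_cad": {"domestic": 8500, "international": 42000},
                "application_deadline": "2026-01-15",
            }
        ],
    }

    # Write the file so it can be published alongside the relevant program pages.
    with open("program-facts.json", "w", encoding="utf-8") as f:
        json.dump(program_facts, f, indent=2)
    ```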

    4. Keep Content Fresh and Consistent Across Platforms

    AI tools favor accurate and current information. Outdated or conflicting content can lead to mistrust or exclusion. Best practices include:

    • Timestamping pages with “Last updated [Month, Year]”
    • Conducting regular audits to eliminate conflicting data
    • Using canonical tags to point AI toward the primary source when duplicate content is necessary
    • Aligning off-site sources like Wikipedia or school directory listings with your website’s data

    For instance, if your homepage says 40,000 students and Wikipedia says 38,000, the AI may average the two or cite the incorrect one. Keep external sources accurate and consistent with your site.

    5. Optimize for Specific AI Platforms (ChatGPT, Google SGE, etc.)

    Each AI platform has different behaviors. Here is how to tailor your content for them:

    ChatGPT (OpenAI)

    Free ChatGPT may not browse the web, but ChatGPT Enterprise and Bing Chat do. These versions often rely on training data that includes popular and high-authority content.

    To increase visibility:

    • Publish long-form, high-quality content that gets cited by others
    • Use backlink strategies to improve domain authority
    • Create blog posts or guides that answer common student questions clearly

    Even if your content isn’t accessed in real time, if it has been crawled or cited enough, it may be paraphrased or referenced in AI answers.

    Google AI Overview (formerly SGE)

    Google’s AI Overviews (formerly Search Generative Experience, or SGE) draws from top-ranking search results. So, traditional SEO performance directly influences GEO success.

    Best practices include:

    • Use concise, answer-oriented snippets early in content (e.g., “General admissions require a 75% average and two references.”)
    • Ensure pages are crawlable and not blocked by scripts or logins
    • Reinforce AI clarity with schema and consistent internal linking

    Voice Assistants (Siri, Alexa, Google Assistant)

    These tools favor featured snippets and structured content. A direct response like: “Yes, we offer a co-op program as part of our Bachelor of Computer Science” is more likely to be read aloud than a paragraph with buried details.

    Emerging Tools (Perplexity.ai, Bing Chat)

    These newer AI search tools cite sources like Wikipedia and high-authority sites. To prepare:

    • Keep your institution’s Wikipedia page accurate and updated
    • Monitor and correct public conversations (e.g., Reddit, Quora) with official clarifications on your website
    • Consider publishing myth-busting content to preempt misinformation

    Structuring your content for AI doesn’t mean abandoning human readers. In fact, the best practices that help machines (clarity, structure, and accuracy) also create better experiences for prospective students. By aligning your strategy with the expectations of both audiences, your university remains visible, credible, and competitive in the evolving search landscape.

    6. Leverage Institutional Authority and Unique Content

    Your organization holds content assets that AI deems both authoritative and distinctive; be sure to leverage them strategically. Showcase faculty research, student success outcomes, and institutional data on your site in clear, extractable formats. For instance:
    “Over 95% of our graduates secure employment within six months (2024 survey).”

    Include program differentiators, accolades, and unique offerings that set your institution apart. AI-generated comparisons often cite such features. Strengthen content credibility with E-E-A-T principles:

    • Add author bylines and bios to expert-led blog posts
    • Cite trusted third-party sources and rankings
    • Present information factually while still engaging human readers

    For example, pair promotional language (“modern dorms”) with direct answers (“First-year students are required to live on campus”). This dual-purpose approach ensures your content feeds both AI responses and prospective student curiosity.

    In short, AI rewards clear, credible, question-first content. Make sure yours leads the conversation.

    Which Higher Education Pages Should Be Prioritized for GEO?

    Not all web pages carry equal weight when it comes to generative engine optimization (GEO). To improve visibility in AI-generated search responses, universities should prioritize content that addresses high-intent queries and critical decision-making touchpoints.

    1. Academic Program Pages
      These are foundational. When users ask, “Does [University] offer a data science degree?”, AI tools pull from program pages. Each page should clearly outline program type, duration, delivery mode, concentrations, accreditations, rankings, and outcomes. Include key facts in the opening paragraph and use structured Q&A to address specifics like “Is co-op required?” or “Can I study part-time?”
    2. Admissions Pages
      AI queries often focus on application requirements. Structure admissions pages by applicant type and use clear subheadings and bullet points to list requirements, deadlines, and steps. Include canonical deadline pages with visible timestamps, and FAQ-style answers such as “What GPA is required for [University]?”
    3. Tuition, Scholarships, and Financial Aid
      Cost-related questions are among the most common. Ensure tuition and fee data are presented in clear tables, by program and student type. Scholarship and aid pages should state eligibility, values, and how to apply in plain language, e.g., “All applicants are automatically considered for entrance scholarships up to $5,000.”
    4. Program Finders and Academic Overview Pages
      Ensure your program catalog and A–Z listings are crawlable, up-to-date, and use official program names. Pages summarizing academic strengths should highlight standout offerings: “Our business school is triple-accredited and ranked top 5 in Canada.”
    5. Student Life and Support Services
      AI often fields questions like “Is housing guaranteed?” or “What mental health resources are available?” Answer these directly: “All first-year students are guaranteed on-campus housing.” Showcase specific services for key demographics (e.g., international students, veterans) with quantifiable benefits.
    6. Career Outcomes and Alumni Success
      Publish recent stats and highlight notable alumni. Statements like “93% of our grads are employed within 6 months” or “Alumni have gone on to roles at Google and Shopify” provide AI with strong content to surface in answers.

    How Can Institutions Measure the Impact of GEO on Inquiries and Enrolment?

    Measuring the impact of Generative Engine Optimization (GEO) requires a mix of analytics, qualitative monitoring, and attribution strategies. Since GEO outcomes don’t always show up in traditional SEO metrics, institutions must adopt creative, AI-aware approaches to track effectiveness.

    1. Monitor AI Referral Traffic
      Check Google Analytics 4 (GA4) or similar platforms for referral traffic from AI tools like Bing Chat or Google SGE. While not all AI sources report referrals, look for domains like bard.google.com or bing.com and configure dashboards to track them. Even small traffic volumes from these sources can indicate growing visibility (a simple filtering sketch follows this list).
    2. Track AI Mentions and Citations
      Manually query AI tools using prompts like “Tell me about [University]” or “How do I apply to [University]?” and log whether your institution is cited. Note if AIs reference your site, Wikipedia, or other sources. Track frequency and improvements over time, especially following content updates. Screenshots and logs can serve as powerful internal evidence.
    3. Use Multi-Touch Attribution
      Students may not click AI links, but still recall your brand. Add “How did you hear about us?” options in inquiry forms, including “ChatGPT” or “AI chatbot.” Monitor brand search volume and direct traffic following GEO updates. Qualitative survey insights and CRM notes from admissions teams can help reveal hidden AI touchpoints.
    4. Analyze GEO-Optimized Page Engagement
      Watch how the pages you optimize for GEO perform. Increased pageviews, lower bounce rates, and higher conversion (e.g., info form fills) may indicate better alignment with AI outputs and human queries alike, even if AI is only part of the traffic source.
    5. Observe Funnel Shifts and Segment Trends
      Notice any spikes in inquiries for certain programs or demographics that align with AI visibility. For example, a rise in international applications after enhanced program content could suggest AI exposure.
    6. Build a GEO Dashboard
      Create simple internal dashboards showing AI referrals, engagement trends, citation screenshots, and timelines of GEO initiatives. Correlate those with enrollment movement when possible.
    7. Test, Refine, Repeat
      Experiment continuously. A/B test content formats, restructure FAQs, and see which phrasing AI picks up. Treat AI outputs as your new SEO testbed.
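    To make the referral-tracking idea in item 1 concrete, here is a minimal sketch that scans a GA4 traffic-acquisition export for sessions referred by AI platforms. The file name, column headings, and referrer domain list are assumptions; adjust them to match your own export and the sources that actually appear in your reports.

    ```python
    import csv

    # Hypothetical list of AI referrer domains to watch for; extend this as new
    # platforms appear in your referral reports.
    AI_REFERRER_DOMAINS = (
        "chatgpt.com", "chat.openai.com", "gemini.google.com",
        "perplexity.ai", "copilot.microsoft.com", "bing.com",
    )

    ai_sessions = 0
    # Assumes a CSV export with "Session source" and "Sessions" columns.
    with open("ga4_traffic_acquisition.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            source = row.get("Session source", "").lower()
            if any(domain in source for domain in AI_REFERRER_DOMAINS):
                ai_sessions += int(row.get("Sessions", 0) or 0)

    print(f"Sessions referred by AI platforms: {ai_sessions}")
    ```

    Run a count like this monthly and chart it alongside the dates of your GEO content updates to spot correlations.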

    While GEO analytics are still evolving, early movers gain visibility and mindshare. Measuring what’s possible now ensures institutions are positioned to lead as AI search reshapes student discovery.

    6 Global Examples of GEO in Practice (Higher Ed Institutions)

    1. Harvard University: Harvard College Admissions “Apply” Page

    Harvard’s undergraduate admissions Apply page (Harvard College) is a model of clear, structured content. The page is organized with intuitive section headings (e.g., Application Requirements, Timeline) and even an on-page table of contents for easy navigation.

    It provides a bullet-point list of all required application components (from forms and fees to test scores and recommendations), ensuring that key information is presented succinctly.


    Source: Harvard University

    2. Stanford University: First-Year Applicants “Requirements and Deadlines” Page

    Stanford’s first-year admission page stands out for its semantic, structured presentation of information. It opens with a clearly labeled checklist of Required Application Components, presented as bullet points (e.g., Common Application, application fee, test scores, transcripts, etc.). Following this, Stanford provides a well-organized Requirements and Deadlines table that outlines key dates for Restrictive Early Action and Regular Decision side by side.

    In this table, each milestone, from application submission deadlines (e.g., November 1 for early, January 5 for regular) to notification dates and reply deadlines, is neatly aligned, which is both user-friendly and easy for AI to parse.


    Source: Stanford University

    3. Massachusetts Institute of Technology (MIT): “About MIT: Basic Facts” Page

    MIT Admissions offers an About MIT: Basic Facts page that is essentially a treasure trove of quick facts and figures presented in bullet form. This page exemplifies GEO best practices by curating the institute’s key data points (e.g., campus size, number of students, faculty count, notable honors) as concise bullet lists under intuitive subheadings.

    For instance, the page lists campus details like acreage and facilities, student demographics, and academic offerings in an extremely scannable format. Each bullet is a self-contained fact (such as “Undergraduates: 4,576” or “Campus: 168 acres in Cambridge, MA”), making it ideal for AI summarization or direct answers. Because the content is broken down into digestible nuggets, an AI-powered search can easily extract specific information (like “How many undergraduate students does MIT have?”) from this page.


    Source: MIT

    4. University of Toronto: Undergraduate “Dates & Deadlines” Page

    The University of Toronto’s Dates & Deadlines page for future undergraduates is a great example of structured scheduling information. It presents application deadlines in a highly structured list, broken down by program/faculty and campus. The page is organized into expandable sections (for full-time, part-time, and non-degree studies), each containing tables of deadlines.

    For example, under full-time undergraduate applications, the table clearly lists each faculty or campus (Engineering, Arts & Science – St. George, U of T Mississauga, U of T Scarborough, etc.) alongside two key dates: the recommended early application date and the final deadline. This means a prospective student can quickly find, say, the deadline for Engineering (January 15) and see that applying by November 7 is recommended.

    Such a format is not only user-friendly but also easy for AI to interpret. The consistency and labeling (e.g., “Applied Science & Engineering, November 7 (recommended) / January 15 (deadline)”) ensure that an AI answer to “What’s the application deadline for U of T Engineering?” will be accurate.


    Source: University of Toronto

    5. University of Oxford: English Language and Literature Course Page

    Oxford’s course page for English Language and Literature showcases GEO-friendly content right at the top with a concise Overview box. This section acts as a quick-reference summary of the course, listing crucial facts in a compact form. It includes the UCAS course code (Q300), the entrance requirements (AAA at A-level), and the course duration (3 years, BA) clearly on separate lines. Immediately below, it outlines subject requirements (e.g., Required: English Literature or English Lang/Lit) and other admission details like whether there’s an admissions test or written work, all in the same straightforward list format.

    This means a prospective student (or an AI summarizing Oxford’s offerings) can get all the key info about the English course at a glance – from how long it lasts to what grades are needed.


    Source: Oxford University

    6. University of Cambridge: Application Dates and Deadlines Page

    Cambridge’s admissions website provides a dedicated Application Dates and Deadlines page that reads like a detailed timeline of the entire admissions process. This page lays out, in chronological order, all the key steps and dates for applying to Cambridge, with each date accompanied by a short explanation of what happens or what is due.

    For example, it starts as early as the spring of the year before entry, noting when UCAS course search opens and when you can begin your UCAS application. Critically, it flags the famous 15 October UCAS deadline with emphasis: “15 October 2025 – Deadline to submit your UCAS application (6 pm UK time)”. Other entries include deadlines for supplemental forms like the My Cambridge Application (22 October), dates for admissions tests, and notes about interview invitations in November and December.


    Source: University of Cambridge

    Staying Discoverable in the Age of Generative Search

    Generative Engine Optimization (GEO) is rapidly shifting from trend to necessity in higher education marketing. As AI-driven platforms like ChatGPT, Google SGE, and voice assistants reshape how students seek information, institutions must adapt their content strategies accordingly. 

    By aligning with modern GEO practices, universities enhance both discoverability and user experience, meeting students where they are and ensuring their narratives are accurately represented. In today’s competitive enrolment landscape, GEO is not optional; it is foundational. The strategies outlined above provide a roadmap for sustainable visibility in the age of generative search. Continue refining your approach, and your institution will not just appear in AI responses; it will lead them. In this new era, the goal is simple: be cited, not sidelined.

    AI is rewriting how students discover institutions.

    Partner with HEM to stay visible in the age of generative search.

    FAQs

    Q: What is generative engine optimization (GEO) in higher education marketing?

    A: Generative Engine Optimization (GEO) is the practice of tailoring university content for AI-driven search tools like ChatGPT and Google’s AI Overview. Unlike traditional SEO, which targets search engine rankings, GEO focuses on making content readable, reliable, and retrievable by generative AI.

    Q: How is GEO different from traditional SEO for universities and colleges?

    A: While both SEO and GEO aim to make your institution’s content visible, their approaches diverge in method and target. Traditional SEO is designed for search engine rankings. GEO, on the other hand, prepares content for selection and citation by AI tools that deliver instant answers rather than search results.

    Q: Why is GEO important for student recruitment in the age of AI search?

    A: Generative AI search is already reshaping how prospective students discover, evaluate, and select postsecondary institutions. GEO (Generative Engine Optimization) equips institutions to remain visible and competitive in this changing environment.


  • Using generative tools to deepen, not replace, human connection in schools

    Using generative tools to deepen, not replace, human connection in schools


    For the last two years, conversations about AI in education have tended to fall into two camps: excitement about efficiency or fear of replacement. Teachers worry they’ll lose authenticity. Leaders worry about academic integrity. And across the country, schools are trying to make sense of a technology that feels both promising and overwhelming.

    But there’s a quieter, more human-centered opportunity emerging–one that rarely makes the headlines: AI can actually strengthen empathy and improve the quality of our interactions with students and staff.

    Not by automating relationships, but by helping us become more reflective, intentional, and attuned to the people we serve.

    As a middle school assistant principal and a higher education instructor, I’ve found that AI is most valuable not as a productivity tool, but as a perspective-taking tool. When used thoughtfully, it supports the emotional labor of teaching and leadership–the part of our work that cannot be automated.

    From efficiency to empathy

    Schools do not thrive because we write faster emails or generate quicker lesson plans. They thrive because students feel known. Teachers feel supported. Families feel included.

    AI can assist with the operational tasks, but the real potential lies in the way it can help us:

    • Reflect on tone before hitting “send” on a difficult email
    • Understand how a message may land for someone under stress
    • Role-play sensitive conversations with students or staff
    • Anticipate barriers that multilingual families might face
    • Rehearse a restorative response rather than reacting in the moment

    These are human actions–ones that require situational awareness and empathy. AI can’t perform them for us, but it can help us practice and prepare for them.

    A middle school use case: Preparing for the hard conversations

    Middle school is an emotional ecosystem. Students are forming identity, navigating social pressures, and learning how to advocate for themselves. Staff are juggling instructional demands while building trust with young adolescents whose needs shift by the week.

    Some days, the work feels like equal parts counselor, coach, and crisis navigator.

    One of the ways I’ve leveraged AI is by simulating difficult conversations before they happen. For example:

    • A student is anxious about returning to class after an incident
    • A teacher feels unsupported and frustrated
    • A family is confused about a schedule change or intervention plan

    By giving the AI a brief description and asking it to take on the perspective of the other person, I can rehearse responses that center calm, clarity, and compassion.

    This has made me more intentional in real interactions–I’m less reactive, more prepared, and more attuned to the emotions beneath the surface.

    Empathy improves when we get to “practice” it.

    Supporting newcomers and multilingual learners

    Schools like mine welcome dozens of newcomers each year, many with interrupted formal education. They bring extraordinary resilience–and significant emotional and linguistic needs.

    AI tools can support staff in ways that deepen connection, not diminish it:

    • Drafting bilingual communication with a softer, more culturally responsive tone
    • Helping teachers anticipate trauma triggers based on student histories
    • Rewriting classroom expectations in family-friendly language
    • Generating gentle scripts for welcoming a student experiencing culture shock

    The technology is not a substitute for bilingual staff or cultural competence. But it can serve as a bridge–helping educators reach families and students with more warmth, clarity, and accuracy.

    When language becomes more accessible, relationships strengthen.

    AI as a mirror for leadership

    One unexpected benefit of AI is that it acts as a mirror. When I ask it to review the clarity of a communication, or identify potential ambiguities, it often highlights blind spots:

    • “This sentence may sound punitive.”
    • “This may be interpreted as dismissing the student’s perspective.”
    • “Consider acknowledging the parent’s concern earlier in the message.”

    These are the kinds of insights reflective leaders try to surface–but in the rush of a school day, they are easy to miss.

    AI doesn’t remove responsibility; it enhances accountability. It helps us lead with more emotional intelligence, not less.

    What this looks like in teacher practice

    For teachers, AI can support empathy in similarly grounded ways:

    1. Building more inclusive lessons

    Teachers can ask AI to scan a lesson for hidden barriers–assumptions about background knowledge, vocabulary loads, or unclear steps that could frustrate students.

    2. Rewriting directions for struggling learners

    A slight shift in wording can make all the difference for a student with anxiety or processing challenges.

    3. Anticipating misconceptions before they happen

    AI can run through multiple “student responses” so teachers can see where confusion might arise.

    4. Practicing restorative language

    Teachers can try out scripts for responding to behavioral issues in ways that preserve dignity and connection.

    These aren’t shortcuts. They’re tools that elevate the craft.

    Human connection is the point

    The heart of education is human. AI doesn’t change that–in fact, it makes it more obvious.

    When we reduce the cognitive load of planning, we free up space for attunement.
    When we rehearse hard conversations, we show up with more steadiness.
    When we write in more inclusive language, more families feel seen.
    When we reflect on our tone, we build trust.

    The goal isn’t to create AI-enhanced classrooms. It’s to create relationship-centered classrooms where AI quietly supports the skills that matter most: empathy, clarity, and connection.

    Schools don’t need more automation.

    They need more humanity–and AI, used wisely, can help us get there.



  • Teaching in the age of generative AI: why strategy matters more than tools

    Teaching in the age of generative AI: why strategy matters more than tools

    Join HEPI and Advance HE for a webinar today (Tuesday, 13 January 2026) from 11am to 12pm, exploring what higher education can learn from leadership approaches in other sectors. Sign up here to hear this and more from our speakers.

    This blog was kindly authored by Wioletta Nawrot, Associate Professor and Teaching & Learning Lead at ESCP Business School, London Campus.

    Generative AI has entered higher education faster than most institutions can respond. The question is no longer whether students and staff will use it, but whether universities can ensure it strengthens learning rather than weakens it. Used well, AI can support personalised feedback, stimulate creativity, and free academic time for deeper dialogue. Used poorly, it can erode critical thinking, distort assessment, and undermine trust.

    The difference lies not in the tools themselves but in how institutions guide their use through pedagogy, governance, and culture.

    AI is a cultural and pedagogical shift, not a software upgrade

    Across higher education, early responses to AI have often focused on tools. Yet treating AI as a bolt-on risks missing the real transformation: a shift in how academic communities think, learn, and make judgements.

    Some universities began with communities of practice rather than software procurement. At ESCP Business School, stakeholders, including staff and students, were invited to experiment with AI in teaching, assessment, and student support. These experiences demonstrated that experimentation is essential but only when it contributes to a coherent framework with shared principles and staff development.

    Three lessons have emerged from these rollouts. First, staff report using AI to draft feedback or generate case study variations, but final decisions and marking remain human. Second, students learn more when they critique AI rather than copy it: exercises where students compare AI responses to academic sources or highlight errors can strengthen critical thinking. Third, governance matters more than enthusiasm; clarity around data privacy, authorship, assessment and acceptable use is essential to protect trust.

    Assessment: the hardest and most urgent area of reform

    Once students can generate fluent essays or code in seconds, traditional take-home assignments are no longer reliable indicators of learning. At ESCP we have responded by: 

    • Introducing oral assessments, in-class writing, and step-by-step submissions to verify individual understanding.
    • Asking students to reference class materials and discussions, or unique datasets that AI tools cannot access.
    • Updating assessment rubrics to prioritise analytical depth, originality, transparency of process, and intellectual engagement.

    Students should be encouraged to state whether AI was used, how it contributed, and where its outputs were adapted or rejected. This mirrors professional practice by acknowledging assistance without outsourcing judgement. It also shifts universities from merely policing misconduct towards teaching and encouraging responsible use.

    AI literacy and academic inequality

    AI does not benefit all students equally. Those with strong subject knowledge are better able to question AI’s inaccuracies; others may accept outputs uncritically. 

    Generic workshops alone are insufficient. AI literacy must be embedded within disciplines, for example, in law through case analysis; in business via ethical decision-making; and in science through data validation. Students can be taught not just how to use AI, but how to test it, challenge it, and cite it appropriately.

    Staff development is equally important. Not all academics feel confident incorporating AI into feedback, supervision or assessments. Models such as AI champions, peer-led workshops, and campus coordinators can increase confidence and avoid digital divides between departments.

    Policy implications for UK higher education

    If AI adoption remains fragmented, the UK’s higher education sector risks inconsistency, inequity, and reputational damage. A strategic approach is needed at an institutional and a national level. 

    Universities should define the educational purpose of AI before adopting tools, and consider reforming assessments to remain robust. Structured professional development, opportunities for peer exchange, and open dialogue with students about what constitutes legitimate and responsible use will also support the effective integration of AI into the sector.

    However, it’s not only institutions that need to take action. Policymakers and sector bodies should develop shared reference points for transparency and academic integrity. As a nation, we must invest in research into AI’s impact on learning outcomes and ensure quality frameworks reflect AI’s role in higher education processes, such as assessment and skills development.

    The European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) sets a prescriptive model for compliance in education. The UK’s principles-based approach gives universities flexibility, but this comes with accountability. Without shared standards, the sector risks inconsistent practice and erosion of public trust. A reduction in employability may also follow if students are not taught how to use AI ethically while continuing to develop their critical thinking and analytical skills.

    Implications for the sector

    The experience of institutions like ESCP Business School shows that the quality of teaching with AI depends less on the technology itself than on the judgement and educational purpose guiding its use. 

    Generative AI is already an integral part of students’ academic lives; higher education must now decide how to shape that reality. Institutions that approach AI through strategy, integrity, and shared responsibility will not only protect learning, but renew it, strengthening the human dimension that gives teaching its meaning.


  • Generative AI and the REF: closing the gap between policy and practice

    Generative AI and the REF: closing the gap between policy and practice

    This blog was kindly authored by Liam Earney, Managing Director, HE and Research, Jisc.

    The REF-AI report, which was funded by Research England and co-authored by Jisc and the Centre for Higher Education Transformations (CHET), was designed to provide evidence to help the sector prepare for the next REF. Its findings show that Generative AI is already shaping the approaches that universities adopt. Some approaches are cautious and exploratory, some are inventive and innovative, and most of this activity is happening quietly in the background. GenAI in research practice is no longer theoretical; it is part of the day-to-day reality of research and research assessment.

    For Jisc, some of the findings in the report are unsurprising. We see every day how digital capability is uneven across the sector, and how new tools arrive before governance has had a chance to catch up. The report highlights an important gap between emerging practice and policy – a gap that the sector can now work collaboratively to close. UKRI has already issued guidance on generative AI use in funding applications and assessment: emphasising honesty, rigour, transparency, and confidentiality. Yet the REF context still lacks equivalent clarity, leaving institutions to interpret best practice alone. This work was funded by Research England to inform future guidance and support, ensuring that the sector has the evidence it needs to navigate GenAI responsibly.

    The REF-AI report rightly places integrity at the heart of its recommendations. Recommendation 1 is critical to support transparency and avoid misunderstandings: every university should publish a clear policy on using Generative AI in research, and specifically in REF work. That policy should outline what is acceptable and require staff to disclose when AI has helped shape a submission.

    This is about trust and about laying the groundwork for a fair assessment system. At present, too much GenAI use is happening under the radar, without shared language or common expectations. Clarity and consistency will help maintain trust in an exercise that underpins the distribution of public research funding.

    Unpicking a patchwork of inconsistencies

    We now have insight into real practice across UK universities. Some are already using GenAI to trawl for impact evidence, to help shape narratives, and even to review or score outputs. Others are experimenting with bespoke tools or home-grown systems designed to streamline their internal processes.

    This kind of activity is usually driven by good intentions. Teams are trying to cope with rising workloads and the increased complexity that comes with each REF cycle. But when different institutions use different tools in different ways, the result is not greater clarity. It is a patchwork of inconsistent practices and a risk that those involved do not clearly understand the role GenAI has played.

    The report notes that most universities still lack formal guidance and that internal policy discussions are only just beginning. In fact, practice has moved so far ahead of governance that many colleagues are unaware of how much GenAI is already embedded in their own institution’s REF preparation, or for professional services, how much GenAI is already being used by their researchers.

    The sector digital divide

    This is where the sector can work together, with support from Jisc and others, to help narrow the divide that exists. The survey results tell us that many academics are deeply sceptical of GenAI in almost every part of the REF. Strong disagreement is common and, in some areas, reaches seventy per cent or more. Only a small minority sees value in GenAI for developing impact case studies.

    In contrast, interviews with senior leaders reveal a growing sense that institutions cannot afford to ignore this technology. Several Pro Vice Chancellors told us that GenAI is here to stay and that the sector has a responsibility to work out how to use it safely and responsibly.

    This tension is familiar to Jisc. GenAI literacy is uneven, as is confidence, and even general digital capability. Our role is to help universities navigate that unevenness. In learning and teaching, this need is well understood, with our AI literacy programme for teaching staff well established. The REF AI findings make clear that similar support will be needed for research staff.

    Why national action matters

    If we leave GenAI use entirely to local experimentation, we will widen the digital divide between those who can invest in bespoke tools and those who cannot. The extent to which institutions can benefit from GenAI is tightly bound to their resources and existing expertise. A national research assessment exercise cannot afford to leave that unaddressed.

    We also need to address research integrity, and that should be the foundation for anything we do next. If the sector wants a safe and fair path forward, then transparency must come first. That is why Recommendation 1 matters. The report suggests universities should consider steps such as:

    • define where GenAI can and cannot be used
    • require disclosure of GenAI involvement in REF related work
    • embed these decisions into their broader research integrity and ethics frameworks

    As the report notes, current thinking about GenAI rarely connects with responsible research assessment initiatives such as DORA or CoARA; that gap has to close.

    Creating the conditions for innovation

    These steps do not limit innovation; they make innovation possible in a responsible way. At Jisc we already hear from institutions looking for advice on secure, trustworthy GenAI environments. They want support that will enable experimentation without compromising data protection, confidentiality or research ethics. They want clarity on how to balance efficiency gains with academic oversight. And they want to avoid replicating the mistakes of early digital adoption, where local solutions grew faster than shared standards.

    The REF AI report gives the sector the evidence it needs to move from informal practice to a clear, managed approach.

    The next REF will arrive at a time of major financial strain and major technological change. GenAI can help reduce burden and improve consistency, but only if it is used transparently and with a shared commitment to integrity. With the right safeguards, GenAI could support fairness in the assessment of UK research.

    From Jisc’s perspective, this is the moment to work together. Universities need policies. Panels need guidance. And the sector will need shared infrastructure that levels the field rather than widening existing gaps.


  • How generative AI could re-shape professional services and graduate careers

    How generative AI could re-shape professional services and graduate careers

    Join HEPI and the University of Southampton for a webinar on Monday 10 November 2025 from 11am to 12pm to mark the launch of a new collection of essays, AI and the Future of Universities. Sign up now to hear our speakers explore the collection’s key themes and the urgent questions surrounding AI’s impact on higher education.

    This blog was kindly authored by Richard Brown, Associate Fellow at the University of London’s School of Advanced Study.

    Universities are on the front line of a new technological revolution. Generative AI (genAI) use (mainly large language model-based chatbots like ChatGPT and Claude) is almost universal among students. Plagiarism and accuracy are continuing challenges, and universities are considering how learning and assessment can respond positively to the daunting but uneven capabilities of these new technologies.

    How genAI is transforming professional services

    The world of work that students face after graduation is also being transformed. While it is unclear how much of the current slowdown in graduate recruitment can be attributed to current AI use, or uncertainty about its long-term impacts, it is likely that graduate careers will see great change as the technology develops. Surveys by McKinsey indicate that adoption of AI spread fastest during 2023/24 in media, communications, business, legal and professional services – the sectors with the highest proportions of graduates in their workforce (around 80 per cent in London and 60 per cent in the rest of the UK).

    ‘Human-centric’, a new report from the University of London, looks at how AI is being adopted by professional service firms, and at what this might mean for the future shape and delivery of higher education.

    The report identifies how AI is being adopted both through grassroots initiatives and corporate action. In some firms, genAI is still the preserve of ‘secret cyborgs’ –  individual workers using chatbots under the radar. In others, task forces of younger workers have been deployed to find new uses for the tech to tackle chronic workflow problems or develop new services. Lawyers and accountants are codifying expertise into proprietary knowledge bases. These are private chatbots that minimise the risks of falsehood that still plague open systems, and offer potential to extend cheap professional-grade advice to many more people.

    Graduate careers re-thought

    What does this mean for graduate employment and skills? Many of the routine tasks frequently allocated to graduates can be automated through AI. This could be a double-edged sword. On the one hand, genAI may open up more varied and engaging ways for graduates to develop their skills, including the applied client-facing and problem-solving capabilities that underpin professional practice.

    On the other hand, employers may question whether they need to employ as many graduates. Some of our interviewees talked of the potential for the ‘triangle’ structure of mass graduate recruitment being replaced by a ‘diamond-shaped’ refocus on mid-career hires. The obvious problem with this approach – of where mid-career hires will come from if there is no graduate recruitment – means that graduate recruitment is unlikely to dry up in the short term, but graduate careers may look very different as the knowledge economy is transformed.

    The agile university in an age of career turbulence

    This will have an impact on universities as well as employers. AI literacy, and the ability to use AI responsibly and authentically, are likely to become baseline expectations – suggesting that this should be core to university teaching and learning. Intriguingly, this is less about traditional computing skills and more about setting AI in context: research shows that software engineers were less in demand in early 2025 than AI ethicists and compliance specialists.

    Broader ‘soft’ skills (what a previous University of London / Demos report called GRASP skills – general, relational, analytic, social and personal) will remain in demand, particularly as critical judgement, empathy and the ability to work as a team remain human-centric specialities. Employers also said that, while deep domain knowledge was still needed to assess and interrogate AI outputs, they were also looking for employees with a broader understanding of issues such as cybersecurity, climate regulation and ESG (Environmental, Social, and Governance), who could work across diverse disciplines and perspectives to create new knowledge and applications.

    The shape of higher education may also need to change. Given the speed of advances in AI, it is likely that most propositions about which skills will be needed in the future may quickly become outdated (including this one). This will call for a more responsive and agile system, which can experiment with new course content and innovative teaching methods, while sustaining the rigour that underpins the value of their degrees and other qualifications.

    As the Lifelong Learning Entitlement is implemented, the relationship between students and universities may also need to become more long-term, rather than an intense three-year affair. Exposure to the world of work will be important too, but this needs to be open to all, not just to those with contacts and social capital.

    Longer term – beyond workplace skills?

    In the longer term, all bets are off, or at least pretty risky. Public concerns (over everything from privacy, to corporate control, to disinformation, to environmental impact) and regulatory pressures may slow the adoption of AI. Or AI may so radically transform our world that workplace skills are no longer such a central concern. Previous predictions of technology unlocking a more leisured world have not been realised, but maybe this time it will be different. If so, universities will not just be preparing students for the workplace, but also helping students to prepare for, shape and flourish in a radically transformed world.

    Source link

  • Teaching Alongside Generative AI for Student Success

    Teaching Alongside Generative AI for Student Success

    A growing share of colleges and universities are embedding artificial intelligence tools and AI literacy into the curriculum with the intent of aiding student success. A 2025 Inside Higher Ed survey of college provosts found that nearly 30 percent of respondents have reviewed curriculum to ensure that it will prepare students for AI in the workplace, and an additional 63 percent say they have plans to review curriculum for this purpose.

    Touro University in New York is one institution that’s incentivizing faculty to engage with AI tools, including embedding simulations into academic programs.

    In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with Shlomo Argamon, associate provost for artificial intelligence at Touro, to discuss the university policy for AI in the classroom, the need for faculty and staff development around AI, and the risks of gamification of education.

    An edited version of the podcast appears below.

    Q: How are you all at Touro thinking about AI? Where is AI integrated into your campus?

    A: When we talk about the campus of Touro, we actually have 18 or 19 different campuses around the country and a couple even internationally. So we’re a very large and very diverse organization, which does affect how we think about AI and how we think about issues of the governance and development of our programs.

    That said, we think about AI primarily as a new kind of interactive technology, which is best seen as assistive to human endeavors. We want to teach our students how to use AI effectively in what they do and how to understand and properly mitigate the risks of using AI improperly, but above all, to always think about AI in a human context.

    When we think about integrating AI for projects, initiatives, organizations, what have you, we need to first think about the human processes that are going to be supported by AI and then how AI can best support those processes while mitigating the inevitable risks. That’s really our guiding philosophy, and that’s true in all the ways we’re teaching students about AI, whether we’re teaching students specifically, deeply technical [subjects], preparing them for AI-centric careers or preparing them to use AI in whatever other careers they may pursue.

    Q: When it comes to teaching about AI, what is the commitment you all make to students? Is it something you see as a competency that all students need to gain or something that is decided by the faculty?

    A: We are implementing a combination—a top-down and a bottom-up approach.

    One thing that is very clear is that every discipline, and in fact every course and faculty member, will have different needs and different constraints, as well as different AI competencies that are relevant to that particular field and that particular topic. We also believe that nobody yet knows the right way to teach about AI, to implement AI, or to develop AI competencies in students.

    We need to encourage and incentivize all our faculty to be as creative as possible in thinking about the right ways to teach their students about AI, how to use it, how not to use it, etc.

    So No. 1 is, we’re encouraging all of our faculty at all levels to be thinking and developing their own ideas about how to do this. That said, we also believe very firmly that all students, all of our graduates, need to have certain fundamental competencies in the area of AI. And the way that we’re doing this is by integrating AI throughout our general education curriculum for undergraduates.

    Ultimately, we believe that most, if not all, of our general education courses will include some sort of module about AI, teaching students specifically the AI competencies that are relevant to the particular topics they’re learning, whether it’s writing, reading skills, presentations, math, science, history, the different kinds of cognition and skills that you learn in different fields.

    So No. 1, they’re learning it not all at once. And also, very importantly, it’s not isolated from the topics, from the disciplines that they’re learning, but it’s integrated within them so that they see it as … part of writing is knowing how to use AI in writing and also knowing how not to. Part of learning history is knowing how to use AI for historical research and reasoning and knowing how not to use it, etc. So we’re integrating that within our general education curriculum.

    Beyond that, we also have specific courses in various AI skills, both at the undergraduate [and] at the graduate level, many of which are designed for nontechnical students to help them learn the skills that they need.

    Q: Because Touro is such a large university and it’s got graduate programs, online programs, undergraduate programs, I was really surprised that there is an institutional AI policy.

    A lot of colleges and universities have really grappled with, how do we institutionalize our approach to AI? And some leaders have kind of opted out of the conversation and said, “We’re going to leave it to the faculty.” I wonder if we could talk about the AI policy development and what role you played in that process, and how that’s the overarching, guiding vision when it comes to thinking about students using and engaging with AI?

    A: That’s a question that we have struggled with, as all academic leaders, as you mentioned, struggle with it.

    Our approach is to create policy at the institutional level that provides only the necessary guardrails and guidance, which then enables each of our schools, departments and individual faculty members to implement the right solutions for their particular areas within that guidance and those guardrails, so that it’s done safely and so that we know it’s going, overall, in a positive and institutionally consistent direction, to some extent.

    In addition, one of the main functions of my office is to provide support to the schools, departments and especially the faculty members to make this transition and to develop what they need.

    It’s an enormous burden on faculty members, not just to add AI content to their classes, if they do so, but to shift the way that we teach and the way that we do assessments. Even the way that we relate to our students has to shift, to change, and that creates a burden on them.

    It’s a process to develop resources and to develop ways of doing this. The people who work in our office and I have regular office hours to talk to faculty and to work with them. One of the most important things that we do, and we spend a lot of time and effort on this, is training for our faculty and staff on AI: on using AI, on teaching about AI, on the risks of AI and mitigating those risks, on how to think about AI. It all comes down to making sure that our faculty and staff have the tools they need to make this a success, because they are the university, and they’re the ones who are going to make all of this a success.

    I would say that while on many questions there are no right or wrong answers, only different perspectives and different opinions, I think there is one right answer to “What does a university need to do institutionally to ensure success at dealing with the challenge of AI?” It’s to support and train the faculty and staff, who are the ones who are going to make whatever the university does a success or a failure.

    Q: Speaking of faculty, there was a university faculty innovation grant program that sponsored faculty to take on projects using AI in the classroom. Can you talk a little bit about that and how that’s been working on campus?

    A: We have an external donor who donated funds so that we were able to award nearly 100 faculty innovation challenge grants for developing methods of integrating AI into teaching.

    Faculty members applied and did development work over the summer, and they’re implementing in their fall courses right now. We’re currently going through the initial set of faculty reports on their projects, and we have projects from all over the university, in all different disciplines, with many different approaches to looking at how to use AI.

    At the beginning of next spring, we’re going to have a conference workshop to bring everybody together so we can share all of the different ways that people try to do this. Some experiments, I’m sure, will not have worked, but that’s also incredibly important information, because what we’re seeking to do [is], we’re seeking to help our students, but we’re also seeking to learn what works, what doesn’t work and how to move forward.

    Again, this goes back to our philosophy that we want to unleash the expertise, intelligence, creativity of our faculty—not top down to say, “We have an AI initiative. This is what you need to be doing”—but, instead, “Here’s something new. We’ll give you the tools, we’ll give you the support. We’ll give you the funding to make something happen, make interesting things happen, make good things for your students happen, and then let’s talk about it and see how it worked, and keep learning and keep growing.”

    Q: I was looking at the list of faculty innovation grants, and I saw that there were a few other simulations. There was one for educators helping with classroom simulations. There was one with patient interactions for medical training. It seems like there’s a lot of different AI simulations happening in different courses. I wonder if we can talk about the use of AI for experiential learning and why that’s such a benefit to students.

    A: Ever since there’s been education, there’s been this kind of distinction between book learning and real-world learning, experiential learning and so forth. There have always been those who have questioned the value of a college education because you’re just learning what’s in the books and you don’t really know how things really work, and that criticism has some validity.

    But what we’re trying to do and what AI allows us to do [is], it allows us and our students to have more and more varied experiences of the kinds of things they’re trying to learn and to practice what they’re doing, and then to get feedback on a much broader level than we could do before. Certainly, whenever you had a course in say, public speaking, students would get up, do some public speaking, get feedback and proceed. Now with AI, students can practice in their dorm rooms over and over and over again and get direct feedback; that feedback and those experiences can be made available then to the faculty member, who can then give the students more direct and more human or concentrated or expert feedback on their performance based on this, and it just scales.

    In the medical field, this is where it’s hugely, hugely important. There’s a long-standing institution in medical education called the standardized patient. Traditionally it’s a human actor who learns to act as a patient; they’re given a profile of what disorders they’re supposed to have and how they’re supposed to act, and then students can practice, whether it’s diagnostic skills or questions of patient care and bedside manner, and then get expert feedback.

    We now have, to a large extent, AI systems that can do this, whether it’s an interactive text-based simulation or a voice-based simulation. We also have robotic mannequins that the students can work with that are AI-powered, with AI doing the conversation. Then they can be doing physical exams on the mannequins, which simulate different kinds of conditions, and again, this gives the possibility of really scaling up this kind of experiential learning. Another kind of AI that has been found useful in a number of our programs, particularly in our business program, is a system that watches people give presentations and can give real-time feedback, and that works quite well.

    Q: These are interesting initiatives, because they cut out the middleman of needing a third party or maybe a peer to help the student practice the experience. But in some ways, does it gamify it too much? Is it too much like video games for students? How have you found that these are realistic enough to prepare students?

    A: That is indeed a risk, and one that we need to watch. As in nearly everything that we’re doing, there are risks that need to be managed and cannot be solved. We need to be constantly alert and watching for these risks and ensuring that we don’t overstep one boundary or another.

    When you talk about the gamification, or the video game nature of this, the artificial nature of it, there are really two pieces to it. One piece is the fact that there is no mannequin that exists, at least today, that can really simulate what it’s like to examine a human being and how the human being might react.

    AI chatbots, as good as they are, will not, now or in the foreseeable future at least, be able to simulate human interactions entirely accurately. So there’s always going to be a gap. What we need to do, as with other kinds of education, is recognize that: you read a book, the book is not going to be perfect, and your understanding of the book is not going to be perfect. There has to be an iterative process of learning. We have to have more realistic simulations, different kinds of simulations, so the students can, in a sense, mentally triangulate their different experiences to learn to do things better. That’s one piece of it.

    The other piece, when you say gamification, there’s the risk that it turns into “I’m trying to do something to stimulate getting the reward or the response here or there.” And there’s a small but, I think, growing research literature on gamification of education, where if you gamify a little bit too much, it becomes more like a slot machine, and you’re learning to maneuver the machine to give you the dopamine hits or whatever, rather than really learning the content of what you’re doing. The only solution to that is for us to always be aware of what we’re doing and how it’s affecting our students and to adjust what we’re doing to avoid this risk.

    This goes back to one of the key points: Our whole philosophy of this is to always look at the technology and the tools, whether AI or anything else, as embedded within a larger human context. The key here is understanding that when we implement some educational experience for students, whether it involves AI or technology or not, we are always creating incentives for the students to behave in a certain way. What are those incentives, and are those incentives aligned with the educational objectives that we have for the students? That’s the question that we always need to be asking ourselves and also observing, because with AI, we don’t entirely know what those incentives are until we see what happens. So we’re constantly learning and trying to figure this out as we go.

    If I could just comment on that peer-to-peer simulation: Medical students poking each other or social work students interviewing each other for a social work kind of exam has another important learning component, because the student that is being operated upon is learning what it’s like to be in the other shoes, what it’s like to be the patient, what it’s like to be the object of investigation by the professional. And empathy is an incredibly important thing, and understanding what it’s like for them helps the students to learn, if done properly, to do it better and to have the appropriate sort of relationship with their patients.

    Q: You also mentioned these simulations give the faculty insight into how the student is performing. I wonder if we can talk about that; how is that real-time feedback helpful, not only for the student but for the professor?

    A: Now, one thing that needs to be said is that it’s very difficult, often, to understand where all of your students are in the learning process, what specifically they need. We can be deluged by data, if we so choose, that may confuse more than enlighten.

    That said, the data that come out of these systems can definitely be quite useful. One example is that there are some writing assistance programs, Grammarly and its ilk, that can provide the exact provenance of writing assignments to the faculty, so they can show the faculty exactly how something was composed. Which parts did the student write first? Which parts did they write second? Maybe they outlined it, then they revised this and changed that, and then they cut and pasted something from somewhere else and edited it.

    All of those kinds of things give the faculty member much more detailed information about the student’s process, which can enable the faculty to give the students much more precise and useful feedback on their own learning. What do they perhaps need to be doing differently? What are they doing well? And so forth. Because then you’re not just looking at a final paper, or even at a couple of drafts, and trying to infer what the student was doing so that you can give them feedback; you can actually see it more or less in real time.

    That’s the sort of thing where the data can be very useful. And again, I apologize if I sound like a broken record: it all goes back to the human aspect of this, and to using data that helps the faculty member see the individual student, with their own individual ways of thinking, behaving and incorporating knowledge, and to be able to relate to them more as an individual.

    Briefly and parenthetically, one of the great hopes that we have for integrating AI into the educational process is that AI can help take away many of the bureaucratic and other burdens placed on faculty, and free and enable them in different ways to enhance their human relationship with their students, so that we can get back to the core of education, which really, I believe, is the transfer of knowledge and understanding through a human relationship between teacher and student.

    It’s not what might be termed the “jug metaphor” for education, where I, the faculty member, have a jug full of knowledge, and I’m going to pour it into your brain, but rather, I’m going to develop a relationship with you, and through this relationship, you are going to be transformed, in some sense.

    Q: This could be a whole other podcast topic, but I want to touch on this briefly. There is a risk sometimes when students are using AI-powered tools and faculty are using AI-powered tools that it is the AI engaging with itself and not necessarily the faculty with the students. When you talk about allowing AI to lift administrative burdens or ensure that faculty can connect with students, how can we make sure that it’s not robot to robot but really person to person?

    A: That’s a huge and a very important topic, and one which I wish that I had a straightforward and direct and simple answer for. This is one of those risks that has to be mitigated and managed actively and continually.

    One of the things that we emphasize in all our trainings for faculty and staff and all our educational modules for students about AI is the importance of the AI assisting you, rather than you assisting the AI. If the AI produces some content for you, it has to be within a process in which you’re not just reviewing it for correctness, but in which you are producing the content and the AI is helping you to do so, in some sense.

    That’s a little bit vague, because it plays out differently in different situations. That’s the case for faculty members who are producing a syllabus or using AI to produce other content for their courses: they need to make sure that it’s content that they are producing with AI. The same goes for students using AI.

    For example, our institutional AI policy having to do with academic honesty and integrity is, I believe, groundbreaking, in the sense that our default policy for courses that don’t have a specific policy regarding the use of AI in that course (by next spring, all courses must have a specific policy) is that students are allowed to use AI for a very wide variety of tasks on their assignments.

    You can’t use AI to simply do your assignment for you. That is forbidden. The key is that the work has to be the work of the student, but AI can be used to assist. We’ve established this as the default policy (faculty, department chairs and deans have wide latitude to define more or less restrictive policies with specific carve-outs, simply because every field is different and the needs are different), and the default and basic attitude is: AI is a tool. You need to learn to use it well and responsibly, whatever you do.

    Q: I wanted to talk about the future of AI at the university. Are there any new initiatives you should tell our listeners about? How are you all thinking about continuing to develop AI as a teaching and learning tool?

    A: It’s hard for me to talk about specific initiatives, because we believe that AI within higher education particularly, but I think in general as well, is fundamentally a start-up economy, in the sense that nobody, and I mean nobody, knows what to do with it, how to deal with it, how it works and how it doesn’t work.

    Therefore, our attitude is that we want to run as many experiments as we can, to try as many different things as we can: different ways of teaching students, different ways of using AI to teach, whether it’s through simulations, content creation, some sort of AI teaching assistants working with faculty members, or faculty members coming up with very creative assignments that enable students to learn the subject matter more deeply by having AI assist them with very difficult tasks, perhaps, or tasks that require great creativity, or something like that.

    The sky is the limit, and we want all of our faculty to experiment and develop. We’re seeking to create that within the institution, and Touro is a wonderful institution for it, because we already have the basic entrepreneurial culture for this. So the university as a whole is an entrepreneurial ecosystem for experimenting with and developing ways of teaching about, with and through AI.

    Source link

  • Pause for REFlection: Time to review the role of generative AI in REF2029

    Pause for REFlection: Time to review the role of generative AI in REF2029

    Author:
    Nick Hillman

    • This blog has been kindly written for HEPI by Richard Watermeyer (Professor of Higher Education and Co-Director of the Centre for Higher Education at the University of Bristol), Tom Crick (Professor of Digital Policy at Swansea University) and Lawrie Phipps (Professor of Digital Leadership at the University of Chester and Senior Research Lead at Jisc).

    For as long as there have been national research assessment exercises (REF, RAE or otherwise), there have been efforts to improve the way research is evaluated and Quality Related (QR) research funding is consequently distributed. Where REF2014 stands out for its introduction of impact as a measure of what counts as research excellence, REF2029 has been all about research culture. Yet where impact has become an integral dimension of the REF, the installation of research culture (within a far weightier environment statement or, as has been proposed, a People, Culture and Environment (PCE) statement) as a criterion of excellence appears far less assured, especially when set against a three-month extension to REF2029 plans.

    A temporary pause on proceedings has been announced by Sir Patrick Vallance, the UK Government’s Minister for Science, as a means to ensure that the REF provides ‘a credible assessment of quality’. The corollary is that the hitherto proposed formula (many parts of which remain formally undeclared, much to the frustration of universities’ REF personnel and indeed researchers) is not quite fit for purpose, and certainly not if the REF is to ‘support the government’s economic and social missions’. Thus, it may transpire that research culture is ultimately downplayed or omitted from the REF. For some, this volte face, if it materialises, may be greeted with relief: a pragmatic step back from the jaws of an accountability regime that has become excessively complex, costly and inefficient (if not estranged from the core business of evaluating and then funding so-called ‘excellent’ research), despite proclamations at the conclusion of its every instalment that next time it will be less burdensome.

    While the potential backtrack on research culture and the potential abandonment of PCE statements will be the focus of explanations for the REF’s most recent hiatus, these may be only cameos in a discussion of its wider credibility and utility; a discussion which appears to be reaching its apotheosis, not least given the financial difficulties endemic to the UK sector, which the REF, with its substantial cost, is counted as further exacerbating. Moreover, as we are finding in our current research, the REF may have entered a period not of incremental reform and tinkering at the edges but of wholesale revision; and this as a consequence of higher education’s seemingly unstoppable colonisation by artificial intelligence.

    With recent funding from Research England, we have undertaken to consult with research leaders and specialist REF personnel embedded across 17 UK HEIs – including large, research-intensive institutions and those historically with a more modest REF footprint – to gain an understanding of existing views of, and practices in, the adoption of generative AI tools for REF purposes. While our study has thrown up multiple views as to the utility and efficacy of using generative AI tools for REF purposes, it has nonetheless revealed broad consensus that the REF will inevitably become more AI-infused and AI-enabled and may, if it is to survive, ultimately become entirely automated. The use of generative AI for narrative generation, evidence reconnaissance, and the scoring of core REF components (research outputs and impact case studies) has been mooted as a set of potential applications with significant cost- and labour-saving affordances, and ones which might also move closer to ongoing, real-time assessments of research quality, unrestricted to seven-year assessment cycles. Yet the use of generative AI has also been (often strongly) cautioned against for the myriad ways in which it is prone to bias and inaccuracy (as a ‘black box’ tool) and can itself be gamed, for instance through ‘adversarial white text’. This is coupled with wider ongoing scientific and technical considerations regarding transparency, provenance and reproducibility. Some even interpret its use as antithetical to the terms of responsible research evaluation set out by collectives like CoARA and COPE.

    Notwithstanding such objections, we are witnessing these tools being used extensively (if, in many settings, tacitly and tentatively) by academics and professional services staff involved in REF preparations. We are also being presented with the view that the use of GenAI tools by REF panels in four years’ time is a fait accompli, especially given the speed at which the tools are being innovated. It may even be, as the current pause intimates, that GenAI tools could be purposed in ways that circumvent the challenges of human judgement in the evaluation of research culture. Moreover, if the credibility and integrity of the REF ultimately rest in its capacity to demonstrate excellence via alignment with Government missions (particularly ‘R&D for growth’), then we are already seeing evidence of how AI technologies can achieve this.

    While arguments have previously been made that the REF offers good value for (public) money, the immediate joint contexts of severe financial hardship for the sector, ambivalence as to the organisational credibility of the REF as currently proposed, and the attractiveness of AI solutions may produce a new calculation. This is a calculation, however, which the sector must own, transparently and honestly. It should not be wholly outsourced, and especially not to one of a small number of dominant technology vendors. A period of review must attend not only to the constituent parts of the REF but to how these are actioned and responded to. A guidebook for GenAI use in the REF is urgently needed, and it must place consistent practice at its heart. The current and likely escalating impact of generative AI on the REF cannot be overlooked if the REF is to be claimed as a credible assessment of quality. The question then remains: is three months enough?

    Notes

    • The REF-AI study is due to report in January 2026. It is a research collaboration between the universities of Bristol and Swansea and Jisc.
    • With generous thanks to Professor Huw Morris (UCL IoE) for his input into earlier drafts of this article.

    Source link

  • We Can’t Ban Generative AI but We Can Friction Fix It (opinion)

    We Can’t Ban Generative AI but We Can Friction Fix It (opinion)

    As the writing across the curriculum and writing center coordinator on my campus, I am often asked by faculty how to detect their students’ use of generative AI and how to prevent it. My response to both questions is that we can’t.

    In fact, it’s becoming increasingly hard to not use generative AI. Back in 2023, according to a student survey conducted on my campus, some students were nervous to even create ChatGPT accounts for fear of being lured into cheating.  It used to be that a student had to seek it out, create an account and feed it a prompt. Now that generative AI is integrated into programs we already use—Word (Copilot), Google Docs (Gemini) and Grammarly—it’s there beckoning us like the chocolate stashed in my cupboard does around 9 p.m. every night.

    A recent GrammarlyGO advertisement emphasizes the seamless integration of generative AI. In the first 25 seconds of this GrammarlyGO ad, a woman’s confident voice tells us that GrammarlyGO is “easy to use” and that it’s “easy to write better and faster” with just “one download” and the “click of a button.” The ad also seeks to remove any concerns about generative AI’s nonhumanness and detectability: it’s “personalized to you”; “understands your style, voice and intent so your writing doesn’t sound like a robot”; and is “custom-made.” “You’re in control,” and “GrammarlyGO helps you be the best version of yourself.”  The message: Using GrammarlyGO’s generative AI to write is not cheating, it’s self-improvement. 

    This ad calls to my mind the articles we see every January targeting those of us who want to develop healthy habits. The ones that urge us to sleep in our gym clothes if we want to start a morning workout routine. If we sleep in our clothes, we’ll reduce obstacles to going to the gym. Some of the most popular self-help advice focuses on the role of reducing friction to enable us to build habits that we want to build. Like the self-help gurus, GrammarlyGO—and all generative AI companies—are strategically seeking to reduce friction by reducing time (“faster”), distance (it’s “where you write”) and effort (it’s “easy”!).

    Where does this leave us? Do we stop assigning writing? Do we assign in-class writing tests? Do we start grading AI-produced assignments by providing AI-produced feedback? 

    Nope. 

    If we recognize the value of writing as a mode of thinking and believe that effective writing requires revision, we will continue to assign writing. While there is a temptation to shift to off-line, in-class timed writing tests, this removes the opportunity for practicing revision strategies and disproportionately harms students with learning disabilities, as well as English language learners.  

    Instead, like Grammarly, we can tap into what the self-help people champion and engage in what organizational behavior researchers Hayagreeva Rao and Robert I. Sutton call “friction fixing.” In The Friction Project (St. Martin’s Press, 2024), they explain how to “think and live like a friction fixer who makes the right things easier and the wrong things harder.” We can’t ban AI, but we can friction fix by making generative AI harder to use and by making it easier to engage in our writing assignments. This does not mean making our writing assignments easier! The good news is that this approach draws on practices already central to effective writing instruction. 

    After 25 years of working in writing centers at three institutions, I’ve witnessed what stalls students, and it is rarely a lack of motivation. The students who use the writing center are invested in their work, but many can’t start or get stuck. Here are two ways we can decrease friction for writing assignments: 

    1. Break research projects into steps and include interim deadlines, conferences and feedback from you or peers. Note that the feedback doesn’t have to be on full drafts but can be on short pieces, such as paragraph-long project proposals (identify a problem, research question and what is gained if we answer this research question).
    2. Provide students with time to start on writing projects in class. Have you ever distributed a writing assignment, asked, “any questions?” and been met with crickets? If we give students time to start writing in class, we or peers can answer questions that arise, leaving students to feel more confident that they are going in the right direction and hopefully less likely to turn to AI.

    There are so many ways we faculty (unintentionally) make our assignments uninviting: the barrage of words on a page, the lack of white space, our practice of leading with requirements (citation style, grammatical correctness), the use of SAT words or discipline-specific vocabulary for nonmajors: All this can signal to students that they don’t belong even before they’ve gotten started. Sometimes, our assignment prompts can even sound annoyed, as our frustration with past students is misdirected toward current students and manifests as a long list of don’ts. The vibe is that of an angry Post-it note left for a roommate or partner who left their dishes in the sink … again!

    What if we were to reconceive our assignments as invitations to a party instead?  When we design a party invitation, we have particular goals: We want people to show up, to leave their comfort zones and to be open to engaging with other people. Isn’t that what we want from our students when we assign a writing project? 

    If we designed writing assignments as invitations rather than assessments, we would make them visually appealing and use welcoming language.  Instead of barraging students with all the requirements, we would foreground the enticing facets of the assignment. De-emphasize APA and MLA formatting and grammatical correctness and emphasize the purpose of the assignment. The Transparency in Learning and Teaching in Higher Education framework is useful for improving assignment layout. 

    Further, we can invite students to write for real-world audiences and wrestle with what John C. Bean calls “beautiful problems.” As Bean and Dan Melzer’s Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom (Wiley, 2021) emphasizes, problems are naturally motivating. From my 25 years of experience teaching writing, students are motivated to write when they:

    • write about issues they care about;
    • write in authentic genres and for real-world audiences;
    • share their writing in and beyond the classroom;
    • receive feedback on drafts from their professors and peers that builds on their strengths and provides specific tasks for how to improve their pieces; and
    • understand the usefulness of a writing project in relation to their future goals. 

    Much of this is confirmed by a three-year study conducted at three institutions that asked seniors to describe a meaningful writing project. If assignments are inviting and meaningful, students are more likely to do the hard work of learning and writing. In short, we can decrease friction preventing engagement with our assignments by making them sound inviting, by using language and layouts that take our audience into consideration, and by designing assignments that are not just assessments but opportunities to explore or communicate. 

    How then do we create friction when it comes to using generative AI? As a writing instructor, I truly believe in the power of writing to figure out what I think and to push myself toward new insights. Of course, this is not a new idea. Toni Morrison explains, “Writing is really a way of thinking—not just feeling but thinking about things that are disparate, unresolved, mysterious, problematic or just sweet.” If we can get students to truly believe this by assigning regular low-stakes writing and reinforcing this practice, we can help students see the limits of outsourcing their thinking to generative AI. 

    As generative AI emerged, I realized that even though my writing courses are designed to promote writing to think, I don’t explicitly emphasize the value of writing as a mode of discovery, so I have rewritten all my freewrite prompts so that I drive this point home: “This is low-stakes writing, so don’t worry about sentence structure or grammar. Feel free to write in your native language, use bullet points, or use speech-to-text. The purpose of this freewriting is to give you an opportunity to pause and reflect, make new connections, uncover a new layer of the issue, or learn something you didn’t know about yourself.” And one of my favorite comments to give on a good piece of writing is “I enjoy seeing your mind at work on the page here.”

    Additionally, we can create friction by getting to know our students and their writing. We can get to know their writing by collecting ungraded, in-class writing at the beginning of the semester. We can get to know our students by canceling class to hold short one-on-one or small group conferences. If we have strong relationships with students, they are less likely to cheat intentionally. We can build these bonds by sharing a video about ourselves, writing introductory letters, sharing our relevant experiences and failures, writing conversational feedback on student writing, and using alternative grading approaches that enable us to prioritize process above product. 

    There are no “AI-proof” assignments, but we can also create friction by assigning writing projects that don’t enable students to rely solely on generative AI, such as zines, class discussions about an article or book chapter, or presentations: Generative AI can design the slides and write the script, but it can’t present the material in class. Require students to include interactive components in their presentations so that they engage with their audiences. For example, a group of my first-year students gave a presentation on a selection from Jonathan Haidt’s The Anxious Generation, and they asked their peers to check their phones for their daily usage report and to respond to an anonymous survey.

    Another group created a game, asking the class to guess which books from a display had been banned at one point or another. We can assign group projects and give students time to work on these projects in class; presumably, students will be less likely to misuse generative AI if they feel accountable in some way to their group. We can do a demonstration for students by putting our own prompts through generative AI and asking students to critique the outputs. This has the two-pronged benefit of demonstrating to students that we are savvy while helping them see the limitations of generative AI. 

    Showing students generative AI’s limitations and the harm it causes will also help create friction. Generative AI’s tendency to hallucinate makes it a poor tool for research; its confident tone paired with its inaccuracy has earned it the nickname “bullshit machine.” Worse still are the environmental costs, the exploitation of workers, the copyright infringement, the privacy concerns, the explicit and implicit biases, the proliferation of mis/disinformation, and more. Students should be given the opportunity to research these issues for themselves so that they can make informed decisions about how they will use generative AI. Recently, I dedicated one hour of class time for students to work in groups researching these issues and then present what they found to the class. The students were especially galled by the privacy violations, the environmental impact and the use of writers’ and artists’ work without permission or compensation. 

    When we focus on catching students who use generative AI or banning it, we miss an opportunity to teach students to think critically, we signal to students that we don’t trust them and we diminish our own trustworthiness.  If we do some friction fixing instead, we can support students as they work to become nimble communicators and critical users of new technologies.

    Catherine Savini is the Writing Across the Curriculum coordinator, Reading and Writing Center coordinator, and a professor of English at Westfield State University. She enjoys designing and leading workshops for high school and university educators on writing pedagogy.

    Source link

  • It’s time we moved the generative AI conversation on

    It’s time we moved the generative AI conversation on

    • By Michael Grove, Professor of Mathematics and Mathematics Education and Deputy Pro-Vice-Chancellor (Education Policy and Academic Standards) at the University of Birmingham.

    We are well beyond the tipping point. Students are using generative AI – at scale. According to HEPI’s Student Generative AI Survey 2025, 92% of undergraduates report using AI tools, and 88% say they’ve used them in assessments. Yet only a third say their institution has supported them to use these tools well. For many, the message appears to be: “you’re on your own”.

    The sector’s focus has largely been on mitigating risk: rewriting assessment guidance, updating misconduct policies, and publishing tool-specific statements. These are necessary steps, but alone they’re not enough.

    Students use generative AI not to cheat, but to learn. But this use is uneven. Some know how to prompt effectively, evaluate outputs, and integrate AI into their learning with confidence and control. Others don’t. Confidence, access, and prior exposure all vary, by discipline, gender, and background. If left unaddressed, these disparities risk becoming embedded. The answer is not restriction, but thoughtful design that helps all students develop the skills to use AI critically, ethically, and with growing independence.

    If generative AI is already reshaping how students learn, we must design for that reality and start treating it as a literacy to be developed. This means moving beyond module-level inconsistency and toward programme-level curriculum thinking. Not everywhere, not all at once – but with intent, clarity, and care.

    We need programme-level thinking, not piecemeal policy

    Most universities now have institutional policies on AI use, and many have updated assessment regulations. But module-by-module variation remains the norm. Students report receiving mixed messages – encouraged to use AI in one context, forbidden in another, ignored in a third, and unsure in a fourth. This inconsistency leads to uncertainty and undermines both engagement and academic integrity.

    A more sustainable approach requires programme-level design. This means mapping where and how generative AI is used across a degree, setting consistent expectations and providing scaffolded opportunities for students to understand how these tools work, including how to use them ethically and responsibly. One practical method is to adopt a ‘traffic light’ or five-level framework to indicate what kinds of AI use are acceptable for each assessment – for example, preparing, editing, or co-creating content. These frameworks need not be rigid, but they must be clear and transparent for all.

    Such frameworks can provide consistency, but they are no silver bullet. In practice, students may interpret guidance differently or misjudge the boundaries between levels. A traffic-light system risks oversimplifying a complex space, particularly when ‘amber’ spans such a broad and subjective spectrum. Though helpful for transparency, they cannot reliably show whether guidance has been followed. Their value lies in prompting discussion and supporting reflective use.

    Design matters more than detection

    Rather than relying on unreliable detection tools or vague prohibitions, we must design assessments and learning experiences that either incorporate AI intentionally or make its misuse educationally irrelevant.

    This doesn’t mean lowering standards. It means doubling down on what matters in a higher education learning experience: critical thinking, explanation, problem-solving, and the ability to apply knowledge in unfamiliar contexts. In my own discipline of mathematics, students might critique AI-generated proofs, identify errors, or reflect on how AI tools influenced their thinking. In other disciplines, students might compare AI outputs with academic sources, or use AI to explore ideas before developing their own arguments.

    We must also protect space for unaided work. One model is to designate a proportion of each programme as ‘Assured’ – learning and assessment designed to demonstrate independent capability, through in-person, oral, or carefully structured formats. While some may raise concerns that this conflicts with the sector’s move toward more authentic, applied assessment, these approaches are not mutually exclusive. The challenge is to balance assured tasks with more flexible, creative, or AI-enabled formats. The rest of the curriculum can then be ‘Exploratory’, allowing students to explore AI more openly, and in doing so, broaden their skills and graduate attributes.

    Curriculum design should reflect disciplinary values

    Not all uses of AI are appropriate for all subjects. In mathematics, symbolic reasoning and proof can’t simply be outsourced. But that should not mean AI has no role. It can help students build glossaries, explore variants of standard problems, or compare different solution strategies. It can provoke discussion, encourage more interactive forms of learning, and surface misconceptions.

    These are not abstract concerns; they are design-led questions. Every discipline must ask:

    • What kind of skills, thinking and communication do we value?
    • How might AI support, or undermine, those aims?
    • How can we help students understand the difference?

    These reflections play out differently across subject areas. As recent contributions by Nick Hillman  and Josh Freeman underline, generative AI is prompting us to reconsider not just how students learn, but what now actually counts as knowledge, memory, or understanding.

    Without a design-led approach, AI use will default to convenience, putting the depth, rigour, and authenticity of the higher education learning experience at risk for all.

    Students need to be partners in shaping this future. Many already have deep, practical experience with generative AI and can offer valuable insight into how these tools support, or disrupt, real learning. Involving students in curriculum design, guidance, and assessment policy will help ensure our responses are relevant, authentic, and grounded in the realities of how they now learn.

    A call to action

    The presence of generative AI in higher education is not a future scenario, it is the present reality. Students are already using these tools, for better and for worse. If we leave them to navigate this alone, we risk widening divides, losing trust, and missing the opportunity to improve how we teach, assess, and support student learning.

    What’s needed now is a shift in narrative:

    • From panic to pedagogy
    • From detection to design
    • From institutional policy to consistent programme-level practice.

    Generative AI won’t replace teaching. But it will reshape how students learn. It’s now time we help them do so with confidence and purpose, through thoughtful programme-level design.

    Source link

  • How Students Use Generative AI Beyond Writing – Faculty Focus

    How Students Use Generative AI Beyond Writing – Faculty Focus

    Source link