Tag: thought

  • AI in Higher Education: Academic Thought Leadership

    How Faculty Expertise Boosts AI Search Results in Higher Ed

    Many higher education enrollment teams assume that the key to growth is spending more on paid leads. It feels logical: increase visibility, boost inquiries, fill the pipeline. Yet, too often, they end up paying for quantity, not quality — resulting in higher budgets that fail to yield students who are a good fit. They see short-term spikes in inquiries, followed by low conversion rates and retention challenges from mismatched students. 

    Achieving sustainable enrollment growth doesn’t have to mean spending more. What’s needed instead are smarter strategies that enable institutions to attract the right students earlier in the decision process — when they’re still exploring their options, defining their goals, and forming impressions of institutions’ credibility.

    Shifting the focus of enrollment strategies from paid acquisition to earned attention — building organic visibility, authority, and trust with prospects before they fill out an application form — is the key to true growth. This approach is increasingly important as more and more students use artificial intelligence (AI) to navigate their higher education journey. 

    Building Organic Demand with AI and GEO 

    AI is reshaping how students discover institutions and their programs. While Google used to dominate prospective students’ search efforts, students are increasingly using AI-powered search assistants such as ChatGPT and Gemini to find, summarize, and compare higher education offerings. A 2025 study by the Online and Professional Education Association (UPCEA) found that roughly 50% of prospective students use AI tools at least weekly to research programs, including about 24% who use them on a daily basis.

    As AI’s role in higher education marketing expands, institutions have begun to adopt generative engine optimization (GEO) strategies to improve their visibility in AI-driven search results. Unlike standard search engine optimization (SEO) — which focuses on keywords and backlinks — GEO prioritizes structured, authoritative content that AI systems can easily understand, cite, and incorporate into their responses. 

    When institutions feed these systems content featuring faculty-driven subject matter expertise and clearly structured information, they signal to AI systems that they are authoritative and credible sources worth surfacing in students’ search results. This makes it easier for these institutions to engage high-intent students earlier in their enrollment journey.
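
    To make the idea of “structured, authoritative content” concrete: one widely used tactic (offered here as an illustration, not something the article itself prescribes) is schema.org JSON-LD markup embedded in program pages, which gives search engines and AI crawlers explicit, machine-readable facts to cite. The sketch below is hypothetical; every program, institution name, person, and URL is a placeholder.

    ```python
    # Illustrative sketch only: schema.org JSON-LD is one common way to publish
    # the kind of structured, citable facts that GEO favors. All names and URLs
    # below are placeholders, not real institutional data.
    import json

    program_markup = {
        "@context": "https://schema.org",
        "@type": "Course",
        "name": "M.S. in Data Science",  # placeholder program name
        "description": "A graduate program taught by active researchers.",
        "provider": {
            "@type": "CollegeOrUniversity",
            "name": "Example University",      # placeholder institution
            "url": "https://www.example.edu",  # placeholder URL
        },
        "hasCourseInstance": {
            "@type": "CourseInstance",
            "courseMode": "online",
            "instructor": {
                "@type": "Person",
                "name": "Dr. Jane Doe",  # placeholder faculty expert
                "jobTitle": "Professor of Computer Science",
                "knowsAbout": ["machine learning", "AI ethics"],
            },
        },
    }

    # The resulting JSON would be embedded in the program page inside a
    # <script type="application/ld+json"> tag for crawlers to parse.
    print(json.dumps(program_markup, indent=2))
    ```

    The specific fields matter less than the principle: explicit, self-describing facts about programs and faculty are far easier for an AI system to extract and cite than the same information buried in marketing copy.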

    The Role of Faculty in Building Authority 

    No one conveys academic quality and institutional credibility better than the people who embody them. Faculty members represent some of an institution’s most trusted — yet often underutilized — marketing assets. Their expertise not only validates the institution and its programs but also humanizes them. 

    When faculty voices appear in thought leadership articles, Q&A features, or explainer videos, they do more than share knowledge — they strengthen confidence in the institution among both prospective students and their families. 

    Leveraged strategically, faculty expertise can enhance multiple facets of an institution’s marketing ecosystem:

    • Public relations: Faculty insights can position schools as trusted commentators in media coverage on industry trends. 
    • Search: Content that highlights subject matter expertise is seen as more credible by both traditional search engines and AI assistants, improving the content’s organic rankings and GEO performance. 
    • Enrollment marketing: Faculty-driven content that targets prospective students — such as video Q&As, informative blog posts, and interactive webinars — can help bridge the gap for these prospects between aspiration and application.

    When institutions center faculty in their marketing efforts, they connect academic storytelling with enrollment strategy, transforming their outreach from promotion into education.  

    Improving Efficiency and Results

    Today, higher ed enrollment growth depends on smarter strategy — not higher spending. Institutions can achieve greater success by balancing their paid and organic channels, building durable content engines, and aligning their marketing spend with actual enrollment outcomes.

    Balance Paid and Organic Marketing               

    Paid campaigns still have great value. But overreliance on them can drive up cost-per-enrollment (CPE) while producing prospective students who are a weaker fit. According to data from UPCEA, the average cost per enrolled student is more than $2,800. By mixing organic channels — faculty thought leadership pieces, GEO-friendly content — with paid efforts, institutions can achieve lower long-term costs while improving the fit and retention of their prospects.

    Create a Long-Term Content Engine               

    Temporary campaigns can deliver short-term boosts, but the real authority that leads to sustainable enrollment growth stems from consistent, faculty-led content. Building a content engine anchored in faculty expertise and optimized for AI-driven search allows institutions to maintain their visibility and credibility. Over time, this strategy can lower acquisition costs, boost engagement, and support retention.

    Align Marketing Spend With Enrollment Outcomes               

    Too often, marketing dollars are funneled toward maximizing the volume of leads rather than focusing on actual outcomes. True budget efficiency comes from aligning spend with each stage of the student life cycle — supporting strategies that move prospects from application to enrollment to persistence. When institutions’ budgets prioritize quality, engagement, and long-term fit over volume, they can strengthen both their conversion rates and their retention outcomes. 

    Key Takeaways

    • More leads don’t always translate to real growth. Sustainable enrollment comes from reaching the right students — not just more of them. 
    • By embracing GEO, leveraging AI in their higher education marketing strategy, and elevating faculty expertise, institutions can deliver content that builds organic authority and attracts qualified prospects earlier in their decision journey. This approach reduces institutions’ reliance on paid efforts, improves their cost efficiency, and enhances their credibility. 
    • Schools that invest in faculty-led content strategies can gain stronger conversions, better retention, and enduring brand trust — the foundation of meaningful, measurable enrollment growth.

    Drive Enrollment With Faculty Voices

    At Archer Education, we partner with accredited institutions to help them leverage AI and faculty thought leadership to build their credibility and drive their enrollment growth. Contact our team to learn how our tech-enabled marketing and enrollment solutions can help your institution attract the right students more efficiently. 

    Sources

    Association to Advance Collegiate Schools of Business, “Greater Impact Through Faculty Thought Leadership”

    Online and Professional Education Association, “AI Tools Are Driving Prospective Student Decisions, UPCEA and Search Influence Research Shows”

    Online and Professional Education Association, “How Higher Education Marketing Metrics Help You Boost Enrollment”


  • Regurgitative AI: Why ChatGPT Won’t Kill Original Thought – Faculty Focus


  • Do we still value original thought?

    I have written the piece that you are now reading. But in the world of AI, what exactly does it mean to say that I’ve written it? 

    As someone who has either written or edited millions of words in my life, this question seems very important. 

    There are plenty of AI aids available to help me in my task. In fact, some are insinuating themselves into our everyday work without our explicit consent. For example, Microsoft inserted a ‘Copilot’ into Word, the programme I’m using. But I have disabled it. 

    I could also insert prompts into a service such as ChatGPT and ask it to write the piece itself. Or I could ask the chatbot direct questions and paste in the answers. Everybody who first encounters these services is amazed by what they can do. The ability to synthesise facts, arguments and ideas and express them in a desired style is truly extraordinary. So it’s possible that using chatbots would make my article more readable, or accurate or interesting.

    But in all these cases, I would be using, or perhaps paraphrasing, text that had been generated by a computer. And in my opinion, this would mean that I could no longer say that I had written it. And if that were the case, what would be the point of ‘writing’ the article and putting my name on it?

    Artificial intelligence is a real asset.

    There is no doubt that we benefit from AI, whether it is in faster access to information and services, safer transport, easier navigation, diagnostics and so on. 

    Rather than a revolution, the ever-increasing automation of human tasks seems a natural extension of the expansion of computing power that has been under way since the Second World War. Computers crunch data, find patterns and generate results that simulate those patterns. In general, this saves time and effort and enhances our lives.

    So at what point does the use of AI become worrying? To me, the answer is in the generation of content that purports to be created by specific humans but is in fact not. 

    The world of education is grappling with this issue. AI gathers information, orders and analyses it, and can answer questions about it, whether in papers or in other formats. In other words, it can perform all the tasks that a student is supposed to perform!

    At the simplest level, students can ask a computer to do the work and submit it as their own. Schools and universities have means to detect this, but there are also ways to avoid detection. 

    The human touch

    From my limited knowledge, text produced with the help of AI can seem sterile, distanced from both the ‘writer’ and the topic. In a word, dehumanised. And this is not surprising, because it is written by a robot. How is a teacher to grade a paper that seems to have been produced in this way?

    There is no point in moralising about this. The technologies cannot be un-invented. In fact, tech companies are investing hundreds of billions of dollars in vast amounts of additional computing power that will make robots ever more present in our lives. 

    So schools and universities will have to adjust. Some of the university websites that I’ve looked at are struggling to produce straightforward, coherent guidance for students. 

    The aim must be, on the one hand, to enable students to use all the available technologies to do their research, whether the goal is to write a first-year paper or a PhD thesis, and on the other hand to use their own brains to absorb and order their research, and to express their own analysis of it. They need to be able to think for themselves. 

    Methods to prove that they can do this might include handwritten exams or viva voce interviews. Clearly, these would work for many students and many subjects, but not for all. On the assumption that all students are going to use AI for some of their tasks, the onus is on educational establishments to find new ways to make sure that students can absorb information and express their analysis on their own.

    If schools and universities can’t do that, there would be no point in going to university at all. Obtaining a degree would have no meaning and people would be emerging from education without having learned how to use their brains.

    Can bots break a news story?

    Another controversial area is my own former profession, journalism. Computers have subsumed many of the crafts that used to be involved in creating a newspaper. They can make the layouts, customise outputs, match images to content, and so on. 

    But only a human can spot what might be a hot political story, or describe the situation on the ground in Ukraine.  

    Journalists are right to use AI for many purposes, for example to discover stories by analysing large sets of data. Meanwhile, more menial jobs involving statistics, such as writing up companies’ financial results and reporting on sports events, could be delegated to computers. But these stories might be boring and could miss newsworthy aspects, as well as the context and the atmosphere. Plus, does anybody actually want to read a story written by a robot?

    Just like universities, serious media organisations are busy evolving AI policies so as to maintain a competitive edge and inform and entertain their target audiences, while ensuring credibility and transparency. This is all the more important when the dissemination of lies and fake images is so easy and prevalent. 

    Can AI replace an Ai Weiwei? 

    The creative arts are also vulnerable to AI-assisted abuse. It’s so easy to steal someone’s music, films, videos, books, indeed all types of creative content. Artists are right to appeal for legal protection. But effective regulation is going to be difficult.  

    There are good reasons, however, for people to regulate themselves. Yes, AI’s potential uses are amazing, even frightening. But it gets its material by trawling every possible type of content it can find on the internet.

    That content is, by definition, second hand. The result of AI’s trawling of the internet is like a giant bowl of mush. Dip your spoon into it, and it will still be other people’s mush. 

    If you want to do something original, use your own brain to do it. If you don’t use your own intelligence and your own capabilities, they will wither away.

    And so I have done that. This piece may not be brilliant. But I wrote it.


    Questions to consider:

    1. If artificial intelligence writes a story or creates a piece of art, can that be considered original?

    2. How can journalists use artificial intelligence to better serve the public?

    3. In what ways do you think artificial intelligence is more helpful or harmful to professions like journalism and the arts?



  • If memory is the residue of thought, what are we learning from AI?

    • This is an edited version of a speech given by Josh Freeman, HEPI Policy Manager, to the Cardiff University Biochemical Society Sponsored Seminar Series on AI.

    I want to start with a thought experiment – one that will be on familiar ground for many of us. A lecturer sets an assignment and receives two student essays which are very similar in argument, structure, originality and so on. The difference is that one student used AI and the other didn’t.

    The first student used AI, as more than half of students (51%) do, to save time. They knew what they wanted to say, wrote a bullet-pointed list, fed this into ChatGPT and asked it to generate an essay ‘in the style of a 2nd year Biosciences student’ – which we know students are doing. Perhaps they added some finishing touches, like a bit of their own language.

    The second student wrote their essay the old-fashioned way – they wrote a plan, then turned that into a draft, redrafted it, tweaked it and manually wrote their references.

    The question is: Which essay should we value more? They are functionally the same essay – surely we should value them equally?

    I don’t mean which essay should get the higher mark, or whether the student who used AI was cheating. Let’s assume for the moment that what they did was within the rules for this particular course. What I mean is: which essay better shows the fulfilment of the core purposes of a university – instilling intellectual curiosity, critical thinking, and personal development in our students?

    I think most of us would instinctively say that something has been lost for the student who used AI. We don’t value students as content creators. We don’t see the value in the essay for its own sake – after all, many of us have seen hundreds or thousands of similar essays in our time in academia. What we value is the process that got the student to that point. There is something fundamental about the writing process: in the act of writing, you are forced to confront your own thoughts, express them, and sit with them. You have to consider how far you really agree with them, or whether something is missing. Though the student who used AI produced the same end result, they didn’t have that same cognitive experience.

    AI is, for the first time, divorcing the output from much of the cognitive process required to get that output. Before AI, if a student submitted an essay, you could be relatively confident – barring the use of essay mills or plagiarism – that they had thought deeply, or at least substantially, about the output they submitted. The content was a good proxy for the process. But with AI, it’s remarkably easy to generate the content without engaging in the process.

    I was a teacher previously, and the mantra we were told again and again was ‘Memory is the residue of thought.’ (With credit to Daniel Willingham.) We remember what we think about. When you have to sit with an essay, or a difficult academic text, it fosters more learning because your brain is working harder. If you can fast-track the essay or just read a summary of the important bits of the text, you skip the work, but you also skip the learning.

    This is a problem for all kinds of reasons, some of which I’ll go into. But in another way, it may also be a good thing. For a long time, the focus has been on the content that students produce, as the best marker of a student’s skills and knowledge. But I hope that AI will force us to think deeply about what process we want students to go through.

    In the time I have left, I want to touch on a few issues raised by our recent survey, showing that the vast majority of students use generative AI, including to help with their assessments.

    The first is that the rabbit is out of the hat. Almost all students are using AI, for a rich variety of purposes, and almost certainly whether or not we tell them they can. That will be obvious to anyone who has received a coursework submission in the last 18 months, but it is so key that it is worth emphasising. Barring the withdrawal of large language models like ChatGPT from the internet (unlikely) or the mass socialisation of our students away from GenAI use (also unlikely, but less so), AI is here to stay.

    The second is that the system of academic assessment developed over decades or more is suddenly and catastrophically not fit for purpose. Again, this will be known to many, but I am not sure the sector has fully grappled with its implications. All assessments had some level of insecurity, insofar as essay mills and contract cheating existed, but we have always felt those methods were used by relatively few students, and we were able to pass national legislation to crack down on them.

    AI is different for two reasons. The first is ease of use – the barriers of seeking out an essay mill and coughing up the money are gone (though it remains true that the most powerful AI models still have a cost). The second is how students reckon with the moral implications. It is clear to almost everyone, I think, that using an essay mill is breaking the rules, so students would usually only use these when they are truly desperate. But AI is different. We saw in the report that there is great uncertainty when it comes to using AI – lots of disagreement about what is and is not acceptable. When it’s cloudy in this way, it’s easier to justify to yourself that what you’re doing is okay. Most people won’t overtly ‘cheat’, but they might push on hazy boundaries if they can tell a story about why it is acceptable to do so.

    So all of our assessments need to be reviewed. I recently read an essay from UCL Law School, talking about how they will be using 50-100% ‘secure’ assessment, meaning in-person written or oral exams. This is a good start, though it may not even be enough if 50% of your assessments are ‘hackable’ by students with little or no subject knowledge or with no grasp of the skills you are meant to be teaching them. And I am not convinced that ‘secure’ exams are always so secure. If essay questions are predictable, you can easily use AI to generate some mock essays for you and memorise them, for example.

    This is also why the claims that AI will generate huge efficiency gains for the sector are misplaced, at least in the short term. In the coming years, AI will put huge strain on the sector. Essentially, we are asking all of our staff to be experts in AI tools, even as the tools themselves constantly update. For example, AI tools hallucinate a lot less than they used to and they also produce fake references much less often – and there are now specific tools designed to produce accurate references (such as ChatGPT’s Deep Research or Perplexity.AI). It is an open question whether this radical redrawing of assessment is a reasonable ask of the sector at a time when budgets are tight and cuts to staffing are widespread – up to 10,000 jobs lost by the end of the academic year, by some estimates.

    The third issue returns to the thought experiment I presented you with at the start. We will now be forced to think deeply about what skills we want our students to have in an age where AI tools are widely accessible, and then again about how we give our students those skills.

    Think again of those two essays, one of which used AI and one of which didn’t. There is an argument in favour of the AI-assisted essay if you particularly value teaching AI skills and you think getting AI to help with essays is one way to build those skills. But like developing AI-proof assessments, this is a moving target. Some people will remember the obsession with ‘prompt engineering’ in the early days of GenAI – carefully crafting prompts to coax very specific answers from chatbots – only for the models to update and render all that work useless. Because they are natural language models, chatbots are frequently very intuitive to use and will only become more so. So it is not at all clear that even the best AI courses available now will still be useful a few years into students’ long and varied careers.

    The same problem applies to courses designed to teach students the limits of AI – such as bias, the use of data without permission, hallucinations, environmental degradation and other challenges which we are hearing lots about. Small innovations could mean, for example, that the environmental cost of AI falls dramatically. There is already some research saying a typical ChatGPT prompt may now use no more energy than a Google search. In a few years’ time, we may be dealing with a very different set of problems and students’ knowledge will be out of date.

    I can’t pretend HEPI has all the answers – though we do have many, and we require all of our publications to include policy solutions, which you are welcome to investigate on our website. But my view is that the skills students gain from a university education – critical thinking, problem solving, working as a team, effective communication, resilience – are as critical as ever. In particular, we will probably need to home in on those skills that AI cannot easily replicate – soft skills such as motivating others and building trust, emotional intelligence, critical thinking – which will endure in importance even as AI automates other tasks.

    But the methods we use will need to change. We hear a lot from academics, for example, about the enormous administrative burden they face. In my view, the best case is that AI automates the boring bits of all our jobs – paperwork, producing lesson materials, generating data – freeing us up to do what matters: producing innovative research and spending more time with students. That will make sure AI enhances, rather than threatens, the enormous benefits our degrees impart to students in the coming years.


  • Spring 2025 Inclusive Growth and Racial Equity Thought Leadership Lecture Series (Howard University)

    Scheduled for Feb 20, 2025. The Spring 2025 Inclusive Growth and Racial Equity Thought Leadership Lecture Series will feature a fireside chat with Dr. Ibram X. Kendi, Andrew W. Mellon Professor in the Humanities, Professor of History, Director of the BU Center for Antiracist Research, and National Book Award-winning author.


  • Professor Farid Alatas on ‘The captive mind and anti-colonial thought’

    by Ibrar Bhatt

    On Monday 2 December 2024, during the online segment of the 2024 SRHE annual conference, Professor Farid Alatas delivered a thought-provoking keynote address in which he emphasised an urgent need for the decolonisation of knowledge within higher education. His lecture was titled ‘The captive mind and anti-colonial thought’ and drew from the themes of his numerous works including Sociological Theory Beyond the Canon (Alatas, 2017).

    Alatas called for a broader, more inclusive framework for teaching sociological theory and stressed its importance for contemporary higher education. For Alatas, this framework should move beyond the Eurocentric and androcentric focus of traditional curricula and integrate framings and concepts from non-Western thinkers (including women) to establish a genuinely international perspective.

    In particular, he discussed his detailed engagement with the neglected social theories of Ibn Khaldun and his efforts to develop a ‘neo-Khaldunian theory of sociology’. He also highlighted another exemplar of non-Western thought, the Filipino theorist José Rizal (see Alatas, 2009, 2017). Alatas discussed how such modes of non-Western social theory should be incorporated into social science textbooks and teaching curricula.

    Professor Alatas further argued that continuing to rely on theories and concepts from a limited group of countries—primarily Western European and North American—imposes intellectual constraints that are both limiting and potentially harmful for higher education. Using historical examples, such as the divergent interpretations of the Crusades (viewed as religious wars from a European perspective but as colonial invasions from a Middle Eastern perspective), he illustrated how perspectives confined to the European experience often fail to account for the nuanced framing of such events in other regions. Such epistemic blind spots stress the need for higher education to embrace diverse ways of knowing that have long existed across global traditions.

    Beyond critiquing Eurocentrism, Professor Alatas acknowledged the systemic challenges within institutions in the Global South, which also inhibit knowledge production. He urged for inward critical reflection within these contexts, addressing issues like resource constraints, institutional biases, racism, ethnocentrism, and the undervaluing of indigenous epistemologies through the internalisation of a ‘captive mindset’. Only by addressing these intertwined challenges, he concluded, can universities foster a more equitable and inclusive intellectual environment, and one that is more practically relevant and applicable to higher education in former colonised settings.

    This keynote was a call to action for educators, researchers, and institutions to rethink and restructure the ways in which sociological and other academic canons are constructed and taught. But first, there is an important reflection that must be undertaken, and an acknowledgement, grounded in epistemic humility, that there is more to social theory than Eurocentrism.

    There was not enough time to deeply engage with some of the concepts in his keynote; therefore, I hope to invite Professor Farid Alatas for an in-person conversation on these topics during his visit to the UK in 2025. Please look out for this event advertisement.

    The recording of this keynote address is now available from https://youtu.be/4Cf6C9wP6Ac?list=PLZN6b5AbqH3BnyGcdvF5wLCmbQn37cFgr

    Ibrar Bhatt is Senior Lecturer at the School of Social Sciences, Education & Social Work at Queen’s University Belfast (Northern Ireland). His research interests encompass applied linguistics, higher education, and digital humanities. He is also an Executive Editor of the journal ‘Teaching in Higher Education: Critical Perspectives’ and a member of the Editorial Board of the journal ‘Postdigital Science & Education’.

    His recent books include ‘Critical Perspectives on Teaching in the Multilingual University’ (Routledge) and ‘A Semiotics of Muslimness in China’ (Cambridge University Press). He is currently writing his next book, ‘Heritage Literacy in the Lives of Chinese Muslims’, which will be published next year by Bloomsbury.

    He was a member of the Governing Council of the Society for Research into Higher Education from 2018 to 2024, convened its Digital University Network from 2015 to 2022, and is currently the founding convener of the Society’s Multilingual University Network.

    References

    Alatas SF (2009) ‘Religion and reform: Two exemplars for autonomous sociology in the non-Western context’. In: Patel S (ed) The International Handbook of Diverse Sociological Traditions. London: Sage, pp 29–39

    Alatas SF (2017) ‘José Rizal (1861–1896)’. In: Alatas SF and Sinha V (eds) Sociological Theory Beyond the Canon. London: Palgrave Macmillan, pp 143–170

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education
