
  • Affective Intelligence in Artificial Intelligence

    Looking back on my lifelong history of learning experiences, the ones I would rank as most effective and memorable were those in which the instructor truly saw me, understood my motivations and encouraged me to apply the learning to my own circumstances. This critical aspect of teaching and learning is included in almost every meaningful pedagogical approach. We commonly recognize that the best practices of our field include a sensitivity to and understanding of the learner’s experiences, motivations and goals. Without responding to the learner’s needs, we fall short of the shared goal of having learners internalize whatever learning takes place.

    Some might believe that AI, as a computer-based system, merely addresses the facts, formulas and figures of quantitative learning rather than engaging with the learner in an emotionally intelligent way. That may have been true in AI’s initial development; today, however, AI has developed the ability to recognize and respond to the emotional aspects of a learner’s responses.

    In September 2024, the South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference included research by four professors from the University of West Attica in Egaleo, Greece—Theofanis Tasoulas, Christos Troussas, Phivos Mylonas and Cleo Sgouropoulou—titled “Affective Computing in Intelligent Tutoring Systems: Exploring Insights and Innovations.” The authors described the importance of including affective engagement into developing learning systems:

    “Integrating intelligent tutoring systems (ITS) into education has significantly enriched personalized learning experiences for students and educators alike. However, these systems often neglect the critical role of emotions in the learning process. By integrating affective computing, which empowers computers to recognize and respond to emotions, ITS can foster more engaging and impactful learning environments. This paper explores the utilization of affective computing techniques, such as facial expression analysis and voice modulation, to enhance ITS functionality. Case studies and existing systems have been scrutinized to comprehend design decisions, outcomes, and guidelines for effective integration, thereby enhancing learning outcomes and user engagement. Furthermore, this study underscores the necessity of considering emotional aspects in the development and deployment of educational technology to optimize its influence on student learning and well-being. A major conclusion of this research is that integration of affective computing into ITS empowers educators to customize learning experiences to students’ emotional states, thereby enhancing educational effectiveness.”

    In a special edition of the Journal of Education Sciences published in August 2024, Jorge Fernández-Herrero writes in a paper titled “Evaluating Recent Advances in Affective Intelligent Tutoring Systems: A Scoping Review of Educational Impacts and Future Prospects,”

    “Affective intelligent tutoring systems (ATSs) are gaining recognition for their role in personalized learning through adaptive automated education based on students’ affective states. This scoping review evaluates recent advancements and the educational impact of ATSs, following PRISMA guidelines for article selection and analysis. A structured search of the Web of Science (WoS) and Scopus databases resulted in 30 studies covering 27 distinct ATSs. These studies assess the effectiveness of ATSs in meeting learners’ emotional and cognitive needs. This review examines the technical and pedagogical aspects of ATSs, focusing on how emotional recognition technologies are used to customize educational content and feedback, enhancing learning experiences. The primary characteristics of the selected studies are described, emphasizing key technical features and their implications for educational outcomes. The discussion highlights the importance of emotional intelligence in educational environments and the potential of ATSs to improve learning processes.”

    Notably, agentic AI models have been assigned tasks to monitor and provide adaptations to respond to the changing emotions of learners. Tom Mangan wrote last month in an EdTech article titled “AI Agents in Higher Education: Transforming Student Services and Support,”

    “Agents will be able to gather data from multiple sources to assess a student’s progress across multiple courses. If the student starts falling behind, processes could kick in to help them catch up. Agents can relieve teachers and administrators from time-consuming chores such as grading multiple-choice tests and monitoring attendance. The idea is catching on. Andrew Ng, co-founder of Coursera, launched a startup called Kira Learning to ease burdens on overworked teachers. ‘Kira’s AI tutor works alongside teachers as an intelligent co-educator, adapting in real-time to each student’s learning style and emotional state,’ Andrea Pasinetti, Kira Learning’s CEO, says in an interview with The Observer.”

    We are no longer limited to transactional chatbots that respond to students’ questions without regard to their background, whether academic, experiential or even emotional. Using the capabilities of advanced AI, our learning engagements can analyze, identify and adapt to a range of learner emotions. These capabilities are often the hallmark of excellent, experienced faculty members who do not teach only to the median of the class but instead offer personalized responses to meet the interests and needs of individual students.

    As we look ahead to the second half of this semester, and to succeeding semesters, we can expect enhanced technology to help us better serve our learners. We will be able to identify growing frustration where it arises, or recognize the opportunity to accelerate the pace of the learning experience when learners display comfort with the learning materials and a readiness to advance ahead of others in the class.

    We all recognize that this field is moving very rapidly. It is important that we have leaders at all levels who are prepared to experiment with the emergent technologies, demonstrate their capabilities and lead discussions on the potential for implementations. The results can be most rewarding, with a higher percentage of learners more comfortably reaching their goals. Are you prepared to take the lead in demonstrating these technologies to your colleagues?

  • New HEPI Policy Note: Using Artificial Intelligence (AI) to Advance Translational Research

    Authors:
    Rose Stephenson and Lan Murdock

    A new report by HEPI and Taylor & Francis explores the potential of AI to advance translational research and accelerate the journey from scientific discovery to real-world application. 

    Using Artificial Intelligence (AI) to Advance Translational Research (HEPI Policy Note 67), authored by Rose Stephenson, Director of Policy and Strategy at HEPI, and Lan Murdock, Senior Corporate Communications Manager at Taylor & Francis, draws on discussions at a roundtable of higher education leaders, researchers, AI innovators and funders, as well as a range of research case studies, to evaluate the future role of AI in translational research. 

    The report finds that AI has the potential to strengthen the UK’s translational research system, but that realising these benefits will require careful implementation, appropriate governance and sustained investment. 

    You can find the press release and read the full report here.

  • Artificial Intelligence, Mass Surveillance, and the Quiet Reengineering of Higher Education

    The Higher Education Inquirer has approached artificial intelligence not as a speculative future but as a present reality already reshaping higher education. Long before university leaders and consultants embraced Artificial Intelligence (AI) as an abstract promise, HEI was using these tools directly while documenting how they were being embedded into academic institutions. What has become increasingly clear is that AI is not merely an educational technology. It is a structural force accelerating corporatization, automation, and mass surveillance within higher education.

    Artificial intelligence enters the university through the language of efficiency and personalization. Administrators speak of innovation, student success, and institutional competitiveness. Yet beneath this language lies a deeper transformation. Teaching, advising, grading, counseling, and evaluation are increasingly reduced to measurable functions rather than human relationships. Once learning is fragmented into functions, it becomes easily automated, monitored, outsourced, and scaled.

    This shift has long been visible in for-profit and online institutions, where scripted instruction, learning management systems, predictive analytics, and automated advising have replaced meaningful faculty engagement. What is new is that nonprofit and elite universities are now adopting similar systems, enhanced by powerful AI tools and vast data collection infrastructures. The result is the emergence of the robocollege, an institution optimized for credential production, labor reduction, and data extraction rather than intellectual growth.

    Students are told that AI-driven education will prepare them for the future economy. In reality, many are being trained for an economy defined by automation, precarity, and diminished human agency. Rather than empowering students to challenge technological power, institutions increasingly socialize them to adapt to it. Compliance, constant assessment, and algorithmic feedback replace intellectual risk-taking and critical inquiry.

    These developments reinforce and intensify inequality. Working-class students, student loan debtors, and marginalized populations are disproportionately enrolled in institutions where AI-mediated education and automated oversight are most aggressively deployed. Meanwhile, elite students continue to receive human mentorship, small seminars, and insulation from constant monitoring. Artificial intelligence thus deepens a two-tier system of higher education, one human and one surveilled.

    Mass surveillance is no longer peripheral to higher education. It is central to how AI operates on campus. Predictive analytics flag students as “at risk” before they fail, often without transparency or consent. Proctoring software monitors faces, eye movements, living spaces, and biometric data. Engagement dashboards track clicks, keystrokes, time spent on screens, and behavioral patterns. These systems claim to support learning while normalizing constant observation.

    Students are increasingly treated as data subjects rather than citizens in a learning community. Faculty are pressured to comply with opaque systems they did not design and cannot audit. The data harvested through these platforms flows upward to administrators, vendors, private equity-backed education companies, and, in some cases, government and security-linked entities. Higher education becomes a testing ground for surveillance technologies later deployed across workplaces and society at large.

    At the top of the academic hierarchy, a small group of elite universities dominates global AI research. These institutions maintain close relationships with Big Tech firms, defense contractors, and venture capital interests. They shape not only innovation but ideology, presenting AI development as inevitable and benevolent while supplying talent and legitimacy to systems of automation, surveillance, and control. Ethics initiatives and AI principles proliferate even as accountability remains elusive.

    Cultural warnings about technological obsolescence no longer feel theoretical. Faculty are told to adapt or be replaced by automated systems. Students are told to compete with algorithms while being monitored by them. Administrators frame automation and surveillance as unavoidable. What is absent from these conversations is moral courage. Higher education rarely asks whether it should participate in building systems that render human judgment, privacy, and dignity increasingly expendable.

    Artificial intelligence does not have to dehumanize higher education, but resisting that outcome requires choices institutions have largely avoided. It requires valuing human labor over scalability, privacy over control, and education as a public good rather than a data pipeline. It requires democratic governance instead of technocratic management, and an end to surveillance by default.

    For years, the Higher Education Inquirer has examined artificial intelligence not as a neutral tool or a distant threat, but as a technology shaped by power, profit, and institutional priorities. The future of higher education is not being determined by machines alone. It is being determined by decisions made by university leaders, technology firms, and policymakers who choose surveillance and efficiency over humanity.

    The question is no longer whether AI will reshape higher education.

    The question is whether higher education will resist becoming a fully surveilled system that trains students to accept a monitored, automated, and diminished future.


    Sources

    Higher Education Inquirer, Robocolleges, Artificial Intelligence, and the Dehumanization of Higher Education

    Higher Education Inquirer, AI-Robot Capitalists Will Destroy the Human Economy (Randall Collins)

    Higher Education Inquirer, University of Phoenix: Training Folks for Robowork

    Higher Education Inquirer, “The Obsolete Man”: A Twilight Zone Warning for the Trump Era and the Age of AI

    Higher Education Inquirer, Stanford, Princeton, and MIT Among Top U.S. Universities Driving Global AI Research (Studocu)

    Higher Education Inquirer, Tech Titans, Ideologues, and the Future of American Higher Education — 2026 Update

  • Nicola Rollock: Progress on racial justice and equity in higher education is “artificial”

    Nearly seven years ago, in February 2019, UCU published Staying Power, an investigation into the professional experiences of 20 Black women professors in UK higher education, authored by Nicola Rollock. At the time, Black women professors in the UK numbered only 25.

    Against the backdrop of an often highly hierarchical higher education academic culture that assumes capacity for high workloads, and with numerous unwritten codes of conduct, many of Rollock’s respondents documented instances of bullying, racial stereotyping, low-level aggressive behaviour and the constant tacit expectation to prove themselves, leading to feelings of stress, anxiety, exhaustion and burnout. But despite these experiences, they had navigated a career path to professorship, adopting strategies to advance their careers, while absorbing setbacks and blockages strewn in their paths.

    In the intervening years, the conversation about race, equity and higher education intensified. Later in 2019, recent graduates Chelsea Kwakye and Ore Ogunbiyi published Taking Up Space, which documented their experiences as Black students at the University of Cambridge. In October of that year the Equality and Human Rights Commission published the findings of a national investigation into racial harassment in universities.

    The UK higher education sector was pursuing action on race awarding gaps, and developing the Race Equality Charter to embed anti-racist practice in institutions. Students’ unions campaigned for ethnic and cultural diversity in the curriculum, and for bursaries and additional support to open up pathways for Black students into research careers. Senior appointments were made to spearhead equality, diversity and inclusion, and commitments to change were published. In 2020, Rollock curated Phenomenal women: portraits of Black female professors, a landmark photography exhibition at London Southbank Centre which then went on to be displayed at the University of Cambridge.

    The conversation reached a peak in the wake of the global outcry following the murder of George Floyd in the US in the early months of the Covid-19 pandemic and during the ensuing Black Lives Matter protests. And while it was understood that work on anti-racism was often slow, and under-resourced, there was a sense at the time that some in the sector were prepared both to confront its history and adjust its practice and culture in the present.

    Looking around today, the picture seems much more muted. There’s been political backlash against the Black Lives Matter movement, and against the notion of institutional and structural racism more generally. “Woke” is more frequently heard as a term of criticism rather than approbation. And though 97 institutions have signed up to the Race Equality Charter and work on awarding gaps has been integrated into access and participation policy, the sense of urgency in the national anti-racism agenda has ebbed.

    What lies beneath the cycle

    For Nicola Rollock, who now divides her time between a professorship in social policy and race at King’s College London, and consultancy and public speaking, this cycle is nothing new. Earlier in her career she was commissioned by the Runnymede Trust to investigate the extent to which the recommendations of the Macpherson inquiry (which followed the murder of Stephen Lawrence and the failure of the Metropolitan Police to bring his killers to justice) had been implemented in the decade following its publication.

    “Some of the recommendations were relatively straightforward: senior investigating officers (SIOs) should be appointed when there is a murder investigation – tick. Families should be assigned a family liaison officer when they have experienced a murder – tick,” she says. “But the recommendations pertaining to race – disparities in stop and search, the recruitment, retention and progression of Black and minority ethnic officers – the data had barely moved over the ten-year period between 1999 and 2008–09. I was stunned. At the time, I couldn’t understand how that was possible.”

    Rollock’s subsequent work has sought to explain why, despite periodic bouts of collective will to action on racism, it persists – and to lay bare the structures and behaviours that allow it to persist even as the white majority claims to be committed to eradicating it. In 2023 she published The Racial Code – a genre-busting tour de force that forensically unpacks the various ways that organisations and individuals perform racial justice in ways that continually fail to achieve a meaningful impact, told through the medium of short stories and vignettes that offer insight into what it feels like to experience racism.

    One story in particular, set in a university committee meeting, at which a Black academic is finally awarded a long-awaited (and inadequate) promotion, and responds in the only way she feels is open to her, offers a particularly forceful insight into the frustration felt by Black women in academia at what can feel like being simultaneously undervalued and expected to be grateful to be there at all. Recurring motifs throughout the book, such as the Count Me In! diversity awards – embraced with enthusiasm by white characters and viewed with deep scepticism by Black ones – demonstrate the ways that while racism may manifest subtle differences across different contexts and industries, it thrives everywhere in shallow and performative efforts to tackle it.

    For Rollock, the choice of fiction as a medium is a deliberate effort to change hearts as well as minds. Though each of the propositions offered in her stories is grounded in evidence – they are, indeed, the opposite of fictional – the story format affords much greater opportunity for fostering empathetic understanding:

    Many of us know the data, we know the headlines, but we don’t know about the people behind the headlines: what is it like to be part of a group that is under-represented? How does it feel to be overlooked for promotion despite possessing the right qualifications and experience? I don’t think we truly understand what it is to fight, to strategise, to manage disappointment predicated on the colour of one’s skin. For me, storytelling is a way of providing that connection. It is a way of giving life to feelings.

    For white readers, The Racial Code offers a glimmer of insight into the experience of marginalisation. And for Black readers, it offers a language and a way of understanding and giving coherence to experiences of racism.

    Where we are now

    Here, Nicola Rollock offers her often sobering reflections on the last six years in response to my prompts – sharing her observations of the same patterns of injustice she has been analysing throughout her career.

    Debbie McVitty: Since 2019–20 we’ve seen a lot of focus on EDI in universities and on racial justice specifically – a number of senior appointments, public commitments, working groups and initiatives. And then, the political backlash, the anti-woke agenda, the attacks on “DEI” – how do you make sense of the period we’ve been through? Has there been “progress”? How should we understand the nature of that progress, if so? And what do we need to be wary of?

    Nicola Rollock: I have long been interested in why change happens at certain moments: what are the factors that enable change and what is the context in which it is most likely to occur. This is largely influenced by my work on the Stephen Lawrence Inquiry when, as a young researcher, I believed that we were at a historic turning point when it came to racial justice only to see, in 2009, political commitment subsequently and deliberately wane.

    In 2020, when George Floyd was murdered, I was simultaneously disturbed by what had happened and attentive to people’s reaction. Many white people described themselves as having “woken up” to the traumas of racism as a result of his death. Books on race and racism rapidly sold out and I couldn’t help but wonder, where on earth have you been, that you’re only waking up now? I – and others who work on these issues – have been sat in meetings with you, in board rooms, universities, in Parliament, have marched on the streets repeatedly making a case for our dignity, for respect, for equity – and it is only now that you decide that you are waking up?

    What happened around Floyd deeply occupied my mind. For a long time, I played with the idea of a film set in a dystopian future where Black communities agree to deliberately sacrifice the life of a Black man or woman every five years to be murdered by a white person in the most horrific of circumstances. The ordeal would be recorded and shared to ensure broad reach and the fact of the crime would have to be unequivocal to ensure that white minds were convinced by the stark racist brutality of what had occurred.

    The aim of the sacrifice? To keep the fact of racism alive in the minds of those who, by and large, have the most power to implement the type of change that racially minoritised groups demand. This dynamic is in itself, of course, perverse: the idea of begging for change that history indicates is unlikely to come in the form that we want. The approach, then, must be not to beg for change but to enable or force it in some other, more agentic way that centres our humanity, our dignity and our wellbeing.

    Moving back to reality, I would argue that there has been a complacency on the part of liberal whites about the prevalence and permanence of racism and how it operates which is why so many were shocked and awakened when Floyd was murdered. This complacency is also endemic within politics. Politicians on the left of the spectrum have not shown sufficient competence or leadership around racial justice and have failed to be proactive in fostering equity and good relations between communities. Those on the right continue to draw on superficial markers to indicate racial progress, such as pointing to the ethnic mix of the Cabinet, or permitting flimsy and dangerous comments about racism or racially minoritised communities to persist.

    Both sets of positions keep us, as a society, racially illiterate and naive and bickering amongst ourselves while the radical right builds momentum with a comparatively strong narrative. We are now in a position where those on the left and the right of the political spectrum are acting in response to the radical right. These are dark times.

    Universities themselves are, of course, subject to political pressure and regulation but even taking account of this, I would argue that the lens or understanding of racial justice within the sector is fundamentally flawed. Too often, universities achieve awards or recognition for equity-related initiatives which are then (mis)used as part of their PR branding even while their racially minoritised staff continue to suffer. Or artificial targets are established as aspirational benchmarks for change.

    This is most evident in the discussions surrounding the representation of Black female professors. In the years following my research, I have observed a fixation with increasing the number of these academics while ignoring their actual representation. So, for example, in 2019–20, the academic year in which Floyd was murdered, there were 40 Black female professors in total (i.e. UK and non-UK nationals) within UK universities. They made up just 2 per cent of the Black female academic population. Compared with other reported ethnic groups, Black female academics were the least likely of all female academics to be professors as a proportion of their population.

    Fast forward to the 2022–23 figures, which were published in 2024 and are the most recent available at the time of this interview. They show that the number of Black female professors increased to 55, but when we look at their representation, only 1.8 per cent of Black female scholars were professors – a decrease from 2019–20. And, in both academic years, Black female professors made up the smallest percentage of the female professoriate overall (0.6 per cent in 2019–20 and 0.8 per cent in 2022–23). In other words, the representation of Black female professors as a group remains relatively static in the context of changes to the broader professoriate. Numbers alone won’t show us this and, in fact, perpetuate a false narrative of progress. It indicates that current interventions to increase the representation of Black female professors are not working – or, at best, are maintaining the status quo – and we are overlooking the levers that really impact change.

    Universities themselves are responsible for this “artificial progress” narrative via their press releases, which too many of us are quick to consume as fact. For example, a university will announce the first Black professor of, say, Racially Marginalised Writing and we fall over ourselves in jubilation, ignoring the fact that the university and the academic choose the professorial title (it is arbitrary) and that there is a Black academic at the university down the road who is Professor of Global Majority Writing covering exactly the same themes as their newly appointed peer.

    The same can be said of press releases about appointments of the “youngest” professor within an institution or nationally. We never ask: the youngest of how many? How do you know, given that official statistics do not show race by age group? Look closely and you may well find that there are no more than, say, five Black professors at the institution, and that most were appointed in the last couple of years. Is being the youngest of five a radical enough basis for celebrating advancement? I would suggest not.

    Debbie McVitty: Staying power – like The Racial Code – was powerful in its capturing and articulation of the everyday frustrations and the burdens of being marginalised, but with the clear link to structural and organisational systems that enable those problematic interpersonal relationships and to some extent seem to allow or endorse their hiding in plain sight. How helpful is the concept of “lived experience” as data to prompt institutional change, or in what conditions is it most useful?

    Nicola Rollock: I am fundamentally uncomfortable with the phrase “lived experience.” In the context of race, the term forces underserved groups to pronounce their status – as if for inspection to satisfy the whims of others – when the fact is it is those others who are not being sufficiently attentive to inequity. We end up compensating for their failures. My concern with regard to race is that lived experience becomes the benchmark for intervention and standards: it is seen as sufficient that an initiative about race includes or is led by some Black people, irrespective of their subject specialism or expertise. The fact that racial justice is a subject specialism is ignored. When we foreground lived experience over subject specialism, the objective is not real change; it is tokenism. I would like to see the subject of racial justice treated with the same degree of rigour and seriousness as we treat, say, science or mathematics.

    Debbie McVitty: Another really critical theme across both Staying Power and The Racial Code is agency – the coping tactics and strategies Black women (and men) use to function in what they can often experience as a hostile, toxic cultural environment, whether that’s seeking out allies, being highly strategic and dogged about promotion processes, developing their own analytical framework to help them make sense of their experience, and so on. Covid in particular drove a conversation about work-life balance, wellbeing and compassionate leadership – do you think Black women in academia have been in a position to benefit from any of that? Have the go-to coping strategies changed as a result?

    Nicola Rollock: Universities are not places which foreground well-being. Lunchtime yoga sessions or tips about how to improve work-life balance tend to be rendered meaningless in a context where concerns about financial stability, student numbers, political unrest and national and international performance tables take precedence. So many of us have filled in forms aimed at capturing how we spend our time as academics while being aware that they are performative: they do not reflect the breadth of the activities that really take up our time.

    I find that Black scholars are often contacted to save failed relationships between white supervisors and Black doctoral students or to offer mentorship and support to Black students and junior colleagues. Then there are reference requests from Black scholars from across the globe who you want to support in the spirit of fighting the system and giving back. And this can be on top of the organisational challenges that you yourself are facing. None of this is documented anywhere. We don’t receive time off in lieu or financial bonuses for this work. It often sits casually under the often uninterrogated banner of “service.” In short, if anyone is interested in work-life balance, they should avoid academia.

    Debbie McVitty: One of the things we have unfortunately learned from the past six years is that engagement with racial justice does tend to ebb and flow and is subject to political winds and whims. What can be done to keep institutional leadership focused on these issues and keep working on building more just institutions? How can racial justice work become more sustainable?

    Nicola Rollock: Public and political commitment to EDI, or what we might think of more broadly as equity, tends to move in waves and as a reaction to external pressures or pinch points. This is concerning for several reasons, not least because it ignores the data and evidence about the persistence of inequity, whether by social class, gender, or disability.

    Commitment to advancing racial justice varies depending on one’s racial identity and understanding of the issues. Institutions will only engage with it seriously if they are compelled to do so and if there are consequences for not doing so. We saw this with the awarding gap.

    I would also say, perhaps controversially, that we racially minoritised groups need to more readily accept the history and characteristics of racial injustice. For example, if a white senior leader says they refuse to accept institutional racism, my view is that we should not spend our energy trying to convince them otherwise. We only deplete ourselves and waste time. Instead, look for pinch points or strategic points of intervention which might also work to that senior leader’s interests.

    We must also set more precise and more stringent goals as our ambitions for racial progress, and not allow our desperation for change to lessen our standards. For example, I have spent a considerable amount of time recently working in policing. Whenever something goes wrong around race, there are those who demand the Commissioner’s resignation. Why? Do we really think the next person to be appointed is going to offer a miracle transformation on race? And what influence do we really have on the appointment process? I am not opposed to calling for anyone’s resignation, but it has to be done as part of a carefully thought-through, strategic plan as opposed to being an act of frustration. I am aware, however, that acts of frustration make better meat for newspaper headlines than my efforts to foreground strategy and radical change.

    There is a further point that your question does not speak to which is the need for self-affirmation and self-care. I think we need to be better at working out what we want for ourselves that is not contingent on our arguing with white stakeholders and which holds on to and foregrounds our dignity, well-being and humanity. This is something I wish I had understood before I entered the workplace and specialised in social policy and race. As much as I love research, it would have probably led to my making different career choices.

    One key way in which I believe this work can be sustained is by paying closer attention to our “Elders” – those academics, activists and campaigners who have already fought battles and had arguments from which we should learn and build. I would like to see greater integration and connection between what we plan to do today and tomorrow and what happened yesterday.

    Source link

  • Experts react to artificial intelligence plan – Campus Review

    Experts react to artificial intelligence plan – Campus Review

    Australia’s first national plan for artificial intelligence aims to upskill workers to boost productivity, but will leave the tech largely unregulated and without its own legislation to operate under.



  • How Artificial Intelligence Is Reshaping College Planning

    How Artificial Intelligence Is Reshaping College Planning

    What does the latest research tell us about students using AI for college planning?

    If you have spent time with today’s high school students, you know their college search journey looks nothing like it did ten, or even five, years ago. A glossy brochure or a well-timed postcard still has a place. However, the first “hello” increasingly comes through a digital assistant, a TikTok video, or a quick artificial intelligence–powered search.

    Let us not pretend artificial intelligence (AI) is everyone’s new best friend. Some students are eager, some are eye-rolling, and plenty are stuck in the “maybe” camp. That mix of excitement and hesitation is real, and it deserves as much attention as the hype.

    The data is clear: nearly half of students (45 percent) have already used a digital AI assistant on a college website, with usage peaking among 9th- and 10th-graders (RNL, Halda, & Modern Campus, 2025). At the same time, a full third of students nationwide have turned to tools like ChatGPT to explore colleges, scholarships, and even essay help (RNL & Teen Voice, 2025).

    This trend is playing out nationwide, with major news outlets reporting that AI chatbots are becoming a common part of the college application process, assisting students with everything from brainstorming essays to navigating deadlines (Singer, 2023).

    For many students, AI is not futuristic; it is already woven into how they imagine, explore, and narrow their choices. Recent reporting confirms that AI-driven college search platforms are helping more students, especially those without access to personalized guidance, find the right fit and expand their options beyond what they might have considered on their own (Greenberg, 2025).

    Beyond RNL: What other research shows

    The RNL findings fit a much bigger story about how AI changes education. Around the world, researchers are watching students test, tinker, and sometimes wrestle with what these tools mean for learning and planning.

    One line of research looks at predictive modeling. Recent studies have shown that AI-driven platforms can analyze student data, grades, extracurricular activities, and demographics to predict which students are likely to pursue college and which might need extra support (Eid, Mansouri, & Miled, 2024). By flagging students at risk of falling off the college pathway, these predictive systems allow counselors to intervene earlier, potentially changing a student’s trajectory.
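    The predictive flagging described above can be sketched, under toy assumptions, as a simple weighted risk score over a few warning signals. The feature names, weights, and threshold below are all hypothetical, chosen only to illustrate the idea; a real system would learn them from historical outcome data.

    ```python
    # Toy sketch of predictive flagging for college-pathway support.
    # Illustrative only: weights and threshold are invented, not trained.
    from dataclasses import dataclass

    @dataclass
    class Student:
        name: str
        attendance_rate: float  # 0.0 - 1.0
        gpa: float              # 0.0 - 4.0
        aid_gap: float          # unmet financial need, in dollars

    def risk_score(s: Student) -> float:
        """Combine hypothetical warning signals into a 0-1 risk score."""
        score = 0.0
        score += 0.4 * (1.0 - s.attendance_rate)      # low attendance raises risk
        score += 0.4 * (1.0 - s.gpa / 4.0)            # low GPA raises risk
        score += 0.2 * min(s.aid_gap / 10_000, 1.0)   # unmet need raises risk
        return score

    def flag_for_outreach(students, threshold=0.35):
        """Return names of students whose score crosses the outreach threshold."""
        return [s.name for s in students if risk_score(s) >= threshold]

    students = [
        Student("Avery", attendance_rate=0.95, gpa=3.8, aid_gap=0),
        Student("Blake", attendance_rate=0.70, gpa=2.1, aid_gap=8_000),
    ]
    print(flag_for_outreach(students))  # only Blake crosses the threshold
    ```

    The point of even a crude score like this is the workflow it enables: a counselor reviews the flagged list and intervenes early, rather than the model making any decision on its own.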

    Another cluster of studies zeroes in on personalized guidance. Tools built around a student’s interests and goals can recommend classes, extracurriculars, and colleges that “fit” better than a generic list. This is especially important in schools where one counselor may juggle hundreds of students (Majjate et al., 2023).

    Meanwhile, students are already using AI, sometimes in ways that make their teachers nervous. A Swedish study added some nuance: the most confident students use AI the most, while those who are already unsure of their skills tend to hold back (Klarin, 2024). That raises real equity questions about who benefits.

    And not all students are fans. Some research highlights concerns about privacy, over-reliance, and losing the chance to build their problem-solving muscles. It is a reminder that skepticism is not resistance for resistance’s sake but a way of protecting what matters to them.

    On the institutional side, surveys suggest that many colleges are preparing to use AI in admissions, whether for transcript analysis or essay review. Recent coverage underscores that admissions offices are increasingly turning to AI tools to streamline application review, identify best-fit students, and even personalize outreach (Barnard, 2024).

    If all of this feels like a promise and a warning label, it is because it is. AI can democratize access to information, but it can also amplify bias. Students know that. And they want us to take their concerns seriously.

    Empower your leadership and staff to harness the power of AI.

    Don’t get left behind in the AI transformation for higher education. See how RNL’s AI Education Services can help your leaders and staff unlock the full potential of AI on your campus.

    Learn more

    Meet the pioneers, aspirers, resistors, and fence-sitters

    As revealed by our research in The AI Divide in College Planning (RNL & Teen Voice, 2025), not all students approach artificial intelligence the same way. Four personas stand out:

    • Pioneers are already deep in the mix, using artificial intelligence for research, essays, and scholarship searches. Many say it has opened doors to colleges they might not have even considered otherwise.
    • Aspirers are curious but want proof. They like the idea of scholarship searches or cost planning, but need easy, free tools and success stories before they commit.
    • Resistors lean on counselors and family. They are worried about accuracy and privacy, but might come around if an advisor they trust introduces the tool.
    • Fence-Sitters are classic “wait and see” students. A third might trust artificial intelligence to guide them through the application process, but the majority are still unsure.

    The takeaway? There is no single “artificial intelligence student.” Institutions need flexible strategies that welcome the eager, reassure the cautious, and do not alienate the skeptics.

    What happens after the chatbot says, “Hello”?

    One of the most striking findings from the E-Expectations study is that students rarely stop at the chatbot (RNL, Halda, & Modern Campus, 2025). After engaging with an AI assistant, they move. Twenty-nine percent email admissions directly, 28% click deeper into the website, 27% fill out an inquiry form, and almost a quarter apply.

    In other words, that little chat bubble is not just answering frequently asked questions. It is a launchpad.

    Personalization meets privacy

    Here is another twist. While most students (61%) want personalization, they want it on their terms. Nearly half prefer to filter and customize their content, while only 16% want the college to decide automatically (RNL, Halda, & Modern Campus, 2025).

    That is the sweet spot for artificial intelligence: not deciding for students but giving them the levers to design their journey.

    What this means for your enrollment teams

    • AI is not just a front-end feature but a funnel mover. Treat chatbot engagement like an inquiry. Have a system ready to respond quickly when a student shifts from chatting to acting.
    • Remember the personas. Pioneers want depth, Aspirers want reassurance, Resistors want trusted guides, and Fence-Sitters want time. Design communications that honor those differences instead of pushing one script for all.
    • Personalization is not about guessing. It is about giving students control. Build tools that let them filter, sort, and search; resist the temptation to over-curate their journey.
    • AI is a natural fit for cost and scholarship exploration. If you want to hook Aspirers, put AI into your net price calculators or scholarship finders.
    • Virtual tours and event registration bots should not feel like gimmicks. When done well, they can bridge the gap between interest and visit, giving students confidence before setting foot on campus.

    Download the complete reports from RNL and our partners to see what students are telling us directly:

    The AI Divide in College Planning


  • ARTIFICIAL INTELLIGENCE AND THE FUTURE OF HBCUS: A CALL FOR INVESTMENT, INNOVATION, AND INCLUSION

    ARTIFICIAL INTELLIGENCE AND THE FUTURE OF HBCUS: A CALL FOR INVESTMENT, INNOVATION, AND INCLUSION

    Dr. Emmanuel Lalande

    Historically Black Colleges and Universities (HBCUs) have always stood on the frontlines of educational equity, carving pathways to excellence for generations of Black students against overwhelming odds. Today, as higher education faces a shift driven by technology, declining enrollment, and resource disparities, a new opportunity emerges: the power of Artificial Intelligence (AI) to reshape, reimagine, and reinforce the mission of HBCUs.

    From admissions automation and predictive analytics to personalized learning and AI-powered tutoring, artificial intelligence is no longer theoretical; it is operational. At large institutions, AI-driven chatbots and enrollment algorithms have already improved student engagement and reduced summer melt. Meanwhile, HBCUs, particularly smaller and underfunded ones, risk being left behind.

    The imperative for HBCUs to act now is not about chasing trends; it is about survival, relevance, and reclaiming leadership in shaping the future of Black education.

    AI as a Force Aligned with the HBCU Mission

    Artificial intelligence, when developed and implemented with intention and ethics, can be one of the most powerful tools for educational justice. HBCUs already do more with less. They enroll 10% of Black students in higher education and produce nearly 20% of all Black graduates. These institutions are responsible for over 25% of Black graduates in STEM fields, and they produce a significant share of Black teachers, judges, engineers, and public servants.

    The power of AI can amplify this legacy.

    • Predictive analytics can flag at-risk students based on attendance, financial aid gaps, and academic performance, helping retention teams intervene before a student drops out.
    • AI chatbots can provide round-the-clock support to students navigating complex enrollment, financial aid, or housing questions.
    • AI tutors and adaptive platforms can meet students where they are, especially for those in developmental math, science, or writing courses.
    • Smart scheduling and resource optimization tools can help HBCUs streamline operations, offering courses more efficiently and improving completion rates.

    For small HBCUs with limited staff, outdated technology, and tuition-driven models, AI can serve as a strategic equalizer. But accessing these tools requires intentional partnerships, resources, and cultural buy-in.

    The Philanthropic Moment: A Unique Opportunity

    The recent announcement from the Bill & Melinda Gates Foundation that it plans to spend its entire $200 billion endowment by 2045 presents a monumental opportunity. The foundation has declared a sharpened focus on “unlocking opportunity” through education, including major investments in AI-powered innovations in K-12 and higher education, particularly in mathematics and student learning platforms.

    One such investment is in Magma Math, an AI-driven platform that helps teachers deliver personalized math instruction. The foundation is also actively funding research and development around how AI can close opportunity gaps in postsecondary education and increase economic mobility. Their call for “AI for Equity” aligns with the HBCU mission like no other.

    Now is the time for HBCUs to boldly approach philanthropic organizations like the Gates Foundation as strategic partners capable of leading equity-driven AI implementation.

    Other foundations should follow suit. Lumina Foundation, Carnegie Corporation, Kresge Foundation, and Strada Education Network have all expressed interest in digital learning and postsecondary success. A targeted, collaborative initiative to equip HBCUs with AI infrastructure, training, and research capacity could be transformative.

    Tech Industry Engagement: From Tokenism to True Partnership

    The tech industry has begun investing in HBCUs, but more is needed:

    • OpenAI recently partnered with North Carolina Central University (NCCU) to support AI literacy through its Institute for Artificial Intelligence and Emerging Research. The vision includes scaling support to other HBCUs.
    • Intel has committed $750,000 to Morgan State University to advance research in AI, data science, and cybersecurity.
    • Amazon launched the Educator Enablement Program, supporting faculty at HBCUs in learning and teaching AI-related curricula.
    • Apple and Google have supported HBCU initiatives around coding, machine learning, and entrepreneurship, though these efforts are often episodic or branding-focused. What’s needed now is sustained, institutional investment.
    • Huston-Tillotson University hosted an inaugural HBCU AI Conference and Training Summit back in April, bringing together AI researchers, students, educators, and industry leaders from across the country. This gathering focused on building inclusive pathways in artificial intelligence, offering interactive workshops, recruiter engagement, and a platform for collaboration among HBCUs, community colleges, and major tech firms.

    We call on Microsoft, Salesforce, Nvidia, Coursera, Anthropic, and other major EdTech firms to go beyond surface partnerships. HBCUs are fertile ground for workforce development, AI research, and inclusive tech talent pipelines. Tech companies should invest in labs, curriculum development, student fellowships, and cloud infrastructure, especially at HBCUs without R1 status or multi-million-dollar endowments.

    A Framework for Action Across HBCUs

    To operationalize AI within the HBCU context, a few strategic steps can guide implementation:

    1. AI Capacity Building Across Faculty and Staff

    Workshops, certification programs, and summer institutes can train faculty to integrate AI into pedagogy, advising, and operations. Staff training can ensure AI tools support, not replace, relational student support.

    2. Student Engagement Through Research and Internships

    HBCUs can establish AI learning hubs where students gain real-world experience developing or auditing algorithms, especially those designed for educational equity.

    3. AI Governance

    Every HBCU adopting AI must also build frameworks for data privacy, transparency, and bias prevention. As institutions historically rooted in justice, HBCUs can lead the national conversation on ethical AI.

    4. Regional and Consortial Collaboration

    HBCUs can pool resources to co-purchase AI tools, share grant writers, and build regional research centers. Joint proposals to federal agencies and tech firms will yield greater impact.

    5. AI in Strategic Planning and Accreditation

    Institutions should embed AI as a theme in Quality Enhancement Plans (QEPs), Title III initiatives, and enrollment management strategies. AI should not be a novelty; it should be a core driver of sustainability and innovation.

    Reclaiming the Future

    HBCUs were built to meet an unmet need in American education. They responded to exclusion with excellence. They turned marginalization into momentum. Today, they can do it again, this time with algorithms, neural networks, and digital dashboards.

    But this moment calls for bold leadership. We must go beyond curiosity and into strategy. We must demand resources, form coalitions, and prepare our institutions not just to use AI, but to shape it.

    Let HBCUs define what culturally competent, mission-driven artificial intelligence looks like in real life, not in theory.

    And to the Gates Foundation, Intel, OpenAI, Amazon, and all who believe in the transformative power of education: invest in HBCUs. Not as charity, but as the smartest, most impactful decision you can make for the future of American innovation.

    Because when HBCUs lead, communities rise. And with AI in our hands, the next level of excellence is well within reach.

    Dr. Emmanuel Lalande currently serves as Vice President for Enrollment and Student Success and Special Assistant to the President at Voorhees University.



  • Artificial Intelligence and Critical Thinking in Higher Education: Fostering a Transformative Learning Experience for Students – Faculty Focus

    Artificial Intelligence and Critical Thinking in Higher Education: Fostering a Transformative Learning Experience for Students – Faculty Focus
