Tag: ethics

  • Can VR Teach Students Ethics?


    Virtual reality courses have become more common, thanks to the development of new classroom applications for the software and the increased affordability of VR and augmented reality technology for institutions. A 2025 survey of chief technology officers by Inside Higher Ed and Hanover Research found that 14 percent of respondents said their institution has made meaningful investments in virtual reality and immersive learning.

    Past research shows that VR activities benefit student learning by making the classroom more engaging and encouraging creative and entrepreneurial thinking.

    A group of faculty at Pepperdine University in California adapted virtual reality content to teach undergraduates about ethical systems in a practical and applied setting.

    Their research study, published in the Journal of Business and Technical Communication, showed that students who used VR in a case study had a heightened emotional response to the material, which clouded their ability to provide a measured analysis. By comparison, students who watched a standard video about the same case not only expressed empathy for the subjects but also maintained a clear view of their situation.

    How it works: The research study evaluated student learning over the course of two semesters in 2023. Students were presented with three variations of a case study related to the Malibu Community Labor Exchange (MCLE), a nonprofit organization that helps day laborers and individuals without housing secure work and provides a variety of opportunities for individuals in the Los Angeles region. Students read a news article and then watched either a VR video or a standard video about the lives of workers at the MCLE; some watched both the VR and the standard video.

    Course content focused primarily on the workers, their personal lives, their role in addressing wildfires in Malibu and the risks they face in fighting fires.

    After watching the materials, students had to connect the ethical questions presented about MCLE’s mission and workers’ conditions with a previously taught lesson about ethicists and their ethical systems, as well as write a recommendation for the organization.

    Faculty reviewed students’ responses to identify whether they exhibited appropriate reasoning about ethical systems and whether their recommendations reflected their ability to interpret the content.

    The takeaways: In their reflections, students underscored the way the videos exposed them to someone else’s circumstances and realities, saying the content felt very authentic. But those who used VR were more likely than those who watched only the standard video to say the format was distracting.

    Students who watched the standard video said it helped them expand their understanding of the organization, its members and the context of the work in an emotional and logical way. They wrote that they felt empathetic and had a richer sense of the work being done.

    “The video was very raw. It didn’t glamorize or have fantastic editing. It showed us exactly what it is like for these workers,” one student wrote.

    For some students, the VR video was more powerful because it was more “shocking and realistic than seeing the video in normal format,” one course participant wrote. Instructors noted students were almost too personally affected by the first-person vantage point to talk about the organization and the ethical systems from an objective or factual perspective.

    Students who watched only the VR were also more likely to conflate the experience with reality, calling it a “true view” instead of a representation or interpretation of events; students who watched a standard video as well as the VR version had a more balanced perspective.

    Based on their findings, the researchers suggest that using both standard and VR videos, paired with assignments that require students to reflect, analyze and recommend solutions, can increase students’ “practical wisdom,” which they define as balancing cognition and emotion for ethical action.

    “Rather than assuming that students know how to critically evaluate visual messages and their emotions, we need to intentionally teach students how to develop visual literacy and practical wisdom, especially by using VR video,” researchers wrote in the article.


  • Gaza, higher education, and the ethics of institutional neutrality


    When I published my academic article Witnessing Silence: The Palestinian Genocide, Institutional Complicity, and the Politics of Knowledge in June this year, I shared it on LinkedIn expecting it might quietly circulate among those already engaging with Palestine and decolonial education.

    Instead, what followed was an unexpectedly wide response – emails, messages, and private conversations from academics and professional services staff across the sector, expressing that the piece gave language to something they had been living with but had been unable to name.

    Where the original piece offered a theoretically grounded, autoethnographic account of institutional complicity and epistemic violence in UK higher education, this is a direct reflection on what that silence means in practice: for those of us who work within universities, support students, write policy, and try to teach with integrity in times of crisis.

    This is not a neutral topic. Nor, I believe, should it be. But it is one that demands clarity, care, and honesty about what our sector chooses to say – or not say – when faced with the mass killing of civilians, including thousands of children. It also demands that we reckon with how our silences function, who they serve, and who they leave behind.

    What is the silence we’re talking about?

    Since October 2023, higher education institutions in the UK have issued few, if any, direct statements on the situation in Gaza. Where communications have been made, they have been strikingly general: references to “ongoing events in the Middle East,” or “the situation in Israel and Gaza.” In many cases, even the word “Palestine” is omitted altogether.

    This is not simply a matter of tone. Language signals recognition, and its absence is felt. In the same period, UK universities have published clear and immediate statements on the war in Ukraine, the Christchurch mosque attacks, and the murder of George Floyd. These responses were swift and specific, naming both the nature of the violence and the communities affected.

    By contrast, when it comes to Gaza, where, as of April 2025, the Palestinian Central Bureau of Statistics reported 17,954 children killed, 39,384 children orphaned, and 7,065 children injured, many with life-changing disabilities, most institutions have chosen vagueness or silence.

    The use of the term “genocide” is not a personal flourish. It has been raised by international human rights organisations such as Amnesty International, by UN experts, and by legal scholars. It is also under formal consideration at the International Court of Justice, which in January 2024 issued provisional measures recognising a plausible risk of genocide in Gaza. To avoid naming this, or to replace it with neutral euphemisms, is not caution. It is abandonment.

    I do not assume that this silence stems from indifference. In many cases, it reflects complex pressures: reputational risk, external scrutiny, internal disagreement, legal advice. But intention does not cancel out impact. And the cumulative impact of this silence is a deepening sense that Palestinian suffering is institutionally unrecognisable: too controversial to name, too politically fraught to mourn, too inconvenient to address.

    How silence affects minoritised staff and students

    The consequences of silence are not theoretical; they are lived. For many Muslim, Arab, and pro-Palestinian staff and students, the ongoing refusal to acknowledge what is happening in Gaza has created a climate of anxiety, exhaustion, and quiet despair. What I describe in my research as “moral injury” – the psychological toll of witnessing profound injustice while being expected to remain silent – has become, for many, a defining feature of daily academic life.

    I’ve heard this from colleagues across roles and disciplines: early career researchers who self-censor in lectures and grant proposals, students too afraid to name Palestine in their dissertations, and professional services staff torn between personal conviction and institutional messaging. Some have received formal warnings; others speak only in private, fearful of reputational damage or being labelled as disruptive. The burden of caution is not equally distributed.

    These are not isolated feelings. For many colleagues and friends, this silence also carries an unbearable weight: the knowledge that our lives are treated as less valuable and more easily dispensable. Conflicts in Iraq, Afghanistan, Yemen, Gaza, and Syria have taken millions of lives, yet they rarely provoke the same sustained outrage or mobilisation that far smaller losses elsewhere receive – a phenomenon documented by Kearns et al. (2019). To live with that awareness is haunting. And when universities, too, remain vague or silent, the omission feels less like caution and more like confirmation that even here, in institutions that speak of justice and care, some lives – our lives – and some losses are considered harder to name.

    I want to be clear: I am not accusing individuals of deliberate harm. But when institutions fail to name atrocities, when they issue statements that sidestep historical context, and when they offer wellbeing support without acknowledging what that support is for, they deepen a sense of abandonment that many minoritised staff already carry. It becomes harder to feel safe, heard, or morally aligned with the institutions we work in.

    Silence becomes censorship

    Silence in our universities is not just absence. It often comes with a cost for anyone who dares to speak. What looks like neutral restraint can be revealed, in practice, as institutional censorship.

    Since October 2023, disciplinary investigations have spread across UK campuses. A joint investigation found that at least 28 universities launched formal proceedings against students and staff over pro-Palestinian activism, involving more than a hundred people. Other reporting suggests that as many as 250 to 300 employees across the sector have been investigated or threatened with dismissal simply for expressing pro-Palestinian views.

    A HEPI report documents how encampments across UK universities, including many Russell Group members, were met with heavy institutional responses. Emails obtained by journalists also show that university security teams adopted “US-style” surveillance tactics during protests, often under pressure from their own professional networks.

    These are not isolated anecdotes. The pattern is clear. Silence is not neutral. It is often enforced. When colleagues or students raise their voices, they risk being investigated, disciplined, or even expelled. That cost is real and immediate, and it must be named.

    Ethical contradictions

    What makes the silence so disorienting is not just the absence of language; it’s the dissonance between that silence and the values our sector claims to uphold. We talk about decolonisation, inclusive pedagogy, and trauma-informed practice. We encourage students to “critically engage with systems of power,” and we celebrate academic freedom as foundational to our purpose. Yet when faced with a case of genocide – documented by international bodies, witnessed daily in the media, and devastating in its scale – many universities fall silent.

    This is not simply a question of public statements. It is a deeper ethical contradiction that permeates the day-to-day environment of higher education institutions. When staff are encouraged to design anti-racist curricula but discouraged from naming colonial violence in Palestine, the message is clear: some histories are welcome, others are not. When mental health services are promoted but cannot address the context of collective grief, the care offered feels hollow.

    None of this is new. As my article argues, the logic of institutional silence is historically patterned. Higher education has long been selective in its expressions of solidarity – often willing to speak when the political stakes are low, but cautious when they risk reputational or legal exposure. What we are seeing now is the cumulative effect of that selectivity: a moral framework that is uneven, inconsistent, and, for many, increasingly untenable.

    What can institutions do?

    If silence has consequences, then breaking it must be an intentional act. This doesn’t mean rushing to issue statements for every global tragedy. But it does require universities to reflect on the ethical frameworks guiding their public responses, especially when those responses (or omissions) disproportionately impact already marginalised groups.

    First, naming matters. Even if a university does not take a political position, it can acknowledge the reality of civilian death and collective grief. It can refer explicitly to Palestinians as a people, not just as part of a geography. It can recognise that some communities in our institutions are disproportionately affected by what is unfolding, and that they are looking to us not just for pastoral care, but for moral clarity.

    Second, policy protections must catch up with practice. Staff who speak out within the bounds of academic freedom should not face disproportionate scrutiny or reputational risk. Nor should students be penalised for engaging critically with the politics of occupation, war, or settler colonialism. Institutional support must be consistent, not selectively applied based on the political palatability of the cause.

    Finally, universities must reckon with the unequal distribution of emotional labour. Many of us who are called upon to “lead conversations” on inclusion or belonging are also the ones absorbing the silence around Palestine. That dissonance is unsustainable – and addressing it requires more than a line in a strategy document. It requires courage, consistency, and care.

    There is no perfect statement, no risk-free position. But neither is neutrality ever neutral. If we expect students and staff to bring their whole selves into our classrooms, then we must be prepared to name the losses and injustices that shape those selves—and to respond with more than silence.

    Silence is not safety

    The idea that universities must remain neutral in the face of political crisis may feel institutionally safe, but it is ethically brittle. Neutrality, when applied unevenly, is not neutrality at all. It becomes complicity, dressed up as caution.

    What makes this moment so painful for many in the sector is not just the lack of solidarity, but the sense that even the language of care has become selective. If we are truly committed to fostering inclusive, trauma-informed institutions, then we cannot exclude entire communities from the scope of our empathy. We cannot preach justice in our classrooms while avoiding it in our corridors.

    In the weeks following the article’s publication, I received messages from colleagues across the country – many from minoritised backgrounds – who described feeling both moved and afraid: seen, perhaps for the first time, but still unsure whether it was safe to speak.

    There is still time for institutions to act, not by offering perfect words, but by showing they are listening. By naming what is happening. By protecting those who speak. And by recognising that silence is not safety. For many of us, it is precisely the thing we are trying to survive.


  • Fahmi Quadir, Adtalem, and the High-Stakes Ethics of Short-Selling


    In the realm of Wall Street, few figures challenge the system from within quite like Fahmi Quadir. Known in financial circles as “The Assassin,” Quadir has made a name—and a mission—for herself by exposing fraud and predatory behavior in publicly traded companies. But unlike most short-sellers chasing profits on volatility, Quadir brings a moral clarity to her work, emphasizing that short-selling can be an instrument of justice when practiced with rigor, purpose, and transparency. Her recent campaign against Adtalem Global Education, a for-profit college conglomerate, underscores the power—and danger—of this approach.

    Fahmi Quadir is the founder and Chief Investment Officer of Safkhet Capital, a short-only hedge fund she launched in 2017 at the age of 26. Safkhet is not your typical Wall Street operation. Built on deep forensic research and a mission to hold corporations accountable, the firm takes bold, high-conviction positions against companies it believes are engaged in deception, exploitation, or fraud.

    Quadir’s career trajectory is as unlikely as it is impressive. She originally planned to pursue a PhD in mathematics, but a series of encounters at New York’s National Museum of Mathematics—funded by quantitative finance giants like Renaissance Technologies—introduced her to a world where market dynamics and moral imperatives could collide. She quickly realized that capital markets held not just monetary power, but the potential to drive social change. With no formal finance background, she was identified by hedge fund insiders as a natural fit for short-selling. She dove in, eventually appearing in the 2018 Netflix documentary Dirty Money, which chronicled her pivotal role in the takedown of Valeant Pharmaceuticals.

    In February 2024, Quadir spoke at Stanford’s Graduate School of Business during an event hosted by the Corporations and Society Initiative (CASI). In a conversation moderated by JD/MBA student Thomas Newcomb, she unpacked her approach to short-selling—one defined by intellectual rigor, emotional resilience, and moral conviction.

    “Short selling means you borrow shares from your bank, sell them, and hope the price drops so you can buy them back at a lower price and pocket the difference,” Quadir explained. “But prices can go up infinitely. The potential losses on a short are also infinite.”

    That risk, she emphasized, is not theoretical—it’s lived. “You need to withstand a lot of pain,” she said. “Short-selling isn’t for everyone. It’s about doing uncomfortable work, challenging popular narratives, and being willing to look like a fool—until you’re proven right.”
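
    To make that asymmetry concrete, here is a minimal illustrative sketch in Python, using hypothetical prices and share counts rather than figures from any Safkhet position: the most a short seller can gain is the full sale price, if the stock goes to zero, while the potential loss is unbounded because the buy-back price can rise without limit.

    # Illustrative only: hypothetical numbers, not any actual trade.
    def short_pnl(entry_price: float, exit_price: float, shares: int) -> float:
        """P&L of a short sale: sell borrowed shares at entry, buy them back at exit."""
        return (entry_price - exit_price) * shares

    # If the stock falls, the short seller keeps the difference...
    print(short_pnl(entry_price=100.0, exit_price=60.0, shares=1_000))   #  40,000 gain

    # ...but if it rises, the loss keeps growing, because there is no ceiling
    # on how high the buy-back price can go.
    print(short_pnl(entry_price=100.0, exit_price=250.0, shares=1_000))  # -150,000 loss

    Real positions also carry borrowing costs and margin requirements, which is part of why Quadir describes the work as painful.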

    And yet, in Quadir’s view, this discomfort is necessary. “Shorting is important for the functioning of our markets. It provides liquidity and price discovery. But in a tiny corner of the market, there are those of us who are using short selling as a way to expose injustice and correct bad capital market behavior.”

    Quadir focuses on companies she believes are harming customers or committing fraud, rather than chasing momentum or hype. “We avoid situations of mass delusion,” she noted, “because mass delusion can stay delusional forever.”

    Her most famous case remains the takedown of Wirecard AG, a German electronic payments firm that collapsed in 2020 amid massive accounting fraud. Safkhet’s 25% short position on Wirecard was the culmination of years of research and collaboration with whistleblowers and law enforcement. It was a textbook example of what Quadir calls “story-driven” short-selling—piecing together a company’s past to uncover the rot at its core.

    She recounted a chilling origin story involving Wirecard’s founders, Markus Braun and Jan Marsalek—who is now a confirmed Russian agent—and an Austrian billionaire with ties to adult entertainment who allegedly used intimidation tactics to force a takeover. “When that’s part of your origin story,” she said, “whatever comes after is going to be epic.”

    But Quadir’s sights have recently turned toward a different kind of fraud—one operating under the guise of education. In January 2024, Safkhet Capital released a detailed short report on Adtalem Global Education, labeling it a “toxic byproduct of an imperfect higher education system.” The report highlighted Adtalem’s dependence on federal student aid—more than 70% of its revenue—and exposed dismal outcomes at its institutions, including Walden and Chamberlain universities, both of which serve a disproportionately high number of Black and working-class women.

    The report also noted a financial responsibility score of 0.2 out of 3.0—far below the threshold used by the U.S. Department of Education to flag institutions at risk of mismanaging federal funds. In Quadir’s view, Adtalem wasn’t just financially shaky—it was “completely uninvestable.”

    The market agreed. Following Safkhet’s report, Adtalem’s stock dropped 19% in a single day, with further losses in the days that followed. The company attempted to halt trading and accused Quadir of “short and distort” tactics—a claim that fell flat. “It was very satisfying after that hold was released to see the market validate our thesis,” she said. “Their strategy backfired.”

    At Stanford, Quadir reflected on why she made the Adtalem report public: “There was an informational vacuum around this company. The shareholder base was largely passive. No one was doing the kind of research or analysis we were doing.”

    But Quadir is quick to point out that short-sellers alone cannot fix a broken system. “Nothing is going to change if there isn’t enforcement,” she said. “We need to have some high-profile cases where people go to jail. These characters continue to get away with it or settle, and what happens? Their stocks go up.”

    She remains hopeful, however, that markets—if given the right incentives—can self-correct. “I think the greatest believers in market efficiency have to be short sellers. I believe capital markets can correct bad behavior, and that benefits all of us.”

    Short-selling, when practiced ethically, is not about sabotage. It is about storytelling, investigation, and risk—a lot of risk. Quadir’s approach requires patience, emotional stamina, and intellectual courage. It is not for the faint of heart. But in a world where regulators are often captured and media attention can be fleeting, short-sellers like Quadir play an essential, if controversial, role.

    Her work against Adtalem is not just a case study in financial activism. It is a call to reexamine how markets reward failure, how federal funds prop up predatory institutions, and how silence—especially in higher education—can be bought. As Quadir puts it, “We have the power to affect change. We just have to be willing to take the hits.”

    Sources

    This article draws significantly from the February 2024 Stanford Graduate School of Business event, A Conversation with Fahmi Quadir, Wall Street’s Fearless Short Seller, hosted by the Corporations and Society Initiative (CASI). The event transcript and summary are available at https://casi.stanford.edu/news/conversation-fahmi-quadir-wall-streets-fearless-short-seller.

    Additional information was compiled from the Safkhet Capital short report on Adtalem Global Education (January 2024), publicly available statements by Adtalem Global Education, coverage of Adtalem’s stock movement by MarketWatch and Bloomberg, investigations into Wirecard by the Financial Times, and Quadir’s portrayal in the 2018 Netflix documentary Dirty Money.

    Legal responses to Safkhet’s report were also noted from Pomerantz LLP and Block & Leviton, which opened shareholder investigations into Adtalem in January 2024. Data from the U.S. Department of Education regarding Title IV funding and financial responsibility scores was used to contextualize Adtalem’s regulatory risk.

    For further background on short-selling’s role in price discovery and enforcement gaps in higher education, see related coverage in The Wall Street Journal, The Chronicle of Higher Education, and Inside Higher Ed.


  • Misinformation Course Teaches Ethics for Engineering Students


    Nearly three in four college students say they have somewhat high or very high media literacy skills (72 percent), according to a 2025 Student Voice survey by Inside Higher Ed and Generation Lab. Students are less likely to consider their peers media literate; three in five respondents said they have at least somewhat high levels of concern about the spread of misinformation among their classmates.

    When asked how colleges and universities could help improve students’ media literacy skills, a majority of Student Voice respondents indicated they want digital resources on increasing media literacy or media literacy–related content and training embedded into the curriculum.

    A recently developed course at the University of Southern California’s Viterbi School of Engineering teaches students information literacy principles to help them develop tools to mitigate the harms of online misinformation.

    The background: USC offers an interdisciplinary teaching grant that incentivizes cross-campus collaboration and innovative teaching practices. To be eligible for the grant, applications must include at least one full-time faculty member and faculty from more than one school or division. Each grantee receives up to $20,000 to compensate applicants for their time and work.

    In 2023, Helen Choi, a faculty member at USC Viterbi, won the interdisciplinary teaching grant in collaboration with Cari Kaurloto, head of the science and engineering library at USC Libraries, to create a media literacy course specifically for engineering students.

    “By focusing on engineering students, we were able to integrate a component of the course that addresses a social issue from an engineering perspective in terms of technical know-how and the professional ethics,” Choi said, which helps students see the relevance of course content to their personal and professional lives.

    What’s the need: Students tend to receive most of their news and information on online platforms; Student Voice data found a majority of learners rely on social media for news content (72 percent), and about one in four engage with news apps or news aggregator websites (27 percent).

    Choi and Kaurloto’s course, titled Information Literacy: Navigating Digital Misinformation, builds academic research skills, teaches information literacy principles and breaks down the social issue of online misinformation.

    “Students examine ways they can navigate online information using their research skills, and then extend that knowledge by considering how they, as prospective engineers, can build technologies that mitigate the harms of online misinformation while enhancing the information literacy of users,” Choi explained.

    USC faculty aren’t the only ones noticing a need for more education around engagement with digital information; a growing number of colleges and universities are making students complete a digital literacy course as a graduation requirement.

    In the classroom: Choi and Kaurloto co-teach the course, which was first offered this spring to a class of 25 students.

    The students learned to develop effective search strategies and critically examine sources, as well as to understand ethical engineering principles and apply them in designing social media platforms, Kaurloto said. Choi and Kaurloto employed active learning pedagogies to give students hands-on, real-life applications, including writing, speaking and collaborative coursework.

    One assignment the students completed was conducting library research to develop a thesis paragraph on an information literacy topic with a short, annotated bibliography. Students also presented their research to their peers, Kaurloto said.

    Learners also engaged in a group digital literacy project, designing a public service campaign that included helpful, research-backed ways to identify misinformation, Choi said. “They then had to launch that campaign on a social media platform, measure its impact, and present on their findings.” Projects ranged from infographics on Reddit to short-form videos on spotting AI-generated misinformation and images on TikTok and Instagram.

    The impact: In their feedback, students said they found the course helpful, with many upper-level learners saying they wished they had taken it earlier in their academic careers because of the library research skills they gained. They also indicated the course content was applicable in daily life, such as when supporting family members “who students say have fallen down a few internet rabbit holes or who tend to believe everything they see online,” Choi said.

    Other librarians have taken note of the course as a model of how to teach information literacy, Choi said.

    “We’ve found that linking information literacy with specific disciplines like engineering can be helpful both in terms of building curricula that resonate with students but also for building professional partnerships among faculty,” Choi said. “Many faculty don’t know that university librarians are also experts in information literacy—but they should!”

    This fall, Choi and Kaurloto plan to offer two sections of the course with a cap of 24 students per section. Choi hopes to see more first- and second-year engineering students in the course so they can apply these principles to their program.



  • We Already Have an Ethics Framework for AI (opinion)


    For the third time in my career as an academic librarian, we are facing a digital revolution that is radically and rapidly transforming our information ecosystem. The first was when the internet became broadly available by virtue of browsers. The second was the emergence of Web 2.0 with mobile and social media. The third—and current—results from the increasing ubiquity of AI, especially generative AI.

    Once again, I am hearing a combination of fear-based thinking alongside a rhetoric of inevitability and scoldings directed at those critics who are portrayed as “resistant to change” by AI proponents. I wish I were hearing more voices advocating for the benefits of specific uses of AI alongside clearheaded acknowledgment of risks of AI in specific circumstances and an emphasis on risk mitigation. Academics should approach AI as a tool for specific interventions and then assess the ethics of those interventions.

    Caution is warranted. The burden of building trust should be on the AI developers and corporations. While Web 2.0 delivered on its promise of a more interactive, collaborative experience on the web that centered user-generated content, the fulfillment of that promise was not without societal costs.

    In retrospect, Web 2.0 arguably fails to meet the basic standard of beneficence. It is implicated in the global rise of authoritarianism, in the undermining of truth as a value, in promoting both polarization and extremism, in degrading the quality of our attention and thinking, in a growing and serious mental health crisis, and in the spread of an epidemic of loneliness. The information technology sector has earned our deep skepticism. We should do everything in our power to learn from the mistakes of our past and do what we can to prevent similar outcomes in the future.

    We need to develop an ethical framework for assessing uses of new information technology—and specifically AI—that can guide individuals and institutions as they consider employing, promoting and licensing these tools for various functions. Two main factors about AI complicate ethical analysis. The first is that an interaction with AI frequently continues past the initial user-AI transaction; information from that transaction can become part of the system’s training set. The second is that there is often a significant lack of transparency about what the AI model is doing under the surface, making it difficult to assess. We should demand as much transparency as possible from tool providers.

    Academia already has an agreed-upon set of ethical principles and processes for assessing potential interventions. The principles in “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research” govern our approach to research with humans and can fruitfully be applied if we think of potential uses of AI as interventions. These principles not only benefit academia in making assessments about using AI but also provide a framework for technology developers thinking through their design requirements.

    The Belmont Report articulates three primary ethical principles:

    1. Respect for persons
    2. Beneficence
    3. Justice

    “Respect for persons,” as it’s been translated into U.S. code and practiced by IRBs, has several facets, including autonomy, informed consent and privacy. Autonomy means that individuals should have the power to control their engagement and should not be coerced to engage. Informed consent requires that people should have clear information so that they understand what they are consenting to. Privacy means a person should have control and choice about how their personal information is collected, stored, used and shared.

    Following are some questions we might ask to assess whether a particular AI intervention honors autonomy (an illustrative sketch of how such a checklist might be recorded follows the list).

    • Is it obvious to users that they are interacting with AI? This becomes increasingly important as AI is integrated into other tools.
    • Is it obvious when something was generated by AI?
    • Can users control how their information is harvested by AI, or is the only option to not use the tool?
    • Can users access essential services without engaging with AI? If not, that may be coercive.
    • Can users control how information they produce is used by AI? This includes whether their content is used to train AI models.
    • Is there a risk of overreliance, especially if there are design elements that encourage psychological dependency? From an educational perspective, is using an AI tool for a particular purpose likely to prevent users from learning foundational skills so that they become dependent on the model?
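
    As an illustration only, the questions above could be recorded as a simple review checklist. The sketch below is a hypothetical Python rendering, with invented field names; it is not an instrument drawn from the Belmont Report, an IRB or any particular tool.

    # Hypothetical sketch: recording answers to the autonomy questions above
    # for a proposed AI intervention. Field names are invented for this example.
    from dataclasses import dataclass, fields

    @dataclass
    class AutonomyChecklist:
        ai_use_is_disclosed: bool                  # users know they are interacting with AI
        ai_output_is_labeled: bool                 # AI-generated content is identifiable as such
        data_collection_is_optional: bool          # users can limit harvesting without abandoning the tool
        essential_services_work_without_ai: bool   # no coercion to engage with AI
        users_control_training_reuse: bool         # user content is not used for training without consent
        dependency_risk_is_mitigated: bool         # design avoids fostering overreliance

    def unresolved_concerns(checklist: AutonomyChecklist) -> list:
        """Return the autonomy questions the proposed use does not satisfy."""
        return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]

    # Example: a tool that labels AI output but harvests user content for training by default.
    review = AutonomyChecklist(
        ai_use_is_disclosed=True,
        ai_output_is_labeled=True,
        data_collection_is_optional=False,
        essential_services_work_without_ai=True,
        users_control_training_reuse=False,
        dependency_risk_is_mitigated=True,
    )
    print(unresolved_concerns(review))
    # ['data_collection_is_optional', 'users_control_training_reuse']

    A real review would of course require free-text justification and human judgment rather than a boolean rubric; the point is only that the autonomy questions translate naturally into concrete, checkable criteria.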

    In relation to informed consent, is the information provided about what the model is doing both sufficient and in a form that a person who is neither a lawyer nor a technology developer can understand? It is imperative that users be given information about what data is going to be collected from which sources and what will happen to that data.

    Privacy infringement happens either when someone’s personal data is revealed or used in an unintended way or when information thought private is correctly inferred. When there is sufficient data and computing power, re-identification of research subjects is a danger. Given that “de-identification of data” is one of the most common strategies for risk mitigation in human subjects’ research, and there is an increasing emphasis on publishing data sets for the purposes of research reproducibility, this is an area of ethical concern that demands attention. Privacy emphasizes that individuals should have control over their private information, but how that private information is used should also be assessed in relation to the second major principle—beneficence.

    Beneficence is the general principle that says that the benefits should outweigh the risks of harm and that risks should be mitigated as much as possible. Beneficence should be assessed on multiple levels—both the individual and the systemic. The principle of beneficence demands that we pay particularly careful attention to those who are vulnerable because they lack full autonomy, such as minors.

    Even when making personal decisions, we need to think about potential systemic harms. For example, some vendors offer tools that allow researchers to share their personal information in order to generate highly personalized search results—increasing research efficiency. As the tool builds a picture of the researcher, it will presumably continue to refine results with the goal of not showing things that it does not believe are useful to the researcher. This may benefit the individual researcher. However, on a systemic level, if such practices become ubiquitous, will the boundaries between various discourses harden? Will researchers doing similar scholarship get shown an increasingly narrow view of the world, focused on research and outlooks that are similar to each other, while researchers in a different discourse are shown a separate view of the world? If so, would this disempower interdisciplinary or radically novel research or exacerbate disciplinary confirmation bias? Can such risks be mitigated? We need to develop a habit of thinking about potential impacts beyond the individual in order to create mitigations.

    There are many potential benefits to certain uses of AI. There are real possibilities it can rapidly advance medicine and science—see, for example, the stunning successes of the protein structure database AlphaFold. There are corresponding potentialities for swift advances in technology that can serve the common good, including in our fight against the climate crisis. The potential benefits are transformative, and a good ethical framework should encourage them. The principle of beneficence does not demand that there are no risks, but that we should identify uses where the benefits are significant and that we mitigate the risks, both individual and systemic. Risks can be minimized by improving the tools, such as work to prevent them from hallucinating, propagating toxic or misleading content, or delivering inappropriate advice.

    Questions of beneficence also require attention to environmental impacts of generative AI models. Because the models require vast amounts of computing power and, therefore, electricity, using them taxes our collective infrastructure and contributes to pollution. When analyzing a particular use through the ethical lens of beneficence, we should ask whether the proposed use provides enough likely benefit to justify the environmental harm. Use of AI for trivial purposes arguably fails the test for beneficence.

    The principle of justice demands that the people and populations who bear the risks should also receive the benefits. With AI, there are significant equity concerns. For example, generative AI may be trained on data that includes our biases, both current and historic. Models must be rigorously tested to see if they create prejudicial or misleading content. Similarly, AI tools should be closely interrogated to ensure that they do not work better for some groups than for others. Inequities impact the calculations of beneficence and, depending on the stakes of the use case, could make the use unethical.

    Another consideration in relation to the principle of justice and AI is the issue of fair compensation and attribution. It is important that AI does not undermine creative economies. Additionally, scholars are important content producers, and the academic coin of the realm is citations. Content creators have a right to expect that their work will be used with integrity and cited, and that they will be remunerated appropriately. As part of autonomy, content creators should also be able to control whether their material is used in a training set, and this should, at least going forward, be part of author negotiations. Similarly, the use of AI tools in research should be cited in the scholarly product; we need to develop standards about what is appropriate to include in methodology sections and citations, and possibly about when an AI model should be granted co-authorial status.

    The principles outlined above from the Belmont Report are, I believe, sufficiently flexible to allow for further and rapid developments in the field. Academia has a long history of using them as guidance to make ethical assessments. They give us a shared foundation from which we can ethically promote the use of AI to be of benefit to the world while simultaneously avoiding the types of harms that can poison the promise.

    Gwendolyn Reece is the director of research, teaching and learning at American University’s library and a former chair of American’s institutional review board.


  • Mind the policy gaps: regulating quality and ethics in digitalised and privatised crossborder education


    by Hans de Wit, Tessa DeLaquil, Ellen Hazelkorn and Hamish Coates

    Hans de Wit, Ellen Hazelkorn and Hamish Coates are editors and Tessa DeLaquil is associate editor of Policy Reviews in Higher Education. This blog is based on their editorial for issue 1, 2025.

    Transnational education (TNE), also referred to as crossborder education, is growing and morphing in all kinds of interesting ways which, while exciting for innovators, surface important policy, regulatory, quality and ethical concerns. It is therefore vital that these developments do not slip around or through policy gaps. This is especially true for online TNE, which is less visible than traditional campus-based higher education. Governments thus need to take the necessary actions to regulate and quality assure this expansion of education and training and to inform the sector and broader public. Correspondingly, there is a pressing need for more policy research into the massive transformations shaking global higher education.

    TNE and its online variants have been part of international higher education for a few decades. As Coates, Xie, and Hong (2020) foreshadowed, it has seen a rapid increase since the Covid-19 pandemic. In recent years, TNE operations have grown and diversified substantially. Wilkins and Huisman (2025) identify eleven types of TNE providers and propose the following definition to help handle this diversity: ‘Transnational education is a form of education that borrows or transfers elements of one country’s higher education, as well as that country’s culture and values, to another country.’

    International collaboration and networking have never been more important than at this time of geopolitical and geoeconomic disruption and a decline in multilateral mechanisms. But TNE’s expansion is matched by growing risks.

    International student mobility at risk

    International degree student mobility (when students pursue a bachelor’s, master’s and/or doctoral degree abroad) continues to be dominant, with over six million students studying abroad, double the number of 10 years ago. This number is anticipated to rise to over 8 million in the coming decade, but its growth is slowing, and its geographical path from the ‘global south’ to the ‘global north’ is shifting in a more diverse direction. Geopolitical and nationalist forces, as well as concerns about the adequacy of services for students (accommodation in particular) in high-income countries in the global north, are recent factors in the slowing growth of student mobility to Australia, North America and Europe, the leading destinations. The increased availability and quality of higher education, primarily at the undergraduate level, in middle-income countries in Asia, Latin America and parts of the Middle East also shapes the decrease in student mobility towards the global north.

    Several ‘sending countries’, for instance, China, South Korea and Turkey, are also becoming receiving countries. Countries like Kazakhstan, Uzbekistan, Ukraine (until the Russian invasion), Egypt and some of the Caribbean countries have also become study destinations for students from neighbouring low-income countries. These countries provide them with higher education and other forms of postsecondary education, sometimes in their public sector but mostly through private institutions and foreign providers.

    An alternative TNE model?

    Given the increased competition for international students and the resulting risks of falling numbers and financial insecurity for universities, TNE has emerged as an alternative source of revenue. According to Ilieva and Tsiligiris (2023), United Kingdom TNE enrolled more than 530,000 students in 2021. In the same year, its higher education institutions attracted approximately 680,000 international students. It is likely that TNE will surpass inward student mobility.

    As the United Kingdom case makes clear, TNE was originally primarily a ‘north-south’ phenomenon, in which universities from high-income and mostly Anglophone countries offered degree programmes through branch campuses, franchise operations and articulation programmes. Asia was the recipient region of most TNE arrangements, followed by the Middle East. As with student mobility, TNE is now more diverse globally, both in provision and in reception.

    The big trend in TNE is the shift to online education with limited in-person teaching. A 2024 report by Studyportals found over 15,000 English-taught online programmes globally. And although 92 per cent of these programmes are supplied by the four big Anglophone countries – the United Kingdom, United States, Canada and Australia – the number of programmes offered outside those four has doubled since 2019, from 623 to 1,212, primarily in Business and Management, Computer Sciences and IT.

    Private higher education institutions

    This global growth in online delivery of education goes hand in hand with the growth of various forms of private higher education. Over 50 per cent of higher education institutions are private, and they account for over one-third of global enrolment; many are commercial in nature. Private higher education has become the dominant growth area in higher education, as a result of the lack of funding for public higher education as well as traditional higher education’s sluggish response to diverse learner needs. Although most private higher education, in particular for-profit provision, takes place in the global south, it is also present in high-income countries, and there has recently been a rise in private higher education in Western Europe, for instance in Germany and France.

    TNE is often a commercial activity. It is increasingly a way for public universities to support international and other operations as public funding wanes. Most for-profit private higher education targets particular fields and education services and tends to be more online than in person. There is an array of ownership and institutional structures, involving a range of players.

    Establishing regulations and standards

    TNE, especially online TNE, is likely to become the major form of international delivery of education for local and international students, especially where growing demand cannot be met domestically. Growth is also increasingly motivated by an institution’s or country’s financial challenges or strategic priorities – situations that are likely to intensify. This shift could help overcome some of the inequities associated with mobility and address concerns associated with climate change, but online TNE is significantly more difficult to regulate.

    A concerning feature of the global TNE market is how easily learners and countries can become victims. Fraud is associated with an exponential rise in the number of fake colleges and accreditors, and with document falsification. This is partly due to differing conceptions of, and regulatory approaches to, the accreditation and quality assurance of TNE, and to the absence of trustworthy information. Indeed, the deficiency in comprehensive and accessible information is partly responsible for the ongoing interest in and use of global rankings as a proxy for quality.

    A need for clearer and stronger TNE and online quality assurance

    The growth trend in private for-profit higher education, TNE and online delivery is clear and, given its growing presence, requires more policy attention from national, regional and global agencies. As mentioned, public universities are increasingly active in TNE and online education, targeting countries and learners underserved in their home countries while looking for other sources of income as a result of decreasing public support and other factors.

    The Global Convention on the Recognition of Qualifications makes clear the importance of ensuring there are no differences in quality or standards between learners in the home and host countries, regardless of whether education programmes and learning activities are delivered in a formal, non-formal or informal setting; in face-to-face, virtual or hybrid formats; or in traditional or non-traditional modes. Accordingly, there are growing concerns about the insufficiency of regulation and of the multilateral framework covering international education, and especially online TNE.

    In response, there is a need for clearer and stronger accreditation, quality assurance and standards from national regulators, regional networks and organisations such as UNESCO, INQAAHE and the International Association of Universities (IAU) with regard to public and private involvement in TNE and online education. This is an emerging frontier for tertiary education, and much more research is required on this growing phenomenon.

    Professor Ellen Hazelkorn is Joint Managing Partner, BH Associates. She is Professor Emeritus, Technological University Dublin.

    Hamish Coates is professor of public policy, director of the Higher Education Futures Lab, and global tertiary education expert.

    Hans de Wit is Professor Emeritus and Distinguished Fellow of the Boston College Center for International Higher Education, and Senior Fellow of the International Association of Universities.

    Tessa DeLaquil is postdoctoral research fellow at the School of Education at University College Dublin.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education
