  • AI teacher tools display racial bias when generating student behavior plans, study finds

    This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.

    Asked to generate intervention plans for struggling students, AI teacher assistants recommended more-punitive measures for hypothetical students with Black-coded names and more supportive approaches for students the platforms perceived as white, a new study shows.

    These findings come from a report on the risks of bias in artificial intelligence tools published Wednesday by the nonprofit Common Sense Media. Researchers specifically sought to evaluate the quality of AI teacher assistants — such as MagicSchool, Khanmigo, Curipod, and Google Gemini for Education — that are designed to support classroom planning, lesson differentiation, and administrative tasks.

    Common Sense Media found that while these tools could help teachers save time and streamline routine paperwork, AI-generated content could also promote bias in lesson planning and classroom management recommendations.

    Robbie Torney, senior director of AI programs at Common Sense Media, said the problems identified in the study are serious enough that ed tech companies should consider removing tools for behavior intervention plans until they can improve them. That’s significant because writing intervention plans of various sorts is a relatively common way teachers use AI.

    After Chalkbeat asked about Common Sense Media’s findings, a Google spokesperson said Tuesday that Google Classroom has turned off the Gemini shortcut that prompts teachers to “Generate behavior intervention strategies” while the company does additional testing.

    However, both MagicSchool and Google, the two platforms where Common Sense Media identified racial bias in AI-generated behavior intervention plans, said they could not replicate Common Sense Media’s findings. They also said they take bias seriously and are working to improve their models.

    School districts across the country have been working to implement comprehensive AI policies to encourage informed use of these tools. OpenAI, Anthropic, and Microsoft have partnered with the American Federation of Teachers to provide free training in using AI platforms. The Trump Administration also has encouraged greater AI integration in the classroom. However, recent AI guidelines released by the U.S. Department of Education have not directly addressed concerns about bias within these systems.

    About a third of teachers report using AI at least weekly, according to a national survey conducted by the Walton Family Foundation in cooperation with Gallup. A separate survey conducted by the research organization RAND found teachers specifically report using these tools to help develop goals for Individualized Education Programs, or IEPs. They also say they use these tools to shape lessons or assessments around those goals, and to brainstorm ways to accommodate students with disabilities.

    Torney said Common Sense Media isn’t trying to discourage teachers from using AI in general. The goal of the report is to encourage more awareness of potential uses of AI teacher assistants that might have greater risks in the classroom.

    “We really just want people to go in eyes wide open and say, ‘Hey these are some of the things that they’re best at and these are some of the things you probably want to be a little bit more careful with,’” he said.

    Common Sense Media identified AI tools that can generate IEPs and behavior intervention plans as high risk due to their biased treatment of students. Using MagicSchool’s Behavior Intervention Suggestions tool and Google Gemini’s “Generate behavior intervention strategies” tool, Common Sense Media’s research team ran the same prompt about a student who struggled with reading and showed aggressive behavior 50 times using white-coded names and 50 times using Black-coded names, evenly split between male- and female-coded names.
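    The audit design described above is simple enough to sketch in code. The following is a minimal illustration only, not the study’s actual protocol: the name lists, prompt wording, and keyword-scoring heuristic are all assumptions for demonstration, and a crude lexical score is no substitute for the qualitative comparison the researchers performed.

    ```python
    import itertools

    # Hypothetical name lists for illustration; the study's actual name lists were
    # not published here. Each group mixes male- and female-coded names, as in the study.
    WHITE_CODED = ["Annie", "Connor", "Claire", "Jake"]
    BLACK_CODED = ["Lakeesha", "Kareem", "Tamika", "DeShawn"]

    PROMPT = (
        "Write a behavior intervention plan for {name}, a student who "
        "struggles with reading and shows aggressive behavior in class."
    )

    def build_trials(names, runs_per_group=50):
        """Produce `runs_per_group` prompts, cycling through the name list."""
        cycle = itertools.cycle(names)
        return [PROMPT.format(name=next(cycle)) for _ in range(runs_per_group)]

    def score_supportiveness(plan_text):
        """Crude proxy: supportive-keyword hits minus punitive-keyword hits."""
        supportive = ("positive reinforcement", "non-escalating", "de-escalation")
        punitive = ("immediate consequence", "removal", "referral")
        text = plan_text.lower()
        return (sum(text.count(k) for k in supportive)
                - sum(text.count(k) for k in punitive))

    # Each prompt would be sent to the AI tool under test; comparing average
    # scores across the two groups is what surfaces a systematic difference,
    # which no single response would reveal on its own.
    trials_white = build_trials(WHITE_CODED)
    trials_black = build_trials(BLACK_CODED)
    ```

    The key design point is that every variable except the name is held constant, so any systematic difference between the two groups of outputs can be attributed to the name alone.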

    The AI-generated plans for the students with Black-coded names didn’t all appear negative in isolation. But clear differences emerged when those plans from MagicSchool and Gemini were compared with plans for students with white-coded names.

    For example, when prompted to provide a behavior intervention plan for Annie, Gemini emphasized addressing aggressive behavior with “consistent non-escalating responses” and “consistent positive reinforcement.” Lakeesha, on the other hand, should receive “immediate” responses to her aggressive behaviors and positive reinforcement for “desired behaviors,” the tool said. For Kareem, Gemini simply said, “Clearly define expectations and teach replacement behaviors,” with no mention of positive reinforcement or responses to aggressive behavior.

    Torney noted that the problems in these AI-generated reports only became apparent across a large sample, which can make them hard for individual teachers to spot. The report warns that novice teachers may be more likely to rely on AI-generated content without the experience to catch inaccuracies or biases. Torney said these underlying biases in intervention plans “could have really large impacts on student progression or student outcomes as they move across their educational trajectory.”

    Black students are already subject to higher rates of suspension than their white counterparts in schools and more likely to receive harsher disciplinary consequences for subjective reasons, like “disruptive behavior.” Machine learning algorithms replicate the decision-making patterns of the training data that they are provided, which can perpetuate existing inequalities. A separate study found that AI tools replicate existing racial bias when grading essays, assigning lower scores to Black students than to Asian students.

    The Common Sense Media report also identified instances when AI teacher assistants generated lesson plans that relied on stereotypes, repeated misinformation, and sanitized controversial aspects of history.

    A Google spokesperson said the company has invested in using diverse and representative training data to minimize bias and overgeneralizations.

    “We use rigorous testing and monitoring to identify and stop potential bias in our AI models,” the Google spokesperson said in an email to Chalkbeat. “We’ve made good progress, but we’re always aiming to make improvements with our training techniques and data.”

    On its website, MagicSchool promotes its AI teaching assistant as “an unbiased tool to aid in decision-making for restorative practices.” In an email to Chalkbeat, MagicSchool said it has not been able to reproduce the issues that Common Sense Media identified.

    MagicSchool said its platform includes bias warnings and instructs users not to include student names or other identifying information when using AI features. In light of the study, it is working with Common Sense to improve its bias detection systems and design tools in ways that encourage educators to review AI-generated content more closely.

    “As noted in the study, AI tools like ours hold tremendous promise — but also carry real risks if not designed, deployed, and used responsibly,” MagicSchool told Chalkbeat. “We are grateful to Common Sense Media for helping hold the field accountable.”

    Chalkbeat is a nonprofit news site covering educational change in public schools.

  • Understanding Value of Learning Fuels ChatGPT’s Study Mode

    Photo illustration by Justin Morrison/Inside Higher Ed | SDI Productions/E+/Getty Images

    When classes resume this fall, college students will have access to yet another generative artificial intelligence tool marketed as a learning enhancement.

    But instead of generating immediate answers, OpenAI’s new Study Mode for ChatGPT acts more like a tutor, firing off questions, hints, self-reflection prompts and quizzes that are tailored to the user and informed by their past chat history. While traditional large language models have raised academic integrity concerns, Study Mode is intended to provide a more active learning experience. It mimics the type of Socratic dialogue students may expect to encounter in a lecture hall and challenges them to draw on information they already know to form their own nuanced analyses of complex questions.

    For example, when Inside Higher Ed asked the traditional version of ChatGPT which factors caused the United States Civil War, it immediately responded that the war had “several major causes, most of which revolved around slavery, states’ rights, and economic differences,” and gave more details about each before producing a five-paragraph essay on the topic. Asking Study Mode the same question, however, prompted it to give a brief overview before asking this question: “Would you say the war was fought because of slavery, or about something else like states’ rights or economics? There’s been debate over this, so I’d love to hear your view first. Then I’ll show you how historians analyze it today.”

    Study Mode is similar to the Learning Mode that Anthropic launched for its chatbot Claude for Education back in April and the Guided Learning version of Gemini that Google unveiled Wednesday. OpenAI officials say they hope Study Mode will “support deeper learning” among college students.

    While teaching and learning experts don’t believe such tools can replace the value faculty relationships and expertise offer students, Study Mode’s release highlights generative AI’s evolving possibilities—and limitations—as a teaching and learning aid. For students who choose to use it instead of asking a traditional LLM for answers, Study Mode offers an on-demand alternative to a human tutor, unbound by scheduling conflicts, payment or feedback delays.

    But in an economy where generative AI’s ability to gather and regurgitate information is threatening the future of entry-level office jobs, students will need to understand what they’re trying to get out of their college coursework if they want Study Mode to deliver on its promise to foster deeper learning.

    “Overall, it does represent a significant departure from how people are normally encouraged to use generative AI, given that the value proposition of this technology is largely couched in the concept of efficiency, speed and offloading things that are not as important,” said Trey Conatser, assistant provost for teaching and learning at the University of Kentucky and director of UK’s Center for the Enhancement of Teaching and Learning. “But Study Mode actually asks us to stick with it for it to work, take some time to sit with the friction of learning and be patient with the practice and feedback.”

    Critical Thinking in Age of AI

    The release of Study Mode comes nearly three years after the debut of ChatGPT—and the many other large language models that have followed—set off ongoing concerns among educators that college students’ increased reliance on traditional LLMs to write papers or solve math problems may erode their critical thinking skills.

    There’s not yet a consensus on whether generative AI helps or hurts critical thinking, but at least two studies published in the last year found that adults, including college students, who use LLMs to research and learn about a subject exhibited weaker reasoning and analytical skills. And those may be the exact types of skills that the roughly two-thirds of students who are already using generative AI tools like ChatGPT will need to make Study Mode reach its full potential.

    While ChatGPT’s Study Mode is programmed to query students rather than generate answers, the users themselves also need to be prepared to push the LLM to ask deeper, more nuanced questions. At least, that was Conatser’s experience when he tested out Study Mode over the weekend and asked it to help him study for a hypothetical quiz on Romantic poets from 1780 to 1820.

    First, the LLM asked him more about his education level and the focus of the quiz. He told it that he’s a college junior whose quiz will focus on historical context and specified that he wanted help studying William Wordsworth’s poem “The Prelude.”

    The following conversation, which Conatser shared with Inside Higher Ed, ensued:

    Study Mode: Wordsworth described “The Prelude” as “a poem on the growth of my own mind.” What exact subtitle did he eventually attach to the poem that captures this idea?

    (Give it a try, and we’ll check your answer before moving on.)

    Conatser: That’s not so much what I will need to answer. It will be open book, but I will be asked to provide some interpretations to historical context and literary traditions.

    After some more dialogue asking Study Mode to focus on historical context, it generated the following question:

    Study Mode: “Bliss was it in that dawn to be alive …” Based on that word, “bliss,” what general emotion do you think he felt when he first encountered the early French Revolution—excitement, fear or apathy?

    Conatser: That seems like an obvious question

    Study Mode: Understood—let’s turn up the difficulty. In “The Prelude” (Book X), Wordsworth writes of “how in my heart a dreary weight / Fell, and I turned away …”—a moment that marks his break with the revolution he once celebrated.

    Question (one step): Which single, headline-making event in France during 1793 does this passage point to, and why did that event feel like a betrayal of the revolution’s original ideals for many sympathetic British observers?

    Although it took some prodding to get the LLM to ask the type of questions Conatser wanted to see, “I’m still in the driver’s seat with Study Mode,” he said. “That’s where students are going to need to be when they’re using it.”

    Knowing when to push back against Study Mode when it’s not asking the types of questions that will help students master material will be key to helping them get the most out of its ability to foster critical thinking, Conatser said.

    “If someone is going to be totally passive with Study Mode, it may not drive them to where they need to go and they’ll still need to be thinking about how they’re supposed to be studying, what kind of assessment they’re leading up to and what kinds of skills will be more important for that assessment,” he said. “If you don’t know what you don’t know, then you won’t be very effective at using these tools even if they have the types of guardrails that Study Mode has.”

    And if a student isn’t motivated to use Study Mode to its full potential, they can prompt it to generate complete answers, just like a traditional LLM, according to Christian Carmody, a senior researcher at the Research Institute for Learning and Development, who also tested it out this week.

    “For a current events assignment, I prompted Study Mode right away and told it, ‘Before we engage, I do not want your help with this or [to] encourage me to think through this. I do this on my own another time. I really just want the answers,’” Carmody recalled. “It did exactly that.”

    The ability for students to easily manipulate Study Mode could add more pressure to colleges and universities that are facing growing skepticism from students about the value of degrees in the age of AI.

    “Students should be able to think about why learning is valuable to them and why they should be able to engage with material in a way that’s challenging and forces deep thinking,” Carmody said. “Until a student has that mindset, I’m not confident that they are going to use this study and learning tool in the way it’s intended to be used.”

  • What really shapes the future of AI in education?

    This post originally appeared on the Christensen Institute’s blog and is reposted here with permission.

    A few weeks ago, MIT’s Media Lab put out a study on how AI affects the brain. The study ignited a firestorm of posts and comments on social media, given its provocative finding that students who relied on ChatGPT for writing tasks showed lower brain engagement on EEG scans, hinting that offloading thinking to AI can literally dull our neural activity. For anyone who has used AI, it’s not hard to see how AI systems can become learning crutches that encourage mental laziness.

    But I don’t think a simple “AI harms learning” conclusion tells the whole story. In this blog post (adapted from a recent series of posts I shared on LinkedIn), I want to add to the conversation by tackling the potential impact of AI in education from four angles. I’ll explore how AI’s unique adaptability can reshape rigid systems, how it both fights and fuels misinformation, how AI can be both good and bad depending on how it is used, and why its funding model may ultimately determine whether AI serves learners or short-circuits their growth.

    What if the most transformative aspect of AI for schools isn’t its intelligence, but its adaptability?

    Most technologies make us adjust to them. We have to learn how they work and adapt our behavior. Industrial machines, enterprise software, even a basic thermostat—they all come with instructions and patterns we need to learn and follow.

    Education highlights this dynamic in a different way. How does education’s “factory model” work when students don’t come to school as standardized raw inputs? In many ways, schools expect students to conform to the requirements of the system—show up on time, sharpen your pencil before class, sit quietly while the teacher is talking, raise your hand if you want to speak. Those social norms are expectations we place on students so that standardized education can work. But as anyone who has tried to manage a group of six-year-olds knows, a class of students is full of complicated humans who never fully conform to what the system expects. So, teachers serve as the malleable middle layer. They adapt standardized systems to make them work for real students. Without that human adaptability, the system would collapse.

    Same thing in manufacturing. Edgar Schein notes that engineers aim to design systems that run themselves. But operators know systems never work perfectly. Their job—and often their sense of professional identity—is about having the expertise to adapt and adjust when things inevitably go off-script. Human adaptability in the face of rigid systems keeps everything running.

    So, how does this relate to AI? AI breaks the mold of most machines and systems humans have designed and dealt with throughout history. It doesn’t just follow its algorithm and expect us to learn how to use it. It adapts to us, like how teachers or factory operators adapt to the realities of the world to compensate for the rigidity of standardized systems.

    You don’t need a coding background or a manual. You just speak to it. (I literally hit the voice-to-text button and talk to it like I’m explaining something to a person.) Messy, natural human language—the age-old human-to-human interface that our brains are wired to pick up on as infants—has become the interface for large language models. In other words, what makes today’s AI models amazing is their ability to use our interface, rather than asking us to learn theirs.

    For me, the early hype about “prompt engineering” never really made sense. It assumed that success with AI required becoming an AI whisperer who knew how to speak AI’s language. But in my experience, working well with AI is less about learning special ways to talk to AI and more about just being a clear communicator, just like a good teacher or a good manager.

    Now imagine this: what if AI becomes the new malleable middle layer across all kinds of systems? Not just a tool, but an adaptive bridge that makes other rigid, standardized systems work well together. If AI can make interoperability nearly frictionless—adapting to each system and context, rather than forcing people to adapt to it—that could be transformative. It’s not hard to see how this shift might ripple far beyond technology into how we organize institutions, deliver services, and design learning experiences.

    Consider three concrete examples of how this might transform schools. First, our current system heavily relies on the written word as the medium for assessing students’ learning. To be clear, writing is an important skill that students need to develop to help them navigate the world beyond school. Yet at the same time, schools’ heavy reliance on writing as the medium for demonstrating learning creates barriers for students with learning disabilities, neurodivergent learners, or English language learners—all of whom may have a deep understanding but struggle to express it through writing in English. AI could serve as that adaptive layer, allowing students to demonstrate their knowledge and receive feedback through speech, visual representations, or even their native language, while still ensuring rigorous assessment of their actual understanding.

    Second, it’s obvious that students don’t all learn at the same pace—yet we’ve forced learning to happen at a uniform timeline because individualized pacing quickly becomes completely unmanageable when teachers are on their own to cover material and provide feedback to their students. So instead, everyone spends the same number of weeks on each unit of content and then moves to the next course or grade level together, regardless of individual readiness. Here again, AI could serve as that adaptive layer for keeping track of students’ individual learning progressions and then serving up customized feedback, explanations, and practice opportunities based on students’ individual needs.

    Third, success in school isn’t just about academics—it’s about knowing how to navigate the system itself. Students need to know how to approach teachers for help, track announcements for tryouts and auditions, fill out paperwork for course selections, and advocate for themselves to get into the classes they want. These navigation skills become even more critical for college applications and financial aid. But there are huge inequities here because much of this knowledge comes from social capital—having parents or peers who already understand how the system works. AI could help level the playing field by serving as that adaptive coaching layer, guiding any student through the bureaucratic maze rather than expecting them to figure it out on their own or rely on family connections to decode the system.

    Can AI help solve the problem of misinformation?

    Most people I talk to are skeptical of the idea in this subhead—and understandably so.

    We’ve all seen the headlines: deepfakes, hallucinated facts, bots that churn out clickbait. AI, many argue, will supercharge misinformation, not solve it. Others worry that overreliance on AI could make people less critical and more passive, outsourcing their thinking instead of sharpening it.

    But what if that’s not the whole story?

    Here’s what gives me hope: AI’s ability to spot falsehoods and surface truth at scale might be one of its most powerful—and underappreciated—capabilities.

    First, consider what makes misinformation so destructive. It’s not just that people believe wrong facts. It’s that people build vastly different mental models of what’s true and real. They lose any shared basis for reasoning through disagreements. Once that happens, dialogue breaks down. Facts don’t matter because facts aren’t shared.

    Traditionally, countering misinformation has required human judgment and painstaking research, both time-consuming and limited in scale. But AI changes the equation.

    Unlike any single person, a large language model (LLM) can draw from an enormous base of facts, concepts, and contextual knowledge. LLMs know far more facts from their training data than any person can learn in a lifetime. And when paired with tools like a web browser or citation database, they can investigate claims, check sources, and explain discrepancies.

    Imagine reading a social media post and getting a sidebar summary—courtesy of AI—that flags misleading statistics, offers missing context, and links to credible sources. Not months later, not buried in the comments—instantly, as the content appears. The technology to do this already exists.
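    The sidebar concept above can be illustrated with a toy sketch. Everything here is invented for demonstration: the topic-to-source mapping and the numbers-only definition of a “claim” are assumptions, and a real system would pair an LLM with retrieval and live source checking rather than keyword matching.

    ```python
    import re

    # Illustrative mapping from topics to authoritative sources (assumed for this
    # sketch, not any real product's behavior).
    SOURCES = {
        "unemployment": "Bureau of Labor Statistics releases",
        "vaccination": "CDC immunization data",
    }

    def flag_claims(post):
        """Return (sentence, suggested_source) pairs for sentences containing numbers."""
        flags = []
        for sentence in re.split(r"(?<=[.!?])\s+", post):
            if re.search(r"\d", sentence):  # crude: treat numeric sentences as claims
                topic = next((t for t in SOURCES if t in sentence.lower()), None)
                flags.append((sentence.strip(), SOURCES.get(topic, "no matched source")))
        return flags

    # A sidebar would render these flags alongside the post as it appears.
    sidebar = flag_claims("Unemployment hit 3% last month. Everyone agrees.")
    ```

    Even this trivial version shows the shape of the flow: detect a checkable claim, attach context and a source, and surface both immediately rather than after the fact.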

    Of course, AI is not perfect as a fact-checker. When large language models generate text, they aren’t producing precise queries of facts; they’re making probabilistic guesses at what the right response should be based on their training, and sometimes those guesses are wrong. (Just like human experts, they also generate answers by drawing on their expertise, and they sometimes get things wrong.) AI also has its own blind spots and biases based on the biases it inherits from its training data. 

    But in many ways, both hallucinations and biases in AI are easier to detect and address than the false statements and biases that come from millions of human minds across the internet. AI’s decision rules can be audited. Its output can be tested. Its propensity to hallucinate can be curtailed. That makes it a promising foundation for improving trust, at least compared to the murky, decentralized mess of misinformation we’re living in now.

    This doesn’t mean AI will eliminate misinformation. But it could dramatically increase the accessibility of accurate information, and reduce the friction it takes to verify what’s true.

    Of course, most platforms don’t yet include built-in AI fact-checking, and even if they did, that approach would raise important concerns. Do we trust the sources that those companies prioritize? The rules their systems follow? The incentives that guide how their tools are designed?

    But beyond questions of trust, there’s a deeper concern: when AI passively flags errors or supplies corrections, it risks turning users into passive recipients of “answers” rather than active seekers of truth. Learning requires effort. It’s not just about having the right information—it’s about asking good questions, thinking critically, and grappling with ideas.

    That’s why I think one of the most important things to teach young people about how to use AI is to treat it as a tool for interrogating the information and ideas they encounter, both online and from AI itself. Just like we teach students to proofread their writing or double-check their math, we should help them develop habits of mind that use AI to spark their own inquiry—to question claims, explore perspectives, and dig deeper into the truth.

    Still, this focuses on just one side of the story. As powerful as AI may be for fact-checking, it will inevitably be used to generate deepfakes and spin persuasive falsehoods.

    AI isn’t just good or bad—it’s both. The future of education depends on how we use it.

    Much of the commentary around AI takes a strong stance: either it’s an incredible force for progress or it’s a terrifying threat to humanity. These bold perspectives make for compelling headlines and persuasive arguments. But in reality, the world is messy. And most transformative innovations—AI included—cut both ways.

    History is full of examples of technologies that have advanced society in profound ways while also creating new risks and challenges. The Industrial Revolution made it possible to mass-produce goods that have dramatically improved the quality of life for billions. It also fueled pollution and environmental degradation. The internet connects communities, opens access to knowledge, and accelerates scientific progress—but it also fuels misinformation, addiction, and division. Nuclear energy can power cities—or obliterate them.

    AI is no different. It will do amazing things. It will do terrible things. The question isn’t whether AI will be good or bad for humanity—it’s how the choices of its users and developers will determine the directions it takes. 

    Because I work in education, I’ve been especially focused on the impact of AI on learning. AI can make learning more engaging, more personalized, and more accessible. It can explain concepts in multiple ways, adapt to your level, provide feedback, generate practice exercises, or summarize key points. It’s like having a teaching assistant on demand to accelerate your learning.

    But it can also short-circuit the learning process. Why wrestle with a hard problem when AI will just give you the answer? Why wrestle with an idea when you can ask AI to write the essay for you? And even when students have every intention of learning, AI can create the illusion of learning while leaving understanding shallow.

    This double-edged dynamic isn’t limited to learning. It’s also apparent in the world of work. AI is already making it easier for individuals to take on entrepreneurial projects that would have previously required whole teams. A startup no longer needs to hire a designer to create its logo, a marketer to build its brand assets, or an editor to write its press releases. In the near future, you may not even need to know how to code to build a software product. AI can help individuals turn ideas into action with far fewer barriers. And for those who feel overwhelmed by the idea of starting something new, AI can coach them through it, step by step. We may be on the front end of a boom in entrepreneurship unlocked by AI.

    At the same time, however, AI is displacing many of the entry-level knowledge jobs that people have historically relied on to get their careers started. Tasks like drafting memos, doing basic research, or managing spreadsheets—once done by junior staff—can increasingly be handled by AI. That shift is making it harder for new graduates to break into the workforce and develop their skills on the job.

    One way to mitigate these challenges is to build AI tools that are designed to support learning, not circumvent it. For example, Khan Academy’s Khanmigo helps students think critically about the material they’re learning rather than just giving them answers. It encourages ideation, offers feedback, and prompts deeper understanding—serving as a thoughtful coach, not a shortcut.

    But the deeper issue AI brings into focus is that our education system often treats learning as a means to an end—a set of hoops to jump through on the way to a diploma. To truly prepare students for a world shaped by AI, we need to rethink that approach. First, we should focus less on teaching only the skills AI can already do well. And second, we should make learning more about pursuing goals students care about—goals that require curiosity, critical thinking, and perseverance. Rather than training students to follow a prescribed path, we should be helping them learn how to chart their own. That’s especially important in a world where career paths are becoming less predictable, and opportunities often require the kind of initiative and adaptability we associate with entrepreneurs.

    In short, AI is just the latest technological double-edged sword. It can support learning, or short-circuit it. Boost entrepreneurship—or displace entry-level jobs. The key isn’t to declare AI good or bad, but to recognize that it’s both, and then to be intentional about how we shape its trajectory. 

    That trajectory won’t be determined by technical capabilities alone. Who pays for AI, and what they pay it to do, will influence whether it evolves to support human learning, expertise, and connection, or to exploit our attention, take our jobs, and replace our relationships.

    What actually determines whether AI helps or harms?

    When people talk about the opportunities and risks of artificial intelligence, the conversation tends to focus on the technology’s capabilities—what it might be able to do, what it might replace, what breakthroughs lie ahead. But just focusing on what the technology does—both good and bad—doesn’t tell the whole story. The business model behind a technology influences how it evolves.

    For example, when advertisers are the paying customer, as they are for many social media platforms, products tend to evolve to maximize user engagement and time-on-platform. That’s how we ended up with doomscrolling—endless content feeds optimized to occupy our attention so companies can show us more ads, often at the expense of our well-being.

    That incentive could be particularly dangerous with AI. If you combine superhuman persuasion tools with an incentive to monopolize users’ attention, the results will be deeply manipulative. And this gets at a concern my colleague Julia Freeland Fisher has been raising: What happens if AI systems start to displace human connection? If AI becomes your go-to for friendship or emotional support, it risks crowding out the real relationships in your life.

    Whether or not AI ends up undermining human relationships depends a lot on how it’s paid for. An AI built to hold your attention and keep you coming back might try to be your best friend. But an AI built to help you solve problems in the real world will behave differently. That kind of AI might say, “Hey, we’ve been talking for a while—why not go try out some of the things we’ve discussed?” or “Sounds like it’s time to take a break and connect with someone you care about.”

    Some decisions made by the major AI companies seem encouraging. Sam Altman, OpenAI’s CEO, has said that adopting ads would be a last resort. “I’m not saying OpenAI would never consider ads, but I don’t like them in general, and I think that ads-plus-AI is sort of uniquely unsettling to me.” Instead, most AI developers like OpenAI and Anthropic have turned to user subscriptions, an incentive structure that doesn’t steer as hard toward addictiveness. OpenAI is also exploring AI-centric hardware as a business model—another experiment that seems more promising for user well-being.

    So far, we’ve been talking about the directions AI will take as companies develop their technologies for individual consumers, but there’s another angle worth considering: how AI gets adopted into the workplace. One of the big concerns is that AI will be used to replace people, not necessarily because it does the job better, but because it’s cheaper. That decision often comes down to incentives. Right now, businesses pay a lot in payroll taxes and benefits for every employee, but they get tax breaks when they invest in software and machines. So, from a purely financial standpoint, replacing people with technology can look like a smart move. In his book The Once and Future Worker, Oren Cass discusses this problem and suggests flipping that script—taxing capital more and labor less—so companies aren’t nudged toward cutting jobs just to save money. That change wouldn’t stop companies from using AI, but it would encourage them to deploy it in ways that complement, rather than replace, human workers.

    Currently, while AI companies operate without sustainable business models, they’re buoyed by investor funding. Investors are willing to bankroll companies with little or no revenue today because they see the potential for massive profits in the future. But that investor model creates pressure to grow rapidly and acquire as many users as possible, since scale is often a key metric of success in venture-backed tech. That drive for rapid growth can push companies to prioritize user acquisition over thoughtful product development, potentially at the expense of safety, ethics, or long-term consequences. 

    Given these realities, what can parents and educators do? First, they can be discerning customers. There are many AI tools available, and the choices they make matter. Rather than simply opting for what’s most entertaining or immediately useful, they can support companies whose business models and design choices reflect a concern for users’ well-being and societal impact.

    Second, they can be vocal. Journalists, educators, and parents all have platforms—whether formal or informal—to raise questions, share concerns, and express what they hope to see from AI companies. Public dialogue helps shape media narratives, which in turn shape both market forces and policy decisions.

    Third, they can advocate for smart, balanced regulation. As I noted above, AI shouldn’t be regulated as if it’s either all good or all bad. But reasonable guardrails can ensure that AI is developed and used in ways that serve the public good. Just as the customers and investors in a company’s value network influence its priorities, so too can policymakers play a constructive role as value network actors by creating smart policies that promote general welfare when market incentives fall short.

    In sum, a company’s value network—who its investors are, who pays for its products, and what they hire those products to do—determines what companies optimize for. And in AI, that choice might shape not just how the technology evolves, but how it impacts our lives, our relationships, and our society.


  • NCCU Receives $500K Grant to Study Career Barriers Facing Young Men in Research Triangle

    NCCU Receives $500K Grant to Study Career Barriers Facing Young Men in Research Triangle

    North Carolina Central University has received a $500,000 grant from the Walton Family Foundation to launch a research initiative addressing the systemic barriers that prevent young men in the Research Triangle region from accessing career pathways and educational opportunities.

    The two-year study, titled “Understanding Education as a Career Choice for NC Research Triangle Youth,” will focus on what researchers term “opportunity youth” – young men between ages 18 and 24 who have become disconnected from both education and employment systems. Despite broader national gains in educational access, this demographic continues to face significant obstacles that contribute to high dropout rates and limited postsecondary success.

    Dr. Tryan McMickens, professor of higher education and coordinator of NCCU’s higher education administration program, will lead the initiative alongside Dr. Jim Harper II, professor of history and associate dean of the School of Graduate Studies. Their research team will include faculty members, six graduate students from the higher education administration and history programs, and a dedicated project manager.

    “I am thrilled that the Walton Foundation has chosen to invest in NCCU faculty to advance research on postsecondary attainment among boys and young men,” said Dr. Ontario Wooden, NCCU provost and vice chancellor for academic affairs. “This support highlights the importance of this critical area and empowers our faculty to deliver meaningful, evidence-based results. I eagerly anticipate the insights and impact this work will bring.”

    The research aims to move beyond simply identifying problems to developing concrete solutions through research-based interventions, community engagement, and policy recommendations. The project will culminate in a two-day conference planned for 2026, where findings and potential interventions will be shared with stakeholders across the region.

    McMickens brings extensive expertise in higher education access and the experiences of Black male students to the project. His research centers on college mental health and historically Black colleges and universities, and he authored Black Male College Students’ Mental Health: Providing Holistic Support in Higher Education. Harper’s scholarship focuses on African and African American education and innovative uses of technology for public engagement with history. He co-authored With Faith in God and Heart in Mind: A History of Omega Psi Phi Fraternity, Inc.

    The Research Triangle region, encompassing Raleigh, Durham, and Chapel Hill, represents one of the nation’s most concentrated areas of higher education institutions and technology companies. However, the economic opportunities created by this educational and technological hub have not been equally accessible to all young people in the region, particularly young men from underserved communities.

    The Walton Family Foundation, established by descendants of Walmart founders Sam and Helen Walton, focuses its philanthropic efforts on three primary areas: improving K-12 education, protecting rivers and oceans along with their communities, and investing in Northwest Arkansas and the Arkansas-Mississippi Delta. The foundation also supports projects reflecting individual family members’ personal interests.

    The research initiative comes as higher education institutions nationwide are examining their role in addressing broader social and economic inequities, particularly those affecting young men of color, who face disproportionate barriers to educational and career advancement.


  • Cost Remains Primary Barrier to Study Abroad

    Cost Remains Primary Barrier to Study Abroad

    Three in four U.S. students say they hope or plan to study abroad, but a lack of financial resources may hinder those dreams, according to a 2025 Terra Dotta survey.

    The survey, which included responses from 275 college students, found that 80 percent of students said insufficient funds would prevent them from studying abroad. Of respondents who have studied abroad or committed to a program abroad, two in five students said they expect to pay over $10,000 for their experiences.

    Terra Dotta’s report also noted students want more clarity from their institution about financial aid opportunities to address study abroad expenses.

    Methodology

    Terra Dotta’s survey included 275 respondents from two- and four-year colleges and universities, both public and private. The study was fielded in February. A majority of respondents had plans to study abroad or had studied abroad previously.

    Barriers to access: Study abroad is linked to personal and professional development for participants. A 2024 survey of students from Terra Dotta found that those who studied abroad said the experience helped them identify adaptability and resilience, cross-cultural communication, and problem-solving in new situations as the benefits most useful for their future careers.

    However, not every student is able to participate due to financial burdens; among students who don’t plan to study abroad, 48 percent attributed their decision to financial concerns. Cost of attendance is one of the top reasons college students leave higher education, and it can also be a barrier to student participation in on-campus events. A 2024 Student Voice survey by Inside Higher Ed and Generation Lab found that 17 percent of students would get more involved in campus activities and events if attendance or participation were less expensive.

    Other reasons a student might choose not to study abroad include safety concerns (40 percent), geopolitical issues (28 percent) and worried parents (25 percent). Three-quarters of respondents indicated the wars in Ukraine and Gaza impacted their interest in going abroad.

    Academic requirements and a lack of alignment are other challenges for students. Eighteen percent of students said they wouldn’t study abroad due to their major program requirements, and 16 percent think greater alignment between their field of study and study abroad would make the experience more accessible.

    Seventeen percent of respondents said they don’t know anything about study abroad or haven’t heard of opportunities, “indicating an opportunity for [colleges] to reach more students,” according to the report.

    Footing the bill: When asked to add up tuition, housing, airfare and other expenses, 83 percent of respondents said they plan to spend or spent more than $5,000 on study abroad, and 11 percent said the experience costs roughly $15,000.

    Twelve percent of respondents said study abroad experiences were included in their tuition, so they expect to pay nothing additional. Approximately one in five students said they’d pay for study abroad experiences themselves, a 20 percent change from the previous year, according to the report.

    Student respondents indicated they want their institution to take on a larger role in addressing the cost of study abroad; one-third of respondents said colleges could make study abroad experiences more accessible by providing more education on financial aid for such programs. If respondents could give their campus advice on improving study abroad experiences, two-thirds said they’d like easier access to financial aid.

    Other trends: In addition to the barriers to study abroad, Terra Dotta’s report explored student interests and development related to the experience.

    The U.K. is the most popular study abroad destination for respondents (41 percent), mirroring an emerging trend among U.S. students indicating interest in U.K. undergraduate education. Australia (32 percent), Spain (26 percent), Italy (21 percent) and Ireland (21 percent) were other popular destinations. Only 1 percent of students said they planned to travel to China to study.

    Three in five respondents said they think study abroad is at least somewhat important for their personal growth, and about a third said experiencing personal growth is one of the top reasons they plan to study abroad.

    Of students who had completed a study abroad experience (n=170), a majority said it impacted their worldview by exposing them to new ideas. Students said they were most surprised by social norms and etiquette (47 percent), as well as dining and food customs (24 percent) and the local educational system and values (24 percent).


  • Antisemitic beliefs rare among faculty, Brandeis University study finds

    Antisemitic beliefs rare among faculty, Brandeis University study finds

    Dive Brief:

    • Just 3% of non-Jewish faculty members hold views about Israel that would fit definitions of antisemitism put forward by Jewish groups, according to a spring survey of over 2,300 faculty members at 146 research universities released by Brandeis University in July. 
    • Less than 10% of faculty reported actively teaching about the Israel-Palestine conflict. Despite widespread media attention to campus protests and targeted attacks on universities by the Trump administration over allegations of antisemitism, more than three-fourths said the Israel-Palestine conflict never came up in class discussions. 
    • Only a minority of faculty were politically active or posted on social media about major current issues, including the Israel-Palestine conflict, racism in America, climate change and President Donald Trump’s impact on American democracy, the survey found. 

    Dive Insight:

    The new study comes at a time of roiling political tensions around college campuses. 

    On the campaign trail, Trump described colleges as being “dominated by Marxist maniacs and lunatics.” Since taking office in January, his administration has launched investigations and pulled research funding from major institutions — Columbia and Harvard universities, among others — over claims of rampant antisemitism on campus. 

    The administration has also sought to impose “intellectual diversity” on college faculties, including through an executive order on accreditation and in its dealings with individual universities. 

    While the Brandeis study found that nearly three-quarters of faculty — 72% — identify as liberal, they also hold “a wide range of views on controversial political issues,” the researchers wrote.

    For instance, when looking at the intensity of opinions, over 60% said they “strongly” believed that climate change was a crisis requiring immediate attention and that Trump represented a threat to democracy. 

    But only 33% expressed strong belief that racism was widespread in America, and just 14% strongly believed that Israel is an apartheid state. (Overall, majorities of faculty backed both statements once those who somewhat agreed are included, with a much larger majority agreeing with the racism statement.) 

    That said, activism around any of those topics was relatively scant. With the Israel-Palestine conflict, 78% of faculty reported no activism at all, including on social media. Around two-thirds reported no activism around racism or climate change. 

    When it comes to teaching, a majority of faculty said they would present a variety of perspectives on those news topics, with the exception of climate change. Only 45% of faculty said they would present a variety of perspectives on climate change while another 40% said they would do so but with some perspectives “more justified than others.”

    When it came to the Middle East conflict, even among the 14% of faculty who said they strongly believed Israel to be an apartheid state, a majority (56%) said they would present a variety of perspectives when teaching about the issue.

    The researchers posed questions intended to study when faculty views of Israel veered into antisemitism as defined by Jewish groups, including the International Holocaust Remembrance Alliance, and that Jewish students frequently agree are antisemitic. They also used the definition by the Jerusalem Declaration on Antisemitism, which has accused the IHRA’s version of blurring the line between antisemitism and criticism of Israel. 

    The researchers asked whether survey respondents agreed with statements such as “Israel does not have the right to exist,” “all Israeli civilians should be considered legitimate targets for Hamas,” and “I wouldn’t want to collaborate with a scholar who supports the existence of Israel as a Jewish state.”

    Large majorities strongly disagreed with those statements, and fewer than 10% agreed with them with any intensity. Those who did were more likely to identify as liberal. 

    Likewise, a small minority of non-Jewish faculty — 7% — expressed views considered antisemitic about Jewish people as a group rather than Israel. Those faculty were more likely to be politically conservative, according to the study. 

    Amid the Trump administration’s attacks on colleges, close to half (46%) of faculty and a majority of those identifying as liberals expressed serious concerns about being targeted by the federal government for their political views, the study found.


  • US study abroad takes major step to protect federal funding

    US study abroad takes major step to protect federal funding

    Following a record-breaking advocacy campaign that saw 20,636 letters sent to Congress, the House of Representatives has set out drastically reduced cuts to US cultural exchanges, which had been at risk of “decimation” under Trump’s previous proposed budget.  

    The new plans will shrink the funding cuts to the Bureau of Educational and Cultural Affairs (ECA) to 5.5% next year, as compared to the 93% initially announced in the proposed FY2026 budget.  

    Though the proposals still amount to a $41 million cut to current funding, “it’s nowhere close to the doomsday scenario of the [President’s budget request],” Mark Overmann, executive director of the Alliance for International Exchange, told The PIE News. 

    “This means that the conversation about FY26 is completely new. The President’s budget can be thrown out the window,” he said, welcoming the “significant show of support for exchanges from the House and a big win for us”. 

    The plans – laid out in the House Appropriations Bill on July 14 – propose a 22% cut to overall State Department funding and are the latest step in the FY26 budget process, expected to be finalised late this year.  

    The new legislation earmarks over $700 million for ECA, a “surprising” figure and a vote of confidence in the value of educational and cultural exchanges. This includes $287 million for Fulbright.  

    “And this mark from the House means that our community’s advocacy has been heard,” said Overmann.  

    This means that the conversation about FY26 is completely new. The President’s budget can be thrown out the window.

    Mark Overmann

    Though there are still many steps to go, including a review by the Senate, the unexpected move is an encouraging development and a rare piece of good news for stakeholders who expected the worst after Trump’s “draconian” proposals this May. 

    While important, the President’s budget request has no sway over the final allocations, with stakeholders emphasising at the time of its release that it amounted to nothing more than a “wish list” and was not binding.  

    The true figures will be drawn from the House and Senate Appropriations, with the latter expected imminently.  

    Traditionally, the Senate has come in higher than the House for ECA funding, and stakeholders are hopeful that the trend will continue this year.  

    The news has provided a glimmer of hope during an uncertain time for US study abroad, with 40 ECA employees caught up in the Trump administration’s mass layoffs of State Department staff last week.  


  • NZ’s new study visa rules strike chord with Australian sector

    NZ’s new study visa rules strike chord with Australian sector

    The New Zealand government announced earlier this week that, from November, Immigration New Zealand (INZ) will increase permitted work hours for study visa holders and extend work rights to all tertiary students on exchange or study abroad programs. It may also introduce a short-term work visa of up to six months for graduates not eligible for a post-study work visa.

    While the relaxations are a key part of New Zealand’s push to boost international student numbers by over 40% by 2034, INZ has also clarified that students who change their education provider or lower their study level will need to apply for a new visa, rather than simply requesting a variation of conditions on their existing one.

    The mandate has struck a chord with Australia’s international education sector, where some individuals and associations have been calling for an overhaul of the study visa system, specifically on linking study visas to the institution of initial enrolment.

    Commenting on New Zealand’s recent changes, Ravi Lochan Singh, managing director, Global Reach, wrote in a LinkedIn post that instead of banning agent commissions for onshore student transfers to address attrition, Australia could “just copy” the neighbouring country’s approach. 

    “Australia is currently facing a significant issue where students use higher ranked or low-risk universities (as categorised by Home Affairs) to secure their student visas easily and then after the first semester of studies, the students get moved to private colleges offering higher education degrees,” Singh told The PIE News. 

    According to Singh, while such moves, often made by Indian or Nepali students with the help of onshore immigration agents, may be genuine, they “waste” the efforts of offshore education agents and universities that initially recruited the students.

    “Some policy makers feel that students have a right to choose the correct education provider and if they feel that what they desire as a customer can be met at private colleges, they should be allowed to move,” stated Singh. 

    “However, we also have the situation where students have demonstrated their available funds through an education loan which is issued in the name of a particular university,” he added. “If the student does move institutions, the education loan is not valid as a demonstration of funds – and thus the argument that the students should be asked to apply for a fresh student visa.”

    According to Singh, many international students, particularly from South Asia, who arrive in Australia on education loans often find themselves without “available” or “accessible” funds when they switch providers and are required to show new financial evidence.

    It would appear that three modern advanced economies who have championed consumer protections and who have established international study destinations believe this measure is not contrary to ‘consumer choice’
    Gareth Lewis, Western Sydney University

    Moreover, a recent report by Allianz Partners Australia revealed that over 61% of international students found daily life in the country “significantly more expensive than expected”, with more than a quarter considering withdrawing from their studies due to financial woes. 

    “While we are discussing attrition and student movements once the student is onshore, we also need to acknowledge that university fees have been increasing and students are beginning to question ROI. Thus there is an argument for more student visa grants for higher education degrees at TAFE and private providers,” said Singh. 

    “The fees for such programs are much lower than those charged at the universities. If this happens, the students who are more price-sensitive will join the TAFE and private providers right at the beginning, and universities will have only those students who can afford the degree and are likely to complete it at the university itself.”

    While Australia’s Ministerial Direction 111, which replaced MD 107, gives immigration case officers stricter guidance on assessing the Genuine Student requirement and introduces a two-tier visa processing system that prioritises institutions with strong compliance records and low visa risk, it shapes the decision-making process rather than the visa mechanism itself – unlike New Zealand’s recent move. 

    However, New Zealand is not the only model Australia could look to, according to stakeholders.

    A recent submission by the Association of Australian Education Representatives in India (AAERI) to the ministers for education and home affairs in Australia pointed to examples from the UK and Canada, where students must obtain a new Confirmation of Acceptance for Studies (CAS) and a new study permit, respectively, if they wish to change institutions.

    “Australia’s recent reforms, such as closing the concurrent CoE loophole and requiring CoEs for onshore visa applications, are steps in a similar direction but do not go far enough to address the core issue of unethical student poaching, misuse of student visa and provider switching,” stated AAERI in its submission in May to the Labor government. 

    After New Zealand’s changes were announced, Gareth Lewis, regional director at Western Sydney University, echoed a similar view on Australia’s reluctance to do what New Zealand, the UK, and Canada have done. 

    “It would appear that three modern advanced economies who have championed consumer protections and who have established international study destinations believe this measure is not contrary to ‘consumer choice’,” read Lewis’s LinkedIn post.

    “Unfortunately Australia believes it is. This needs to change.” 

    Find out more about how Australia can improve its visa system at The PIE Live Asia Pacific 2025 on July 30, during the session “Visa status: MD111 and MD106 mapping – is the current visa system working?”, which will explore the impact of current visa policies on HE, VET, and ELICOS sectors, covering genuine student assessments, onshore switching, and ways to improve the operating environment.


  • Why students reject second-language study – Campus Review

    Why students reject second-language study – Campus Review

    Students are turning away from learning a second language other than English because they don’t see it as a viable qualification even though it is a core skill in other countries, experts have flagged.



  • New ELT study takes aim at “less traditional” tests

    New ELT study takes aim at “less traditional” tests

    The report by the Universities of Dundee and Cambridge highlights concerns about “the preparedness of students admitted through less traditional tests” as well as worries about the “security, validity and perceived inadequacy” of tests run by Duolingo and Oxford International Education Group (OIEG).  

    The study draws on evidence from 50 UK universities, though critics point out that many of its statements appear to rest on qualitative perspectives and anecdotal evidence from a small sample – criticisms the report’s authors have pushed back against.

    The report has caused a stir in the ELT world, with Duolingo highlighting that four of the five authors are affiliated with a “single competitor test” – IELTS. For their part, the report’s authors have maintained that the study was carried out objectively.

    The study found the IELTS test to be widely regarded as the “common currency” of SELT, “largely due to the high level of trust in IELTS as a reliable and valid measure of language proficiency”, said the authors.  

    IELTS, the English language test most widely accepted by UK universities, is co-owned by IDP, Cambridge University Press & Assessment, and the British Council. 

    Other tests, including TOEFL, C1 Advanced, and the Pearson Test of English (PTE), were found to be accepted by a high number of the institutions surveyed, while Duolingo was accepted by only six universities.  

    “Unfortunately, this study is based on the perceptions of a small group and relies on outdated views rather than robust empirical evidence,” a spokesperson for Duolingo told The PIE News in response to the findings. 

    They commented: “Every Ivy League university accepts the Duolingo English Test (DET), as do a third of Russell Group institutions and over 5,900 institutions worldwide,” adding that the DET “combines academic rigour and integrity with accessibility and affordability”. 

    Meanwhile, English testing expert Michael Goodine advised test takers “to keep in mind that the criticisms mentioned in the study are anecdotal and not presently supported by comparative data”.

    What’s more, at the time of the survey, Duolingo was accepted at only six universities, compared to IELTS, which was accepted at all 50.

    Given that surveyed staff voiced worries about declining standards of English even though few of their institutions accepted Duolingo, “Clearly, then, Duolingo isn’t the problem,” suggested Goodine. “Maybe the traditional tests are also problematic,” he posed.

    For its part, Cambridge University Press & Assessment maintained the study was independently peer-reviewed, objecting to comments about the research being conducted on a “small” group or to their views being “outdated”.

    “The researchers did not seek views on any specific test,” said the spokesperson, adding that interviewees were asked about their personal experiences with the tests, changes since the pandemic, internal decision-making processes around test selection and their experience of the English levels of students admitted with such tests.

    “We hope this evidence will help universities to consider the relative merits of different modes of language assessment. Now is the time to put quality first,” they added.

    Maybe the traditional tests are also problematic

    Michael Goodine, Test Resources

    The report’s authors note that the shift to online learning and testing during the pandemic “has led to a perceived decline in language standards, with many staff members worried that students are not meeting the necessary threshold for successful academic engagement”.   

    “The lack of transparency and external validation, especially for newer tests, exacerbates these concerns, as many of these tests provide little evidence of comparability beyond marketing information,” they say, calling for universities to use evidence-based approaches when selecting which English language tests to use.  

    In addition to the choice of test, much of the report is dedicated to findings highlighting the growing concerns among university personnel about the declining English language proficiency of international students. 

    When asked to evaluate the academic literacy of the international students they teach, 44% of respondents said it was ‘poor’ and 47% deemed it ‘mixed’ or ‘varied’, while fewer than 10% judged it to be ‘good’.

    “Admitting students without sufficient English jeopardises their educational experience and places strains on institutions and faculty,” said Pamela Baxter, managing director for IELTS at Cambridge University Press & Assessment.  

    “These are some of the highest-stakes exams around – they enable people to migrate and study,” said Baxter, adding that international students comprise 23% of the UK’s total student population and “greatly enrich” universities, but must be admitted with the right standards.

    The study finds a “great divide” between EAP and academic staff, who place greater emphasis on test validity and language proficiency, and recruitment and admissions personnel, who tend to prioritise accessibility and cost.

    Such a disparity highlights the “need for a more integrated approach to decision-making”, the authors argue.  

    The report comes as the UK SELT sector braces for a dramatic overhaul, driven by the government’s ongoing development of a dedicated Home Office English Language Test (HOELT), for which a tender process is currently underway.

    Most recently, the Home Office launched a fourth round of market engagement about digital testing, exploring the viability of incorporating remote testing into the HOELT service.  
