Category: Artificial Intelligence

  • It’s Time to Embrace AI Literacy for Kids – The 74

    Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter

    Artificial intelligence has become an incredibly polarizing topic, with one side eager to integrate it into every aspect of life and the other side running from it as fast as they can. Is this new technology an existential threat or a transformational opportunity? According to Pew research from September, “Americans are more concerned than excited” about the proliferation of AI and want to exert more control over its use.

    About 62% of U.S. adults report interacting with AI several times a week, and adults and children alike engage with it regularly without even realizing it. Children are growing up in a world where this technology is unquestionably a part of daily life, shaping their lives in ways no one can yet fully understand. Giving them a clearer understanding of how AI works has never been more important.

    This fall, the three of us met at an event at the National Children’s Museum, which brought together technology leaders, museum educators, policymakers, teachers and academic researchers focused on guiding our kids safely and productively into our technology-driven world.

    Our key takeaway? Regardless of where you stand on this issue, a common ground must be forged now. Constructive dialogue must happen, and it needs voices from both sides to produce a healthy outcome for our children. Helping kids understand AI means being both optimistic and cautious, recognizing its promise while acknowledging its shortcomings and risks.

    What if, alongside helping our youngest learn to use AI, we placed greater emphasis on teaching them how it works? By nurturing children’s critical thinking skills, we give them the power to understand it as a tool—where it can augment human effort, and where it fails miserably.

    AI is ushering in a new wave of innovation, but it is also enabling new forms of deception and manipulation. It provides access to a wealth of knowledge and opportunities, but the resulting information overload can undermine learning, cognition, creativity and human connection.

    Society as a whole, from educational institutions to policymakers to parents at the dinner table, needs to invest in children’s AI literacy now. In doing so, we can instill some of the most important lessons: how to be creative and discerning in the world we live in, preparing children for a future full of new opportunities.

    According to the World Economic Forum’s Future of Jobs Report, employers expect that 39% of workers’ core skills will change by 2030, with technological skills gaining importance most rapidly. AI will open up new fields of biomedical research. It will help us feed our growing global population. But it will also force many of us to rethink our jobs and educational pathways.

    So, on a global scale, an investment in our children’s AI literacy not only ensures a competitive workforce but also safeguards national prosperity, security and the responsible use of powerful technologies. Whether you think AI is exciting or threatening, children must be introduced to age-appropriate concepts about it so that they can build fluency and prepare for the future.

    Another takeaway from our conversation? Adults must learn alongside — and sometimes from — our kids. As adults, we have the responsibility of fostering children’s safe use of this powerful tool. But let’s give ourselves the grace to acknowledge that we don’t understand AI either. We didn’t grow up with it, and experts and technology leaders believe that generative AI has surpassed the understanding of its creators.

    There is a window of opportunity to bring everyone to the table. As parents, educators and lifelong learners, we need to have deeper conversations about AI — especially how it shapes children’s learning, development and daily lives. We don’t have to fully comprehend it or agree with all its intended uses; we just have to be open to talking about it and taking action. By approaching this with curiosity, we can thoughtfully consider appropriate uses and guardrails for kids—something we didn’t do early enough when America’s children first began using online tools like social media.

    There are organizations starting to address AI literacy and technology education for families. Sesame Street and Google collaborated to release a series on the healthy use of digital technology. Common Sense Media, with support from the National Parents Union and EDSAFE AI, has a series of lessons about digital citizenship and AI arranged by grade level and a resource for parents as well. The website Children and Screens provides research-based articles, podcasts and other resources to help parents navigate age-appropriate technology use. Children’s museums are developing hands-on, screen-free experiences to help demystify the processes underlying AI. More efforts like these are needed to support children’s understanding of the fundamentals of AI, not just how to use its applications.

    AI’s purpose is not to replace human life, but to enhance it. Yet, the current conversation — especially around children’s use of AI — is too passive, treating these complex systems as inevitable rather than intentional creations. Educators, industry leaders and policymakers need to insist on a richer, more engaging dialogue about how it shapes kids’ learning, choices and experiences. 

    Whether it’s the weather report from a smart device or personalized help from a chatbot, AI literacy is now essential for young people to navigate civic life. No matter your viewpoint, it is time to embrace AI literacy. The stakes are too high for anything less than universal, active participation in preparing children for the world they’re inheriting and will soon lead.


    Did you use this article in your work?

    We’d love to hear how The 74’s reporting is helping educators, researchers, and policymakers. Tell us how

    Source link

  • Why This Teddy Bear Was Canceled – The 74

    According to a report, an AI-powered teddy bear from FoloToys, called Kumma, delivered “potentially dangerous information” and discussed “mature topics.”



    Source link

  • Can robot therapists really help you sort out your problems?

    Young people are turning to AI-based chatbots for immediate and inconspicuous support for mental health challenges. But the use of AI for therapy is fraught with ethical and regulatory conundrums.

    Dr. Fan Yang, assistant professor in the School of Social Work at the University of Illinois Urbana-Champaign, said that AI can be very efficient. “It’s available anytime,” Yang said. “It can let people talk with the machine without thinking about stigma.”

    In one ongoing project, the Addictions and Concurrent Disorders Research Group at the University of British Columbia surveyed 423 university-aged recent ChatGPT users. Rishika Daswani, clinical research assistant for the study, said that they found that just over half had used the app for mental health support.

    “When comparing it to traditional support, a lot of our respondents said that it was actually similar, and a small but significant portion of people noted its superiority,” Daswani said.

    Given financial barriers and long waitlists for in-person care, AI-based mental health apps and chatbots are well positioned to provide interim support during this gap, said Dr. Bryanna Moore, assistant professor in the Department of Health Humanities and Bioethics at the University of Rochester.

    There’s an app for that.

    Still, a study led by Yang and colleagues in the Journal of Medical Internet Research mHealth and uHealth showed that high-quality apps still carried financial barriers to access through subscription or one-time fees.

    “In the future, we need to be careful about the word ‘availability’,” Yang said. “We can distinguish technological availability versus financial availability.”

    Daswani said the most common drawback of AI use for therapy identified by participants in the group’s study was that AI lacked emotional tone and depth. While a therapist might challenge a person’s thoughts and help them reflect critically, chatbots tend to regurgitate information and act as echo chambers that reinforce pre-existing beliefs, Daswani said.

    Moore described AI therapy as sycophantic. “They are designed to draw you in to keep you clicking and engaged for as long as possible,” Moore said. “The responses they give are meant to make you feel good or seen or validated.”

    Loneliness and social isolation are among the root causes of many mental health issues for which young people use chatbots for support.

    “I don’t think it’s a leap to say that for some people connecting with a therapy bot or an online persona, [it could] promote the development of coping skills, but for others, it could really erode that,” Moore said.

    When children turn to AI therapists

    While most of the discussion around AI use for therapy has been centered around adults, Moore said specific considerations need to be taken into account for young children and adolescents.

    “Children are developmentally, morally, socially and legally distinct from adults,” Moore said. “The use of AI-based apps for mental health care by children and adolescents might impact their social and cognitive development in ways that it doesn’t for adults.”

    Childhood and adolescence are pivotal times for cementing how someone understands what it means to have friendships or relationships and learns to pick up on social and emotional cues. Chatbots often fail to fully understand a child within the context of this environment, Moore said.

    “Especially when it comes to things like mental health care, the environmental stressors on the child are central to understanding how their symptoms are presenting and identifying effective avenues of intervention,” Moore said.

    Therapeutic interventions usually involve shared decision-making with the child, caregiver and clinician to fully explore the benefits, risks and alternatives of each option. However, mental health apps can short-circuit these essential conversations, Moore said.

    Putting trust in technology

    In their survey of 27 mental health apps, Yang and colleagues identified several user design concerns for a youth target audience.

    Many apps featured dark colors and attained low readability scores, with an average sixth-grade reading level for in-app content and ninth-grade reading level for app store descriptions. While all apps were based on text, Yang said including non-text formats would make the apps more youth-friendly, especially for non-English speakers.

    Daswani cautioned that while AI may seem to have lowered the barrier for access to mental health care, it may be slow to gain acceptance in communities with low institutional trust in technology and authority.

    “Western language has specific emotional frameworks which may not fully capture other cultures’ ways of expressing distress,” Daswani said. “If AI tools don’t recognize these culturally encoded expressions, then you have a risk of misunderstanding and your needs not being met.”

    Moore and other experts worry that reliance on AI for mental health support could perpetuate the pervasive notion that mental illness is something people deal with on their own.

    “If it’s as simple as downloading and jumping on an app once a day or once a week, there’s this idea that the barriers to having good mental health are gone,” Moore said.

    The value of human interaction

    The reliance could normalize turning to technology as the best, easiest and most appropriate avenue for support when someone is struggling. “I don’t think there’s anything inherently good or bad about the technology,” Moore said. “My big worry is, will it become a substitute for also seeking out meaningful human interactions and developing those skills and coping mechanisms?”

    If these chatbots are truly treatments, they must be subject to the same regulations as other treatments, said Moore. For now, however, there is a lack of regulations and clear guidelines about who is responsible for assuming the risks involved in using AI for therapy.

    “It’s just such an unregulated space, and I think placing the responsibility on children, adolescents, parents and caregivers, and even individual clinicians to navigate this quagmire is really unfair,” Moore said.

    In the study by Yang and colleagues, many of the apps lacked detailed privacy policies, aside from the baseline information provided on the app store. How the apps handle personal data and information about traumatic experiences was not explicitly stated.

    It is also currently unclear how best to integrate these apps into clinical practice. Moore said a logical starting point is for clinicians to ask patients about their digital intake and understand how much time they are spending on these apps.

    Daswani said that integrating AI literacy into mental health education can help people understand the benefits and limitations of these apps. “We’re not saying that it’s to replace a therapist,” Daswani said. “But that doesn’t mean that we want to discredit it completely.”

    What’s needed now, Yang said, is to improve the quality of the apps. “So hopefully one day we can have human-centered treatment plans for people, with AI being some supplemental treatment support,” Yang said.


    Questions to consider:

    1. What is an advantage of a therapy app?

    2. What are some concerns health professionals have about children relying on AI therapy?

    3. Why might you feel more comfortable talking to a digital tool than a human?

    Source link

  • The debate over AI in education is stuck. Let’s move it forward in responsible ways that truly serve students

    by Maddy Sims, The Hechinger Report
    January 29, 2026

    Artificial intelligence is already reshaping how we work, communicate and create. In education, however, the conversation is stuck.

    Sensational headlines make it seem like AI will either save public education (“AI will magically give teachers back hours in their day!”) or destroy it completely (“Students only use AI to cheat!” “AI will replace teachers!”).

    These dueling narratives dominate public debate as state and district leaders scramble to write policies, field vendor pitches and decide whether to ban or embrace tools that often feel disconnected from what teachers and students actually experience in classrooms.

    What gets lost is the fundamental question of what learning should look like in a world in which AI is everywhere. And that is why, last year, rather than debate whether AI belongs in schools, approximately 40 policymakers and sector leaders took stock of the roadblocks in an education system designed for a different era and wrestled with what it would take to move forward responsibly.

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.

    The group included educators, researchers, funders, parent advocates and technology experts and was convened by the Center on Reinventing Public Education. What emerged from the three-day forum was a clearer picture of where the field is stuck and a shared recognition of how common assumptions are holding leaders back and of what a more coherent, human-centered approach to AI could look like.

    We agreed that there are several persistent myths derailing conversations about AI in education, and came up with shifts for combating them.

    Myth #1: AI’s biggest value is saving time for teachers

    Teachers are overburdened, and many AI tools promise relief through faster lesson planning, automated grading or instant feedback. These uses matter, but forum participants were clear that efficiency alone will not transform education.

    Focusing too narrowly on time savings risks locking schools more tightly into systems that were never designed to prepare students for the world they are graduating into.

    The deeper issue isn’t how to use AI to save time. It’s how to create a shared vision for what high-quality, future-ready learning should actually look like. Without that clarity, even the best tools quietly reinforce the same factory-model structures educators are already struggling against.

    The shift: Stop asking what AI can automate. Start asking what kinds of learning experiences students deserve, and how AI might help make those possible.

    Myth #2: The main challenge is getting the right AI tools into classrooms

    The education technology market is already crowded, and AI has only added to the noise. Teachers are often left stitching together core curricula, supplemental programs, tutoring services and now AI tools with little guidance.

    Forum participants pushed back on the idea that better tools alone will solve this problem. The real challenge, they argued, is to align how learning is designed and experienced in schools — and the policies meant to support that work — with the skills students need to thrive in an AI-shaped world. An app is not a learning model. A collection of tools does not add up to a strategy.

    Yet this is not only a supply-side problem. Educators, policymakers and funders have struggled to clearly articulate what they need amid a rapidly advancing technology environment.

    The shift: Define coherent learning models first. Evaluate AI tools based on whether they reinforce shared goals and integrate with one another to support consistent teaching and learning practices, not whether they are novel or efficient on their own.

    Myth #3: Leaders must choose between fixing today’s schools and inventing new models

    One of the tensions dominating the discussions was whether scarce state, local and philanthropic resources should be used to improve existing schools or to build entirely new models of learning.

    Some participants worried that using AI to personalize lessons or improve tutoring simply props up systems that no longer work. Others emphasized the moral urgency of improving conditions for students in classrooms right now.

    Rather than resolving this debate, participants rejected the false choice. They argued for an “ambidextrous” approach: improving teaching and learning in the present while intentionally laying the groundwork for fundamentally different models in the future.

    The shift: Leaders must ensure they do not lose sight of today’s students or of tomorrow’s possibilities. Wherever possible, near-term pilot programs should help build knowledge about broader redesign.

    Myth #4: AI strategy is mainly a technical or regulatory challenge

    Many states and districts have focused AI efforts on acceptable-use policies. Creating guardrails certainly matters, but when compliance eclipses learning and redesign, it creates a chilling effect, and educators don’t feel safe to experiment.

    The shift: Policy should build in flexibility for learning and iteration in service of new models, not just act as a brake pedal to combat bad behavior.

    Myth #5: AI threatens the human core of education

    Perhaps the most powerful reframing the group came up with: The real risk isn’t that AI will replace human relationships in schools. It’s that education will fail to define and protect what is most human.

    Participants consistently emphasized belonging, purpose, creativity, critical thinking and connection as essential outcomes in an AI-shaped world.

    But they will be fostered only if human-centered design is intentional, not assumed.

    The shift: If AI use doesn’t support students’ connections between their learning, their lives and their futures, it won’t be transformative, no matter how advanced the technology.

    The group’s participants did not produce a single blueprint for the future of education, but they came away with a shared recognition that efficiency won’t be enough, tools alone won’t save us and fear won’t guide the field.

    Related: In a year that shook the foundations of education research, these 10 stories resonated in 2025

    The question is no longer whether AI will shape education. It is whether educators, communities and policymakers will look past the headlines and seize this moment to shape AI’s role in ways that truly serve students now and in the future.

    Maddy Sims is a senior fellow at the Center on Reinventing Public Education (CRPE), where she leads projects focused on studying and strengthening innovation in education.

    Contact the opinion editor at [email protected].

    This story about AI in education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

    This <a target="_blank" href="https://hechingerreport.org/opinion-ai-education-responsible-ways-serve-students/">article</a> first appeared on <a target="_blank" href="https://hechingerreport.org">The Hechinger Report</a> and is republished here under a <a target="_blank" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.


    Source link

  • What Education Leaders Can Learn from the AI Gold Rush – The 74

    Every week, my 7-year-old brings home worksheets with math problems and writing assignments. But what captivates me is what he creates on the back once the assigned work is done: power-ups for imaginary games, superheroes with elaborate backstories, landscapes that evolve weekly. He exists in a beautiful state of discovery and joy, in the chrysalis before transformation.

    My son shows me it’s possible to discover something remarkable when we expand what we consider possible. Yet in education, a system in which 73% of the public express dissatisfaction and just 35% are satisfied with K-12 quality, we hit walls repeatedly.

    This inertia contributes to our current moment: steep declines in reading and math proficiency since 2019, one in eight teaching positions unfilled or filled by uncertified teachers, and growing numbers abandoning public education.

    Contrast this with artificial intelligence’s current trajectory.

    AI faces massive uncertainty. Nobody knows where it leads or which approaches will prove most valuable. Ethical questions around bias, privacy and accountability remain unresolved.

    Yet despite uncertainty — or because of it — nearly every industry is doubling down. Four major tech firms planned $315 billion in AI spending for 2025 alone. AI adoption surged from 55% to 78% of organizations in one year, with 86% of employers expecting AI to transform their businesses by 2030.

    This is a gold rush. Entire ecosystems are seeing transformational potential and refusing to be left behind. Organizations invest not despite uncertainty, but because standing still carries greater risk.

    There’s much we can learn from the AI-fueled momentum.

    To be clear, this isn’t an argument about AI’s merits. This is a conversation about what becomes possible when people come together around shared aspirations to restore hope, agency and possibility to education. AI’s approach reveals five guiding principles that education leaders should follow:

    1. Set a Bold Vision: AI leaders speak in radical terms. Education needs such bold aspirations, not five percent improvements. Talk about 100% access, 100% thriving, 100% success. Young people are leading by demanding approaches that honor their agency, desire for belonging, and broad aspirations. We need to follow their lead.

    2. Play the Long Game: Companies make massive investments for transformation they may not see for years. Education must embrace the same long-term thinking: investing in teacher development programs that mature over years, reimagining curricula for students’ distant futures, building systems that support sustainable excellence over immediate political wins.

    3. Don’t Fear Mistakes: AI adoption is rife with failure and course corrections. Despite rapid belief and investment, over 80% of AI projects fail. Yet companies continue experimenting, learning, adjusting and trying again because they understand that innovation requires iteration. Education must take bold swings, have honest debriefs when things fall flat, adjust and move forward.

    4. Democratize Access: AI reached 1.7 to 1.8 billion users globally in 2025. While quality varies and significant disparities exist, fundamental access has been opened up in ways that seemed impossible just years ago. When it comes to transformative change in education, every child deserves high-quality teachers, engaging curriculum and flourishing environments.

    5. Own the Story, and Pass the Mic: Every day, AI gains new ambassadors among everyday people, inspiring others to jump in. The most powerful education stories come from young people discovering breakthroughs during light bulb moments, from parents seeing children thrive, from teachers witnessing walls coming down and possibilities surpassing imagination. We need to pass the mic, creating platforms for students to share what meaningful learning looks like, which will unlock aspirational stories that shift the system.

    None of this is possible without student engagement. When students have voice and agency, believe in learning’s relevance and feel supported, transformative outcomes follow. As CEO of Our Turn, I was privileged to be part of efforts that inspired leaders and institutions across the country to invest in student engagement as a core strategy. We’re now seeing progress: all eight measures of school engagement tracked by Gallup reached their highest levels in 2025. This is an opportunity to build positive momentum; research consistently demonstrates engagement relates to academic achievement, post-secondary readiness, critical thinking, persistence and enhanced mental health.

    Student engagement is the foundation from which all other educational outcomes flow. When we center student voice, we go from improving schools to galvanizing the next generation of engaged citizens and leaders our democracy desperately needs.

    High-quality teachers are also essential. Over 365,000 positions are filled by uncertified teachers, with 45,500 unfilled. Teachers earn 26.4% less than similarly educated professionals. About 90% of vacancies result from teachers leaving due to low salaries, difficult conditions or inadequate support.

    Programs like Philadelphia’s City Teaching Alliance prove what’s possible: over 90% of new teachers returned after 2023-24, versus just under 80% citywide. We must create conditions where teaching is sustainable and honored through higher salaries, better working conditions, meaningful professional development and cultures that value educators as professionals.

    Investing in teacher quality is fundamental to workforce development, economic competitiveness and ensuring every child has access to excellent instruction. When we frame this as both a moral imperative and an economic necessity, we create the coalition necessary for lasting change.

    Finally, transformation must focus on skill development. The workforce young people are entering demands more than technical knowledge; it requires integrated capabilities for navigating complexity, building authentic relationships and creating meaningful change.

    At Harmonious Leadership, we’ve worked with foundations and organizations to develop leadership skills that result in greater innovation and impact. Our goals: young people more engaged in school and communities, and companies reporting greater levels of innovation, impact and financial sustainability.

    The appeal here is undeniable. Workforce development consistently ranks among the top priorities across political divides. Given the rapid rate of change in our culture and economy, we need to develop skills for careers that don’t yet exist, for challenges we can’t yet imagine, for a world that demands creativity, adaptability and resilience.

    The AI gold rush shows what’s possible when we set bold visions, invest for the long term, embrace learning from failure, democratize access and amplify voices closest to transformation.

    Our children, like my son drawing superheroes on worksheet backs, are in chrysalis moments. The choice is ours: remain paralyzed by complexity or channel the same urgency, investment and unity of purpose driving the AI revolution. We know what works: student engagement, quality teachers and future-ready skills. The question isn’t whether we have solutions. It’s whether we have courage to pursue them.


    Source link

  • Widely used but barely trusted: understanding student perceptions on the use of generative AI in higher education

    by Carmen Cabrera and Ruth Neville

    Generative artificial intelligence (GAI) tools are rapidly transforming how university students learn, create and engage with knowledge. Powered by techniques such as neural network algorithms, these tools generate new content, including text, tables, computer code, images, audio and video, by learning patterns from existing data. The outputs are usually characterised by their close resemblance to human-generated content. While GAI shows great promise to improve the learning experience in various disciplines, its growing uptake also raises concerns about misuse, over-reliance and, more generally, its impact on the learning process. In response, multiple UK higher education (HE) institutions have issued guidance outlining acceptable use and warning against breaches of academic integrity. However, discussions about the role of GAI in the HE learning process have been led mostly by educators and institutions, and less attention has been given to how students perceive and use GAI.

    Our recent study, published in Perspectives: Policy and Practice in Higher Education, helps to address this gap by bringing student perspectives into the discussion. Drawing on a survey conducted in early 2024 with 132 undergraduate students from six UK universities, the study reveals a striking paradox. Students are using GAI tools widely, and expect their use to increase, yet fewer than 25% regard its outputs as reliable. High levels of use therefore coexist with low levels of trust.

    Using GAI without trusting it

    At first glance, the widespread use of GAI among students might be taken as a sign of growing confidence in these tools. Yet, when asked whether GAI can be considered a reliable source of knowledge, many students disagree. This apparent contradiction raises the question of why students are still using tools they do not fully trust. The answer lies in the convenience of GAI. Students are not necessarily using GAI because they believe it is accurate. They are using it because it is fast, accessible and can help them get started or work more efficiently. Our study suggests that perceived usefulness may be outweighing the students’ scepticism towards the reliability of outputs, as this scepticism does not seem to be slowing adoption. Nearly all student groups surveyed reported that they expect to continue using generative AI in the future, indicating that low levels of trust are unlikely to deter ongoing or increased use.

    Not all perceptions are equal

    While the “high use – low trust” paradox is evident across student groups, the study also reveals systematic differences in the adoption and perceptions of GAI by gender and by domicile status (UK vs. international students). Male and international students tend to report higher levels of both past and anticipated future use of GAI tools, and more permissive attitudes towards AI-assisted learning compared to female and UK-domiciled students. These differences should not necessarily be interpreted as evidence that some students are more ethical, critical or technologically literate than others. What we are likely seeing are responses to different pressures and contexts shaping how students engage with these tools. Particularly for international students, GAI can help navigate language barriers or unfamiliar academic conventions. In those circumstances, GAI may work as a form of academic support rather than a shortcut. Meanwhile, differences in attitudes by gender reflect wider patterns often observed in research on academic integrity and risk-taking, where female students often report greater concern about following rules and avoiding sanctions. These findings suggest that students’ engagement with GAI is influenced by their positionality within higher education, and not just by their individual attitudes.

    Different interpretations of institutional guidance

    Discrepancies by gender and domicile status go beyond patterns of use and trust, extending to how students interpret institutional guidance on generative AI. Most UK universities now publish policies outlining acceptable and unacceptable uses of GAI in relation to assessment and academic integrity, and typically present these rules as applying uniformly to all students. In practice, as evidenced by our study, students interpret these guidelines differently. UK-domiciled students, especially women, tend to adopt more cautious readings, sometimes treating permitted uses, such as using GAI for initial research or topic overviews, as potential misconduct. International students, by contrast, are more likely to express permissive or uncertain views, even in relation to practices that are more clearly prohibited. Shared rules do not guarantee shared understanding, especially if guidance is ambiguous or unevenly communicated. GAI is evolving faster than university policy, so addressing this unevenness in understanding is an urgent challenge for higher education.

    Where does the ‘problem’ lie?

    Students are navigating rapidly evolving technologies within assessment frameworks that were not designed with GAI in mind. At the same time, they are responding to institutional guidance that is frequently high-level, unevenly communicated and difficult to translate into everyday academic practice. Yet there is a tendency to treat GAI misuse as a problem stemming from individual student behaviour. Our findings point instead to structural and systemic issues shaping how students engage with these tools. From this perspective, variation in student behaviour could reflect the uneven inclusivity of current institutional guidelines. Even when policies are identical for all, the evidence indicates that they are not experienced in the same way across student groups, calling for a need to promote fairness and reduce differential risk at the institutional level.

    These findings also have clear implications for assessment and teaching. Since students are already using GAI widely, assessment design needs to avoid reactive attempts to exclude GAI. A more effective and equitable approach may involve acknowledging GAI use where appropriate, supporting students to engage with it critically and designing learning activities that continue to cultivate critical thinking, judgement and communication skills. In some cases, this may also mean emphasising in-person, discussion-based or applied forms of assessment where GAI offers limited advantage. Equally, digital literacy initiatives need to go beyond technical competence. Students require clearer and more concrete examples of what constitutes acceptable and unacceptable use of GAI in specific assessment contexts, as well as opportunities to discuss why these boundaries exist. Without this, institutions risk creating environments in which some students become too cautious in using GAI, while others cross lines they do not fully understand.

    More broadly, policymakers and institutional leaders should avoid assuming a single student response to GAI. As this study shows, engagement with these tools is shaped by gender, educational background, language and structural pressures. Treating the student body as homogeneous risks reinforcing existing inequalities rather than addressing them. Public debate about GAI in HE frequently swings between optimism and alarm. This research points to a more grounded reality where students are not blindly trusting AI, but their use of it is increasing, sometimes pragmatically, sometimes under pressure. As GAI systems continue evolving, understanding how students navigate these tools in practice is essential to developing policies, assessments and teaching approaches that are both effective and fair.

    You can find more information in our full research paper: https://www.tandfonline.com/doi/full/10.1080/13603108.2025.2595453

    Dr Carmen Cabrera is a Lecturer in Geographic Data Science at the Geographic Data Science Lab, within the University of Liverpool’s Department of Geography and Planning. Her areas of expertise are geographic data science, human mobility, network analysis and mathematical modelling. Carmen’s research focuses on developing quantitative frameworks to model and predict human mobility patterns across spatiotemporal scales and population groups, ranging from intraurban commutes to migratory movements. She is particularly interested in establishing methodologies to facilitate the efficient and reliable use of new forms of digital trace data in the study of human movement. Prior to her position as a Lecturer, Carmen completed a BSc and MSc in Physics and Applied Mathematics, specialising in Network Analysis. She then did a PhD at University College London (UCL), focussing on the development of mathematical models of social behaviours in urban areas, against the theoretical backdrop of agglomeration economies. After graduating from her PhD in 2021, she was a Research Fellow in Urban Mobility at the Centre for Advanced Spatial Analysis (CASA), at UCL, where she currently holds an honorary position.

    Dr Ruth Neville is a Research Fellow at the Centre for Advanced Spatial Analysis (CASA), UCL, working at the intersection of Spatial Data Science, Population Geography and Demography. Her PhD research considers the driving forces behind international student mobility into the UK, the susceptibility of student applications to external shocks, and forecasting future trends in applications using machine learning. Ruth has also worked on projects related to human mobility in Latin America during the COVID-19 pandemic, the relationship between internal displacement and climate change in the East and Horn of Africa, and displacement of Ukrainian refugees. She has a background in Political Science, Economics and Philosophy, with a particular interest in electoral behaviour.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

  • A Practical Guide – The 74

    A Practical Guide – The 74



    Source link

  • Transform Your Classroom with Google Workspace AI Tools

    Transform Your Classroom with Google Workspace AI Tools

    The 2025-2026 school year brought a wave of powerful AI-enhanced tools to Google Workspace for Education. These aren’t just shiny new features—they’re practical classroom tools designed to save you time, personalize learning, and unlock student creativity. Best of all? Most are free for educators and students. Now that 2026 is upon us, I am excited to share with you some of my favorite new features that can be used in your classroom with your students. If you are already using these, I’d love to hear from you and learn how you are exploring AI and Google Workspace in your classrooms.

    Let’s walk through the standout Google features you should try with your students this year.

    Google Gemini for Education: Your AI Teaching Assistant

    Google Gemini isn’t just another chatbot. It’s an AI assistant built directly into the Google apps you already use—Docs, Slides, Sheets, Gmail, and Classroom. No more copying and pasting between tabs.

    • Why it matters: Gemini 2.5 Pro incorporates LearnLM, making it the world’s leading model for learning. It’s purpose-built for education with enterprise-grade data protection. Your data isn’t reviewed or used to train AI models.
    • Try this on Monday: Ask Gemini to “Create a lesson plan on photosynthesis aligned to NGSS standards” or “Generate a 25-question multiple choice practice exam from this syllabus.”

    Key Features for K-12 Classrooms:

    Deep Research — Students can research complex topics and receive synthesized reports with sources and citations in minutes. Instead of spending hours searching, they get a comprehensive report they can then explore further.

    Gemini Canvas — Create quizzes, practice tests, study guides, and visual timelines in one interactive space. Go from blank slate to dynamic preview in minutes. Students can build interactive prototypes and code snippets without knowing how to code.

    Gemini Live — Students can talk through complex concepts, get real-time help, and even share their screen or camera for personalized feedback on problem sets.

    What Are Google Gems?

    Think of a Gem as a specialized AI assistant you create for a specific purpose. Instead of writing the same prompt over and over in Gemini, you build a Gem once with custom instructions, and it becomes your go-to expert for that task.

    The difference: Regular Gemini is a generalist. A Gem is a specialist.

    For example, instead of typing “Create a Jeopardy game about the water cycle for 5th grade” every time you need a review game, you create a “Jeopardy Game” Gem that already knows your grade level, subject area, and preferred format. Then you just give it the topic.

    Creating Custom Gems: Build Your Own AI Experts

    Once you’re comfortable with Gemini, Google Gems let you create custom AI assistants tailored to your classroom needs.

    How it works: Give Gemini instructions, examples, and resources so it behaves exactly how you need it to. Upload unit plans, pacing guides, rubrics, or anchor texts so your Gem can reference them when creating content.

    Teacher-facing Gems:

    • Lesson Plan Generator — Aligned to your specific standards and teaching style
    • Parent Communicator — Drafts emails that match your tone and school policies
    • Emergency Sub Plan — Creates ready-to-go activities when you’re out sick
    • Standards Unpacker — Breaks down complex standards into teachable chunks

    Student-facing Gems: Create a Gem and share it with your class through Google Classroom. Students interact with your custom AI expert independently.

    • AI Tutor — Provides step-by-step help without giving away answers
    • Writing Coach — Gives feedback on essays and helps students revise
    • Study Partner — Creates practice questions from their notes
    • Career Explorer — Helps students research potential career paths

    EduGems: Pre-Made Gems by Eric Curts

    Don’t want to build Gems from scratch? Eric Curts (Control Alt Achieve) created EduGems—a growing library of ready-to-use Gems organized by category.

    How to use EduGems:

    1. Visit edugems.ai
    2. Browse by category or search for what you need
    3. Click any Gem to see details
    4. Click “Use” to open it in Gemini, or “Copy” to customize it
    • 🧑‍🏫 AI Tutor — Guides students through problems with questions, not answers. Great for homework help and independent practice.
    • 🎭 Reader’s Theater — Converts stories or historical events into scripts students can perform. Brings content to life through drama.
    • ❓ Jeopardy Game — Creates Jeopardy-style review games on any topic. Perfect for test prep and engagement.
    • 🤔 Student Brainstorming — Helps students generate and organize ideas for projects and writing assignments.
    • 💼 Career Explorer — Students explore career paths, learn about required education, and discover related occupations.
    • 📋 Lesson Plan — Generates complete lesson plans with objectives, activities, and assessments.
    • 📦 Standards Unpacker — Takes complex standards and breaks them into clear learning targets.
    • 🚨 Emergency Sub Plan — Creates complete sub plans with activities, materials, and instructions.
    • 🔀 Re-level Text — Adjusts reading level of any text for differentiation.
    • 📊 Assessment Data Analyzer — Analyzes assessment results and suggests targeted interventions.

    EduGems Categories:

    • Curriculum & Lesson Design (13 Gems) — Lesson plans, unit plans, choice boards, station rotations
    • Student Activities (11 Gems) — Games, simulations, debates, interviews
    • Assessment (15 Gems) — Quizzes, rubrics, test prep, data analysis
    • Support (14 Gems) — Accommodations, scaffolds, behavior plans, social stories
    • Literacy & Language (6 Gems) — Decodable texts, discussion prompts, sentence starters
    • Professional Tasks (11 Gems) — Newsletters, recommendation letters, PD plans

    Pro tip: Start with EduGems to see how effective Gems work, then customize them for your specific needs. You can also submit your own Gems to be added to the collection.

    Learn more: Watch Eric Curts’ complete Gems tutorial video or explore his AI resources at controlaltachieve.com.

    NotebookLM: Your AI Research Assistant

    Teachers and students work with overwhelming amounts of information. NotebookLM becomes an instant expert on whatever documents you upload.

    What makes it special: It grounds all responses in the specific documents you provide—no hallucinations, no random internet sources.

    Features you’ll use:

    • Audio Overviews — Turn lecture recordings, textbook chapters, or research papers into podcast-style audio summaries. Students can study anywhere—on the bus, at practice, during their commute.
    • Document synthesis — Upload PDFs, articles, unit plans, and curriculum resources. Ask questions and get answers pulled directly from your materials. Create summaries, study guides, and student-friendly resources instantly.
    • Student independence — Help students understand complex texts without constant teacher intervention. They can ask clarifying questions and get explanations grounded in their assigned readings.

    Google Vids: Create Professional Video Content in Minutes

    Student attention spans are shrinking, and teachers need tools to deliver content that sticks. Google Vids is Google’s answer: an AI-powered video creation tool that lives right in your Google Workspace.

    What Makes Google Vids Different?

    Think Google Slides turned 90 degrees—instead of slides arranged vertically, you work with scenes arranged horizontally. If you can use Google Slides, you can use Google Vids. But here’s the game-changer: it’s powered by Gemini AI.

    The “Help me create” feature: Type what you want to create (“Make a 3-minute tutorial on the water cycle for 5th grade”), and Google Vids generates a complete first draft in under 60 seconds—script, visuals, timing, transitions, and all. You customize from there instead of starting from scratch.

    Key Features Teachers Love:

    • AI-Powered Creation — Describe your video in a sentence, and Gemini builds the first draft for you. Add your own screenshots, adjust the timing, choose AI voice or record your own.
    • Convert Slides to Videos — Already have a Google Slides presentation? Import it into Vids and transform it into an engaging video with music, transitions, and narration in minutes.
    • Stock Media Library — Access thousands of royalty-free videos, images, music tracks, sound effects, GIFs, and stickers without leaving the platform.
    • Professional Templates — Start with beautifully designed templates for tutorials, announcements, student projects, and more.
    • Real-Time Collaboration — Work together on video projects just like you would in Google Docs. Perfect for group projects or co-planning with colleagues.
    • Seamless Google Classroom Integration — Assign videos as templates so each student gets their own copy. Review student work directly in Classroom and see their progress in real-time.

    For Teachers: Scale Your Impact

    Create professional development videos, flipped classroom content, and instructional materials in 20-30 minutes instead of 2-3 hours.

    Practical use cases:

    • Tool tutorials — Record once, share forever. Every new teacher gets instant access to training.
    • Flipped lessons — Create micro-lectures students watch at home, freeing up class time for hands-on work.
    • Lab procedures — Record safety demos and complex procedures students can review anytime.
    • Personalized feedback — Send quick video messages instead of lengthy written comments.
    • Professional development — Build a library of PD resources teachers can access on-demand.

    For Students: Voice, Choice, and Creativity

    Google Vids gives students an accessible way to demonstrate understanding without needing advanced tech skills.

    Student projects:

    • Video essays — Students explain their thinking, cite sources, and present arguments visually.
    • Book reports — Create “movie trailers” for novels or informational texts.
    • Science demonstrations — Record experiments with narration explaining the process.
    • Digital portfolios — Showcase learning growth throughout the year.
    • Public service announcements — Combine research with persuasive communication skills.

    Scaffolding tip: Start simple. Have students brainstorm in Google Keep, create a 3-slide presentation in Slides, import those slides into Vids, replace slides with video B-roll, add music and transitions. This progression teaches cross-tool workflows while building video literacy skills.

    Getting Started is Simple

    Access Google Vids at vids.google.com or vids.new. No software to download, no complicated setup.

    Three ways to start:

    1. Record — Easiest for screencasts and quick tutorials on Chromebooks
    2. Use templates — Start with professional designs for various purposes
    3. “Help me create” — Describe what you want and let AI build the first draft

    Videos save automatically to Google Drive. Share through Classroom, Drive links, or export as MP4 files.

    Why It Matters for K-12

    Google Vids democratizes video creation. Students and teachers without technical expertise or expensive software can now create professional-looking content. This levels the playing field and opens doors for creativity that were previously closed.

    Want the complete guide? Check out these in-depth resources:

    Getting Started: Your Action Plan

    This week:

    1. Visit gemini.google.com with your school Google account
    2. Ask it to create one lesson plan or assessment
    3. Try Deep Research on a topic you’re teaching next week

    This month:

    1. Create your first custom Gem for a unit you teach frequently
    2. Have students upload their notes to NotebookLM and create an Audio Overview
    3. Record one instructional video in Google Vids

    This semester:

    1. Share the college student offer with your seniors
    2. Build a library of custom Gems for different units
    3. Let students create their own Gems as study partners
    4. Assign a Google Vids project—have students create a 2-minute video explaining a concept, book report trailer, or science demonstration

    One Important Reminder

    With all these powerful AI tools at our fingertips, don’t forget that the most meaningful learning still happens through conversation, hands-on exploration, and human connection. Technology should enhance—not replace—the relationships and dialogue that make your classroom special.

    Use these tools to reclaim your time and energy so you can focus on what matters most: your students.


    Want to Learn More?

    Take a free course: Getting Started with Google AI from Google for Education

    Explore use cases: 100+ ways to use Gemini in education

    Deep dive: Teaching Channel’s course 5381: Teaching with Google’s AI Tools covers Gemini, NotebookLM, Google Vids, and image creation


    Ready to try one of these features? Pick just one from this list and test it this week. Reply and let me know which one you chose and how it went.

    • Jeff Bradbury, your digital learning coach 🎸

    Don’t Miss the Next EdTech Breakthrough

    Google isn’t done innovating, and neither are dozens of other EdTech companies building tools specifically for K-12 educators. New features drop every month—some game-changers, some duds.

    I test them all so you don’t have to.

    Join 20,000+ educators who get my weekly newsletter with:

    ✅ Early access to tutorials on new classroom tech

    ✅ Honest reviews (I’ll tell you when something isn’t worth your time)

    ✅ Ready-to-steal lesson ideas and project templates

    ✅ Time-saving workflows that actually work in real classrooms

    No fluff. No vendor pitches. Just practical strategies from a teacher who’s actually using these tools with students.

    Subscribe to the TeacherCast Newsletter →

    Upgrade Your Teaching Toolkit Today

    Get weekly EdTech tips, tool tutorials, and podcast highlights delivered to your inbox. Plus, receive a free chapter from my book Impact Standards when you join.



    Source link

  • The Top 20 Education Next Articles of 2025

    The Top 20 Education Next Articles of 2025

    In a journal devoted to U.S. education reform, some recurring themes in its content are expected: student achievement, curriculum, teacher effectiveness, school choice, testing, accountability. Other topics are more contemporaneous, reflecting the functional reality of American schooling in its present context. The latter group may capture just a moment in time and give future education historians a glimpse at what mattered to early 21st century reformers (and seem quaint in hindsight). It may also reflect prescient insights from leaders, thinkers, and scholars—contributions that document the early stages of a significant transformation in education policy and practice (and later be deemed ahead of their time).

    What we can say confidently is that Education Next published a good mix of the classic and the contemporary in 2025, just as it has each year in its quarter century of existence. You can see for yourself below in our annual Top 20 list of most-read articles, which features an assortment of writings by researchers, journalists, academics, and teachers.

    Among the traditional fare, readers turned to EdNext to keep apprised of developments in classroom instruction, from reading to literacy to history. They wanted to know if the U.S. might be better off evaluating schools using the European model of inspections rather than, or in addition to, student test scores. Amid ongoing debates about the merits of using standardized tests to gauge student preparation, readers were drawn to the findings of researchers in Missouri that 8th graders’ performance on the state’s MAP test is highly predictive of college readiness. In the realm of teachers and teaching, proponents of merit pay received a boost from an analysis of Dallas ISD’s ACE program, which was shown to improve both student performance and teacher retention in the district.

    As for school choice, Education Next followed successes like the expansion of education savings account programs, the proliferation of microschools, and the federal scholarship tax credit passed by Congress as part of the One Big Beautiful Bill Act. But the stumbles of choice had more of a gravitational pull for readers. There were the defeats of private-school voucher measures in three states—continuing a long string of choice failures at the ballot box. There are the enrollment struggles of Catholic schools, which researchers found are impacted by competition from tuition-free charter schools. And just when Catholic and other private religious schools could have gotten a shot in the arm by being allowed to reformulate as religious charters, the Supreme Court deadlocked on the constitutionality of the question, leaving the matter to be relitigated for another day.

    There was no shortage of timely topics that exploded onto the scene and captivated readers. American education is still grappling with the fallout from the Covid-era school shutdowns, now five years in the rearview. Many harbor consternation about the politics of pandemic closures, as demonstrated by the enthusiasm over a new book that autopsied the decisions of that era and the subsequent book review that catapulted onto this year’s list (an unusual feat!). And now there’s research to corroborate the disaster closures were for public education. Two Boston University scholars find evidence of diminishing enrollment in public middle schools, an indication that families whose children were in the early grades in 2020 are departing for the more rigorous shores of private choice. But the post-pandemic problems in schooling have not been uniform. In one of the most-read articles this year, founding EdNext editor Paul Peterson and Michael Hartney show how, based on recent NAEP results, learning loss was greater among students in blue states that had more prolonged school shutdowns than in red states that reopened more quickly.

    Meanwhile, everyone in education circles continues to grapple with what to do about technology in the classroom. Two writers did so in our own pages, presenting opposite perspectives on Sal Khan’s prediction that AI will soon transform education with the equivalent of a personalized tutor for each student. And one of our favorite cognitive scientists gave readers a different way of thinking about how digital devices affect student attention.

    It is perhaps fitting that our most-read article of 2025 was also the cover story of the last print issue of Education Next. (You can read more about our transition to a web-only publication here.) After Donald Trump reassumed the presidency this year and his administration enacted major reductions to the federal bureaucracy, several education-focused programs (and indeed the entire U.S. Department of Education) came under intense scrutiny. One target was Head Start, in part because Project 2025 called for eliminating the program on the grounds it is “fraught with scandal and abuse” and has “little or no long-term academic value for children.” Paul von Hippel, Elise Chor, and Leib Lurie tested those claims against the research and found little basis for them. Yet they also highlight lingering questions about the program’s impact on students’ long-term success—and opportunities to answer them with new research. As of this writing, the nation’s largest early-education program survives, but the sector is still watching and waiting.

    And so are we all for what will happen next in education. Some issues captured by Education Next this year will continue into 2026. Some will flame out. And others that are unforeseen will arise. Readers can depend on Education Next to lean into all the twists and turns that come in the year ahead.

    The full top 20 list is here:

    Source link

  • Texas Universities Deploy AI Tools to Review How Courses Discuss Race and Gender – The 74

    Texas Universities Deploy AI Tools to Review How Courses Discuss Race and Gender – The 74


    Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter

    A senior Texas A&M University System official testing a new artificial intelligence tool this fall asked it to find how many courses discuss feminism at one of its regional universities. Each time she asked in a slightly different way, she got a different number.

    “Either the tool is learning from my previous queries,” Texas A&M system’s chief strategy officer Korry Castillo told colleagues in an email, “or we need to fine tune our requests to get the best results.”

    It was Sept. 25, and Castillo was trying to deliver on a promise Chancellor Glenn Hegar and the Board of Regents had already made: to audit courses across all of the system’s 12 universities after conservative outrage over a gender-identity lesson at the flagship campus intensified earlier that month, leading to the professor’s firing and the university president’s resignation.

    Texas A&M officials said the controversy stemmed from the course’s content not aligning with its description in the university’s course catalog and framed the audit as a way to ensure students knew what they were signing up for. As other public universities came under similar scrutiny and began preparing to comply with a new state law that gives governor-appointed regents more authority over curricula, they, too, announced audits.

    Records obtained by The Texas Tribune offer a first look at how Texas universities are experimenting with AI to conduct those reviews. 

    At Texas A&M, internal emails show staff are using AI software to search syllabi and course descriptions for words that could raise concerns under new system policies restricting how faculty teach about race and gender. 

    At Texas State, memos show administrators are suggesting faculty use an AI writing assistant to revise course descriptions. They urged professors to drop words such as “challenging,” “dismantling” and “decolonizing” and to rename courses with titles like “Combating Racism in Healthcare” to something university officials consider more neutral like “Race and Public Health in America.”

    Read Texas State University’s guide to faculty on how to review their curriculum with AI

    While school officials describe the efforts as an innovative approach that fosters transparency and accountability, AI experts say these systems do not actually analyze or understand course content, instead generating answers that sound right based on patterns in their training data.

    That means small changes in how a question is phrased can lead to different results, they said, making the systems unreliable for deciding whether a class matches its official description. They warned that using AI this way could lead to courses being flagged over isolated words and further shift control of teaching away from faculty and toward administrators.

    “I’m not convinced this is about serving students or cleaning up syllabi,” said Chris Gilliard, co-director of the Critical Internet Studies Institute. “This looks like a project to control education and remove it from professors and put it into the hands of administrators and legislatures.”

    Setting up the tool

    During a board of regents meeting last month, Texas A&M System leaders described the new processes they were developing to audit courses as a repeatable enforcement mechanism. 

    Vice Chancellor for Academic Affairs James Hallmark said the system would use “AI-assisted tools” to examine course data under “consistent, evidence-based criteria,” which would guide future board action on courses. Regent Sam Torn praised it as “real governance,” saying Texas A&M was “stepping up first, setting the model that others will follow.” 

    That same day, the board approved new rules requiring presidents to sign off on any course that could be seen as advocating for “race and gender ideology” and prohibiting professors from teaching material not on the approved syllabus for a course.

    In a statement to the Tribune, Chris Bryan, the system’s vice chancellor for marketing and communications, said Texas A&M is using OpenAI services through an existing subscription to aid the system’s course audit and that the tool is still being tested as universities finish sharing their course data. He said “any decisions about appropriateness, alignment with degree programs, or student outcomes will be made by people, not software.”

    In records obtained by the Tribune, Castillo, the system’s chief strategy officer, told colleagues to prepare for about 20 system employees to use the tool to make hundreds of queries each semester. 

    The records also show some of the concerns that arose from early tests of the tool.  

    When Castillo told colleagues about the varying results she obtained when searching for classes that discuss feminism, deputy chief information officer Mark Schultz cautioned that the tool came with “an inherent risk of inaccuracy.”

    “Some of that can be mitigated with training,” he said, “but it probably can’t be fully eliminated.”

    Schultz did not specify what kinds of inaccuracies he meant. When asked if the potential inaccuracies had been resolved, Bryan said, “We are testing baseline conversations with the AI tool to validate the accuracy, relevance and repeatability of the prompts.” He said this includes seeing how the tool responds to invalid or misleading prompts and having humans review the results.

    Experts said the different answers Castillo received when she rephrased her question reflect how these systems operate: the tools generate responses by predicting which words are likely to come next, not by analyzing the underlying content.

    “These systems are fundamentally systems for repeatedly answering the question ‘what is the likely next word’ and that’s it,” said Emily Bender, a computational linguist at the University of Washington. “The sequence of words that comes out looks like the kind of thing you would expect in that context, but it is not based on reason or understanding or looking at information.”

    Because of that, small changes to how a question is phrased can produce different results. Experts also said users can nudge the model toward the answer they want. Gilliard said that is because these systems are also prone to what developers call “sycophancy,” meaning they try to agree with or please the user. 

    “Very often, a thing that happens when people use this technology is if you chide or correct the machine, it will say, ‘Oh, I’m sorry’ or like ‘you’re right,’ so you can often goad these systems into getting the answer you desire,” he said.

    T. Philip Nichols, a Baylor University professor who studies how technology influences teaching and learning in schools, said keyword searches also provide little insight into how a topic is actually taught. He called the tool “a blunt instrument,” incapable of recognizing how discussions the software might flag as unrelated to a course actually tie into its broader themes.

    “Those pedagogical choices of an instructor might not be present in a syllabus, so to just feed that into a chatbot and say, ‘Is this topic mentioned?’ tells you nothing about how it’s talked about or in what way,” Nichols said. 

    Castillo’s description of her experience testing the AI tool was the only time in the records reviewed by the Tribune when Texas A&M administrators discussed specific search terms being used to inspect course content. In another email, Castillo said she would share search terms with staff in person or by phone rather than email. 

    System officials did not provide the list of search terms the system plans to use in the audit.

    Martin Peterson, a Texas A&M philosophy professor who studies the ethics of technology, said faculty have not been asked to weigh in on the tool, including members of the university’s AI council. He noted that the council’s ethics and governance committee is charged with helping set standards for responsible AI use.

    While Peterson generally opposes the push to audit the university system’s courses, he said he is “a little more open to the idea that some such tool could perhaps be used.”

    “It is just that we have to do our homework before we start using the tool,” Peterson said.

    AI-assisted revisions

    At Texas State University, officials ordered faculty to rewrite their syllabi and suggested they use AI to do it.

    In October, administrators flagged 280 courses for review and told faculty to revise titles, descriptions and learning outcomes to remove wording the university said was not neutral. Records indicate that dozens of courses set to be offered by the College of Liberal Arts in the Spring 2026 semester were singled out for neutrality concerns. They included courses such as Intro to Diversity, Social Inequality, Freedom in America, Southwest in Film and Chinese-English Translation.

    Faculty were given until Dec. 10 to complete the rewrites, with a second-level review scheduled in January and the entire catalog to be evaluated by June. 

    Administrators shared with faculty a guide outlining wording they said signaled advocacy. It discouraged learning outcomes that “measure or require belief, attitude or activism (e.g., value diversity, embrace activism, commit to change).”

    Administrators also provided a prompt for faculty to paste into an AI writing assistant alongside their materials. The prompt instructs the chatbot to “identify any language that signals advocacy, prescriptive conclusions, affective outcomes or ideological commitments” and generate three alternative versions that remove those elements. 

    Jayme Blaschke, assistant director of media relations at Texas State, described the internal review as “thorough” and “deliberative,” but would not say whether any classes have already been revised or removed, only that “measures are in place to guide students through any adjustments and keep their academic progress on track.” He also declined to explain how courses were initially flagged and who wrote the neutrality expectations.

    Faculty say the changes have reshaped how curriculum decisions are made on campus.

    Aimee Villarreal, an assistant professor of anthropology and president of Texas State’s American Association of University Professors chapter, said the process is usually faculty-driven and unfolds over a longer period of time. She believes the structure of this audit allows administrators to more closely monitor how faculty describe their disciplines and steer how that material must be presented.

    She said the requirement to revise courses quickly or risk having them removed from the spring schedule has created pressure to comply, which may have pushed some faculty toward using the AI writing assistant.

    Villarreal said the process reflects a lack of trust in faculty and their field expertise when deciding what to teach.

    “I love what I do,” Villarreal said, “and it’s very sad to see the core of what I do being undermined in this way.”

    Nichols warned the trend of using AI in this way represents a larger threat. 

    “This is a kind of de-professionalizing of what we do in classrooms, where we’re narrowing the horizon of what’s possible,” he said. “And I think once we give that up, that’s like giving up the whole game. That’s the whole purpose of why universities exist.”

    The Texas Tribune partners with Open Campus on higher education coverage.

    Disclosure: Baylor University, Texas A&M University and Texas A&M University System have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here.

    This article first appeared on The Texas Tribune.
