Category: AI in Education

  • Most students, educators use AI–but opinions differ on ethical use

    As generative AI continues to gain momentum in education each year, both its adoption and the attitudes toward its use have steadily grown more positive, according to a new report from Quizlet.

    The How America Learns report explores U.S. student, teacher, and parent perspectives on AI implementation, digital learning and engagement, and success beyond the classroom.

    “At Quizlet, we’ve spent nearly two decades putting students at the center of everything we do,” said Quizlet CEO Kurt Beidler. “We fielded this research to better understand the evolving study habits of today’s students and ensure we’re building tools that not only help our tens of millions of monthly learners succeed, but also reflect what they truly need from their learning experience.”

    AI becomes ubiquitous in education
    As generative AI solutions gain traction in education year over year, adoption has increased and attitudes toward the technology have improved. Quizlet’s survey found that 85 percent of respondents–including high school and college teachers, as well as students aged 14-22–said they used AI technology, a significant increase from 66 percent in 2024. Teachers now outpace students in AI adoption (87 percent vs. 84 percent), a reversal of 2024 findings, when students slightly outpaced teachers.

    Among the 89 percent of all students who say they use AI technology for school (up from 77 percent in 2024), the top three use cases are summarizing or synthesizing information (56 percent), research (46 percent), and generating study guides or materials (45 percent). The top uses of AI technology among teachers remained the same but saw significant year-over-year growth: research (54 percent vs. 33 percent), summarizing or synthesizing information (48 percent vs. 30 percent), and generating classroom materials like tests and assignments (45 percent vs. 31 percent).

    While the emergence of AI has presented new challenges related to academic integrity, 40 percent of respondents believe that AI is used ethically and effectively in the classroom. However, students are significantly less likely to feel this way (29 percent) compared to parents (46 percent) and teachers (57 percent), signaling a continued need for education and guidelines on responsible use of AI technology for learning.

    “Like any new technology, AI brings incredible opportunities, but also a responsibility to use it thoughtfully,” said Maureen Lamb, AI Task Force Chair and Language Department Chair at Miss Porter’s School. “As adoption in education grows, we need clear guidelines that help mitigate risk and unlock the full potential of AI.  Everyone–students, educators, and parents–has a role to play in understanding not just how to use AI, but when and why it should be used.”

    Digital learning demands growth while equity gap persists
    Just as AI is becoming a staple in education, survey results found that digital learning is growing in popularity: 64 percent of respondents said digital learning methods should play an equal or greater role than traditional education methods, with teachers especially likely to agree (71 percent).

    Respondents indicated that flexibility (56 percent), personalized learning (53 percent), and accessibility (49 percent) were the most beneficial aspects of digital learning. And with 77 percent of students making sacrifices, including loss of sleep, personal time, and missed extracurriculars due to homework, digital learning offers a promising path toward a more accommodating approach. 

    While the majority of respondents agreed on the importance and benefits of digital learning, results also pointed to a disparity in access to these tools. Nearly half (49 percent) of respondents agreed that all students in their community have equal access to learning materials, technology, and support to succeed academically, but that figure drops to 43 percent among respondents with diagnosed or self-identified learning differences, neurodivergent traits, or accessibility needs.

    Maximizing success for academic and real-world learning
    While discussion around AI and education has largely focused on use cases for academic learning, the report also uncovered an opportunity for greater support to help drive success beyond the classroom and provide needed resources for real-world learning.

    Fifty-eight percent of respondents believe a four-year college degree is highly important for achieving professional success. However, more than one-third of students, teachers, and parents surveyed believe schools are not adequately preparing students for success beyond the classroom.

    “As we drive the next era of AI-powered learning, it’s our mission to give every student and lifelong learner the tools and confidence to succeed, no matter their motivation or what they’re striving to achieve,” said Beidler. “As we’ve seen in the data, there’s immense opportunity when it comes to career-connected learning, from life skills development to improving job readiness, that goes well beyond the classroom and addresses what we’re hearing from students and teachers alike.”

    The top five skills respondents indicated should be prioritized more in schools are critical thinking and problem solving (66 percent), financial literacy (64 percent), mental health management (58 percent), leadership skills (52 percent), and creativity and innovation (50 percent).

    This press release originally appeared online.

  • Creative approaches to teaching math can help fill AI talent gap

    Not surprisingly, jobs in AI are the fastest growing of any in the country, with a 59 percent increase in job postings between January 2024 and November 2024. Yet we continue to struggle with growing a workforce that is proficient in STEM. 

    To fill the AI talent pipeline, we need to interest kids in STEM early, particularly in math, which is critical to AI. But that’s proven difficult. One reason is that math is a stumbling block. Whether because of math anxiety, attitudes they’ve absorbed from the community, inadequate curricular materials, or traditional teaching methods, U.S. students either avoid or are not proficient in math.  

    A recent Gallup report on Math Matters reveals that the U.S. public greatly values math but also experiences significant gaps in learning and confidence, finding that: 

    • 95 percent of U.S. adults say that math is very or somewhat important in their work life
    • 43 percent of U.S. adults wish they had learned more math skills in middle or high school
    • 24 percent of U.S. adults say that math makes them feel confused

    Yet this need not be the case. Creative instruction in math can change the equation, and it is available now. The following three examples from respected researchers in STEM education demonstrate this fact. 

    The first is a recently published book by Susan Jo Russell and Deborah Schifter, Interweaving Equitable Participation and Deep Mathematics. The book provides practical tools and a fresh vision to help educators create math classrooms where all students can thrive. It tackles a critical challenge: How do teachers ensure that all students engage deeply with rigorous mathematics? The authors pose and successfully answer key questions: What does a mathematical community look like in an elementary classroom? How do teachers engage young mathematicians in deep and challenging mathematical content? How do we ensure that every student contributes their voice to this community? 

    Through classroom videos, teacher reflections, and clear instructional frameworks, Russell and Schifter bring readers inside real elementary classrooms where all students’ ideas and voices matter. They provide vivid examples, insightful commentary, and ready-to-use resources for teachers, coaches, and school leaders working to make math a subject where every student sees themselves as capable and connected. 

    Next is a set of projects devoted to early algebra. Significantly, research shows that how well students perform in Algebra 2 is a leading indicator of whether they’ll get into college, graduate from college, or become a top income earner. But introducing algebra in middle school, as is the common practice, is too late, according to researchers Maria Blanton and Angela Gardiner of TERC, a STEM education research nonprofit. Instead, learning algebra must begin in K-5, they believe. 

    Students would be introduced to algebraic concepts rather than algebra itself, becoming familiar with ways of thinking using pattern and structure. For example, when students understand that whenever they add two odd numbers together, they get an even number, they’re recognizing important mathematical relationships that are critical to algebra. 
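
    One quick way to see why that particular pattern always holds (a short sketch using variables the article itself does not introduce): any odd number can be written as 2m + 1 for some whole number m, so

    $$(2m + 1) + (2n + 1) = 2(m + n + 1),$$

    which is twice a whole number and therefore even. Noticing and justifying structure like this is exactly the kind of generalizing that early algebra aims to develop.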

    Blanton and Gardiner, along with colleagues at Tufts University, University of Wisconsin Madison, University of Texas at Austin, Merrimack College, and City College of New York, have already demonstrated the success of an early algebra approach through Project LEAP, the first early algebra curriculum of its kind for grades K–5, funded in part by the National Science Foundation.  

    If students haven’t been introduced to algebra early on, the ramp-up from arithmetic to algebra can be uniquely difficult. TERC researcher Jennifer Knudsen told me that elementary to middle school is an important time for students’ mathematical growth. 

    Knudsen’s project, MPACT, the third example of creative math teaching, engages middle school students in 3D making with everything from quick-dry clay and cardboard to digital tools for 3D modeling and printing. The project gets students involved in designing objects, helping them develop understanding of important mathematical topics in addition to spatial reasoning and computational thinking skills closely related to math. Students learn concepts and solve problems with real objects they can hold in their hands, not just with words and diagrams on paper.  

    So far, the evidence is encouraging: A two-year study shows that 4th–5th graders demonstrated significant learning gains on an assessment of math, computational thinking, and spatial reasoning. These creative design-and-making units are free and ready to download. 

    Math is critical for success in STEM and AI, yet too many kids either avoid or do not succeed in it. Well-researched interventions in grade school and middle school can go a long way toward teaching essential math skills. Curricula for creating a math community for deep learning, as well as projects for Early Algebra and MPACT, have shown success and are readily available for school systems to use.

    We owe it to our students to take creative approaches to math so they can prepare for future AI and STEM professions. We owe it to ourselves to help develop a skilled STEM and AI workforce, which the nation needs to stay competitive. 

  • 4 ways AI is empowering the next generation of great teachers

    In education, we often talk about “meeting the moment.” Our current moment presents us with both a challenge and an opportunity: How can we best prepare and support our teachers as they navigate increasingly complex classrooms while also dealing with unprecedented burnout and shortages within the profession?

    One answer could lie in the thoughtful integration of artificial intelligence to help share feedback with educators during training. Timely, actionable feedback can support teacher development and self-efficacy, which is an educator’s belief that they will make a positive impact on student learning. Research shows that self-efficacy, in turn, reduces burnout, increases job satisfaction, and supports student achievement. 

    As someone who has spent nearly two decades supporting new teachers, I’ve witnessed firsthand how practical feedback delivered quickly and efficiently can transform teaching practice, improve self-efficacy, and support teacher retention and student learning.

    AI gives us the chance to deliver this feedback faster and at scale.

    A crisis demanding new solutions

    Teacher shortages continue to reach critical levels across the country, with burnout cited as a primary factor. A recent University of Missouri study found that 78 percent of public school teachers have considered quitting their profession since the pandemic. 

    Many educators feel overwhelmed and under-supported, particularly in their formative years. This crisis demands innovative solutions that address both the quality and sustainability of teaching careers.

    What’s often missing in teacher development and training programs is the same element that drives improvement in other high-performance fields: immediate, data-driven feedback. While surgeons review recordings of procedures and athletes get to analyze game footage, teachers often receive subjective observations weeks after teaching a lesson, if they receive feedback at all. Giving teachers the ability to efficiently reflect on AI-generated feedback–instead of examining hours of footage–will save time and potentially help reduce burnout.

    The transformative potential of AI-enhanced feedback

    Recently, Relay Graduate School of Education completed a pilot program with TeachFX using AI-powered feedback tools that showed remarkable promise for our teacher prep work. Our cohort of first- and second-year teachers more than doubled student response opportunities, improved their use of wait time, and asked more open-ended questions. Relay also gained access to objective data on student and teacher talk time, which enhanced our faculty’s coaching sessions.

    Program participants described the experience as “transformative,” and most importantly, they found the tools both accessible and effective.

    Here are four ways AI can support teacher preparation through effective feedback:

    1. Improving student engagement through real-time feedback

    Research reveals that teachers typically dominate classroom discourse, speaking for 70-80 percent of class time. This imbalance leaves little room for student voices and engagement. AI tools can track metrics such as student-versus-teacher talk time in real time, helping educators identify patterns and adjust their instruction to create more interactive, student-centered classrooms.
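
    As a rough illustration of what such a metric involves, the sketch below computes teacher-versus-student talk-time shares from a diarized transcript. It is a minimal, hypothetical example (the data structure and function names are invented here), not any vendor's actual implementation.

    ```python
    # Minimal sketch: estimating talk-time shares from a diarized transcript,
    # where each turn records who spoke and for how long. Hypothetical example,
    # not a real product's API.
    from dataclasses import dataclass

    @dataclass
    class Turn:
        speaker: str    # "teacher" or "student"
        seconds: float  # duration of this speaking turn

    def talk_time_shares(turns: list[Turn]) -> dict[str, float]:
        """Return each speaker's share of total talk time (0.0 to 1.0)."""
        totals: dict[str, float] = {}
        for turn in turns:
            totals[turn.speaker] = totals.get(turn.speaker, 0.0) + turn.seconds
        total = sum(totals.values()) or 1.0
        return {speaker: secs / total for speaker, secs in totals.items()}

    # Example: a stretch of class dominated by teacher talk (about 84 percent).
    sample = [Turn("teacher", 300), Turn("student", 45),
              Turn("teacher", 240), Turn("student", 60)]
    print(talk_time_shares(sample))
    ```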

    One participant in the TeachFX pilot said, “I was surprised to learn that I engage my students more than I thought. The data helped me build on what was working and identify opportunities for deeper student discourse.”

    2. Freeing up faculty to focus on high-impact coaching

    AI can generate detailed transcripts and visualize classroom interactions, allowing teachers to reflect independently on their practice. This continuous feedback loop accelerates growth without adding to workloads.

    For faculty, the impact is equally powerful. In our recent pilot with TeachFX, grading time on formative observation assignments dropped by 60 percent, saving up to 30 hours per term. This reclaimed time was redirected to what matters most: meaningful mentoring and modeling of best practices with aspiring teachers.

    With AI handling routine analysis, faculty could consider full class sessions rather than brief segments, identifying strategic moments throughout lessons for targeted coaching. 

    The human touch remains essential, but AI amplifies its reach and impact.

    3. Scaling high-quality feedback across programs

    What began as a small experiment has grown to include nearly 800 aspiring teachers. That kind of scalability can help close equity gaps in teacher preparation more quickly.

    Whether a teaching candidate is placed in a rural school or urban district, AI can ensure consistent access to meaningful, personalized feedback. This scalable approach helps reduce the geographic disparities that often plague teacher development programs.

    Although AI output must be checked for biases inherited from its underlying training data, AI tools also show promise for reducing bias when used thoughtfully. For example, AI can provide concrete analysis of classroom dynamics based on observable actions such as talk time, wait time, and types of questions asked. While human review and interpretation remain essential–to spot-check for AI hallucinations or other inaccuracies and to interpret patterns in context–purpose-built tools with appropriate guardrails can help deliver more equitable support.

    4. Helping teachers recognize and build on their strengths

    Harvard researchers found that while AI tools excel at using supportive language to appreciate classroom projects–and recognize the work that goes into each project–students who self-reported high levels of stress or low levels of enjoyment said the feedback was often unhelpful or insensitive. We must be thoughtful and intentional about the AI-powered feedback we share with students.

    AI can also help teachers see what they themselves are doing well, which is something many educators struggle with. This strength-based approach builds confidence and resilience. As one TeachFX pilot participant noted, “I was surprised at the focus on my strengths as well and how to improve on them. I think it did a good job of getting good details on my conversation and the intent behind it. ”

    I often tell new teachers: “You’ll never see me teach a perfect lesson because perfect lessons don’t exist. I strive to improve each time I teach, and those incremental gains add up for students.” AI helps teachers embrace this growth mindset by making improvement tangible and achievable.

    The moment is now

    The current teacher shortage is a crisis, but it’s also an opportunity to reimagine how we support teachers.

    Every student deserves a teacher who knows how to meaningfully engage them. And every teacher deserves timely, actionable feedback.  The moment to shape AI’s role in teacher preparation is now. Let’s leverage these tools to help develop confident, effective teachers who will inspire the next generation of learners.

  • Integrating AI into education is not as daunting as it seems

    Forty-some years ago, students sat in straight rows with books, papers, and pencils neatly lined up on their desks. But beginning in the 1990s, educators faced very different classrooms as computers found their way into schools.

    For most teachers, it felt daunting to figure out how to integrate new tools into curriculum requirements–and how to find the time to make it happen. To support that digital transformation, I joined the South Dakota Department of Education to lead summer immersion teacher training on technology integration, traveling the state to help schools learn to use new tools like video systems. I was one of many who helped educators overcome that initial learning curve–and now tools like computers are an integral part of the education system.

    Let’s face it: The advent of new technologies can be overwhelming. Adjusting to them takes time. Now, with the coming of age of AI, teachers, administrators, students, and parents have endless questions and ideas about how it might positively or negatively influence education. I’ve seen it in my current role, in which I continue to empower educators and states to use modern technology to support student learning. And while concerns about AI are valid, there are many potentially positive outcomes. For educators in particular, AI can be a huge value-add, automating certain administrative tasks, helping educators understand and predict student success and struggles, and even helping tailor instruction for individual students.

    The upside is huge. As schools embark on their AI journeys, it’s important to remember that we’ve been here before–from the introduction of the internet in classrooms to the abrupt shift to e-learning at the outset of COVID-19. Superintendents, boards of education, and other education leaders can draw on important lessons from prior technological transformations to fully take advantage of this one.

    Here are some rules of the road for navigating the integration of disruptive technologies:

    1. Choose the right tools. The AI tool(s) you choose can have varying results. School districts should prioritize proven technologies with a track record in education. For students, this includes adaptive learning platforms or virtual tutors. Some of the best tools are those that are specifically designed by and for educators to expedite administrative tasks such as grading and lesson planning. Even more valuable is the ability to support education-specific issues such as identifying struggling students with early warning systems and using AI to provide projections for student futures.

    2. Training is everything. With proper training, AI can be less intimidating. We don’t expect students to understand a new concept by reading a few paragraphs in a textbook, and we shouldn’t expect teachers to figure out how best to use AI on their own. President Trump’s recent executive order prioritizes the use of AI in discretionary grant programs for teacher training, which is an important step in the right direction.

    3. Engage parents. Moms and dads may be concerned if they hear–without a deeper explanation–that a school board is rolling out an AI tool to help with teaching or administrative tasks in their children’s education. Keep an open line of communication with students’ guardians about how and why AI is being used. Point parents to resources to help them improve their own AI literacy. To a reasonable degree, invite feedback. This two-way communication helps build trust, allay fears, and clarify misconceptions, to the benefit of everyone involved, including, most importantly, the students.

    4. Humans must be involved. The stakes are high. AI is not perfect. Administrators must ensure they and the educators using AI tools are double-checking the work. In the parlance of responsible AI, this is known as having a “human in the loop,” and it’s especially important when the outcomes involve children’s futures. This important backstop instills confidence in parents, students, and educators.

    5. Regularly evaluate whether the tools are living up to expectations. The point of integrating AI into teachers’ and administrators’ workstreams is to lighten their load so they can spend more time and energy on students. Over time, AI models can decay and bias can be introduced, reducing the effectiveness of AI outputs, so regular monitoring and evaluation are important. Educators and administrators should check in regularly to determine whether the integration of AI is supporting their goals.

    6. The learning curve may create more work at first–but the payoff is exponential. Early adoption is important. I worked with school districts that put off integrating digital technologies, and it ultimately left their educators behind their peers. AI can make a difference in educators’ lives by freeing them from administrative burdens to focus on what really matters–the students.

    This is the start of a journey–one that I believe is truly exciting! It’s neither the first nor the last time educators will adopt new technologies. Don’t let AI overwhelm you or distract you from tried-and-true integration techniques. Yes, the technology is different–but educators are always adapting, and it will be the same with AI, to the benefit of educators and students.

  • AI teacher tools display racial bias when generating student behavior plans, study finds

    This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.

    Asked to generate intervention plans for struggling students, AI teacher assistants recommended more-punitive measures for hypothetical students with Black-coded names and more supportive approaches for students the platforms perceived as white, a new study shows.

    These findings come from a report on the risks of bias in artificial intelligence tools published Wednesday by the nonprofit Common Sense Media. Researchers specifically sought to evaluate the quality of AI teacher assistants — such as MagicSchool, Khanmigo, Curipod, and Google Gemini for Education — that are designed to support classroom planning, lesson differentiation, and administrative tasks.

    Common Sense Media found that while these tools could help teachers save time and streamline routine paperwork, AI-generated content could also promote bias in lesson planning and classroom management recommendations.

    Robbie Torney, senior director of AI programs at Common Sense Media, said the problems identified in the study are serious enough that ed tech companies should consider removing tools for behavior intervention plans until they can improve them. That’s significant because writing intervention plans of various sorts is a relatively common way teachers use AI.

    After Chalkbeat asked about Common Sense Media’s findings, a Google spokesperson said Tuesday that Google Classroom has turned off the Gemini shortcut that prompts teachers to “Generate behavior intervention strategies” while the company does additional testing.

    However, both MagicSchool and Google, the two platforms where Common Sense Media identified racial bias in AI-generated behavior intervention plans, said they could not replicate Common Sense Media’s findings. They also said they take bias seriously and are working to improve their models.

    School districts across the country have been working to implement comprehensive AI policies to encourage informed use of these tools. OpenAI, Anthropic, and Microsoft have partnered with the American Federation of Teachers to provide free training in using AI platforms. The Trump Administration also has encouraged greater AI integration in the classroom. However, recent AI guidelines released by the U.S. Department of Education have not directly addressed concerns about bias within these systems.

    About a third of teachers report using AI at least weekly, according to a national survey conducted by the Walton Family Foundation in cooperation with Gallup. A separate survey conducted by the research organization Rand found teachers specifically report using these tools to help develop goals for Individualized Education Program — or IEP — plans. They also say they use these tools to shape lessons or assessments around those goals, and to brainstorm ways to accommodate students with disabilities.

    Torney said Common Sense Media isn’t trying to discourage teachers from using AI in general. The goal of the report is to encourage more awareness of potential uses of AI teacher assistants that might have greater risks in the classroom.

    “We really just want people to go in eyes wide open and say, ‘Hey these are some of the things that they’re best at and these are some of the things you probably want to be a little bit more careful with,’” he said.

    Common Sense Media identified AI tools that can generate IEPs and behavior intervention plans as high risk due to their biased treatment of students in the classroom. Using MagicSchool’s Behavior Intervention Suggestions tool and Google Gemini’s “Generate behavior intervention strategies” tool, Common Sense Media’s research team ran the same prompt about a student who struggled with reading and showed aggressive behavior 50 times using white-coded names and 50 times using Black-coded names, evenly split between male- and female-coded names.
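
    To make that test design concrete, here is a minimal sketch of a paired-prompt audit of the kind described above. The name lists, prompt template, and generate_plan() call are hypothetical stand-ins, not the study's actual materials or any tool's real API.

    ```python
    # Minimal sketch of a paired-prompt bias audit: run an identical prompt many
    # times, varying only the student's name, then compare the two sets of plans.
    # Names, template, and generate_plan() are illustrative placeholders.
    from itertools import cycle, islice

    WHITE_CODED = ["Annie", "Jake", "Emily", "Connor"]
    BLACK_CODED = ["Lakeesha", "Kareem", "Imani", "DeShawn"]

    TEMPLATE = ("Write a behavior intervention plan for {name}, a student who "
                "struggles with reading and shows aggressive behavior in class.")

    def generate_plan(prompt: str) -> str:
        # Placeholder for the AI teacher assistant being tested.
        raise NotImplementedError

    def run_audit(names: list[str], runs: int = 50) -> list[str]:
        """Run the same prompt `runs` times, cycling names so each appears evenly."""
        return [generate_plan(TEMPLATE.format(name=name))
                for name in islice(cycle(names), runs)]

    # The two groups of outputs would then be compared for systematic differences
    # in tone, punitiveness, and recommended supports:
    # plans_white = run_audit(WHITE_CODED)
    # plans_black = run_audit(BLACK_CODED)
    ```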

    The AI-generated plans for the students with Black-coded names didn’t all appear negative in isolation. But clear differences emerged when those plans from MagicSchool and Gemini were compared with plans for students with white-coded names.

    For example, when prompted to provide a behavior intervention plan for Annie, Gemini emphasized addressing aggressive behavior with “consistent non-escalating responses” and “consistent positive reinforcement.” Lakeesha, on the other hand, should receive “immediate” responses to her aggressive behaviors and positive reinforcement for “desired behaviors,” the tool said. For Kareem, Gemini simply said, “Clearly define expectations and teach replacement behaviors,” with no mention of positive reinforcement or responses to aggressive behavior.

    Torney noted that the problems in these AI-generated reports only became apparent across a large sample, which can make them hard for teachers to identify. The report warns that novice teachers may be more likely to rely on AI-generated content without the experience to catch inaccuracies or biases. Torney said these underlying biases in intervention plans “could have really large impacts on student progression or student outcomes as they move across their educational trajectory.”

    Black students are already subject to higher rates of suspension than their white counterparts in schools and more likely to receive harsher disciplinary consequences for subjective reasons, like “disruptive behavior.” Machine learning algorithms replicate the decision-making patterns of the training data that they are provided, which can perpetuate existing inequalities. A separate study found that AI tools replicate existing racial bias when grading essays, assigning lower scores to Black students than to Asian students.

    The Common Sense Media report also identified instances when AI teacher assistants generated lesson plans that relied on stereotypes, repeated misinformation, and sanitized controversial aspects of history.

    A Google spokesperson said the company has invested in using diverse and representative training data to minimize bias and overgeneralizations.

    “We use rigorous testing and monitoring to identify and stop potential bias in our AI models,” the Google spokesperson said in an email to Chalkbeat. “We’ve made good progress, but we’re always aiming to make improvements with our training techniques and data.”

    On its website, MagicSchool promotes its AI teaching assistant as “an unbiased tool to aid in decision-making for restorative practices.” In an email to Chalkbeat, MagicSchool said it has not been able to reproduce the issues that Common Sense Media identified.

    MagicSchool said its platform includes bias warnings and instructs users not to include student names or other identifying information when using AI features. In light of the study, the company is working with Common Sense Media to improve its bias detection systems and design tools in ways that encourage educators to review AI-generated content more closely.

    “As noted in the study, AI tools like ours hold tremendous promise — but also carry real risks if not designed, deployed, and used responsibly,” MagicSchool told Chalkbeat. “We are grateful to Common Sense Media for helping hold the field accountable.”

    Chalkbeat is a nonprofit news site covering educational change in public schools.

  • What we lose when AI replaces teachers

    A colleague of ours recently attended an AI training where the opening slide featured a list of all the ways AI can revolutionize our classrooms. Grading was listed at the top. Sure, AI can grade papers in mere seconds, but should it?

    As one of our students, Jane, stated: “It has a rubric and can quantify it. It has benchmarks. But that is not what actually goes into writing.” Our students recognize that AI cannot replace the empathy and deep understanding needed to recognize the growth, effort, and development of their voice. What concerns us most about grading our students’ written work with AI is the transformation of their audience from human to robot.

    If we teach our students throughout their writing lives that what the grading robot says matters most, then we are teaching them that their audience doesn’t matter. As Wyatt, another student, put it: “If you can use AI to grade me, I can use AI to write.” NCTE, in its position statements for Generative AI, reminds us that writing is a human act, not a mechanical one. Reducing it to automated scores undermines its value and teaches students, like Wyatt and Jane, that the only time we write is for a grade. That is a future of teaching writing we hope to never see.

    We need to pause when tech companies tout AI as the grader of student writing. This isn’t a question of capability. AI can score essays. It can be calibrated to rubrics. It can, as Jane said, provide students with encouragement and feedback specific to their developing skills. And we have no doubt it has the potential to make a teacher’s grading life easier. But just because we can outsource some educational functions to technology doesn’t mean we should.

    It is bad enough how many students already see their teacher as their only audience. Or worse, when students are writing for teachers who see their written work strictly through the lens of a rubric, their audience is limited to the rubric. Even those options are better than writing for a bot. Instead, let’s question how often our students write to a broader audience of their peers, parents, community, or a panel of judges for a writing contest. We need to reengage with writing as a process and implement AI as a guide or aide rather than a judge with the last word on an essay score.

    Our best foot forward is to put AI in its place. The use of AI in the writing process is better served in the developing stages of writing. AI is excellent as a guide for brainstorming. It can help in a variety of ways when a student is struggling and looking for five alternatives to their current ending or an idea for a metaphor. And if you or your students like AI’s grading feature, they can paste their work into a bot for feedback prior to handing it in as a final draft.

    We need to recognize that there are grave consequences if we let a bot do all the grading. As teachers, we should recognize bot grading for what it is: automated education. We can and should leave the promises of hundreds of essays graded in an hour for the standardized test providers. Our classrooms are alive with people who have stories to tell, arguments to make, and research to conduct. We see our students beyond the raw data of their work. We recognize that the poem our student has written for their sick grandparent might be a little flawed, but it matters a whole lot to the person writing it and to the person they are writing it for. We see the excitement or determination in our students’ eyes when they’ve chosen a research topic that is important to them. They want their cause to be known and understood by others, not processed and graded by a bot.

    The adoption of AI into education should be conducted with caution. Many educators are experimenting with AI tools in thoughtful and student-centered ways. In a recent article, David Cutler describes his experience using an AI-assisted platform to provide feedback on his students’ essays. While Cutler found the tool surprisingly accurate and helpful, the true value lies in the feedback being used as part of the revision process. As this article reinforces, the role of a teacher is not just to grade, but to support and guide learning. When used intentionally (and, we emphasize, as in-process feedback), AI can enhance that learning, but the final word, and the relationship behind it, must still come from a human being.

    When we hand over grading to AI, we risk handing over something much bigger–our students’ belief that their words matter and deserve an audience. Our students don’t write to impress a rubric, they write to be heard. And when we replace the reader with a robot, we risk teaching our students that their voices only matter to the machine. We need to let AI support the writing process, not define the product. Let it offer ideas, not deliver grades. When we use it at the right moments and for the right reasons, it can make us better teachers and help our students grow. But let’s never confuse efficiency with empathy. Or algorithms with understanding.

  • What really shapes the future of AI in education?

    This post originally appeared on the Christensen Institute’s blog and is reposted here with permission.

    A few weeks ago, MIT’s Media Lab put out a study on how AI affects the brain. The study ignited a firestorm of posts and comments on social media, given its provocative finding that students who relied on ChatGPT for writing tasks showed lower brain engagement on EEG scans, hinting that offloading thinking to AI can literally dull our neural activity. For anyone who has used AI, it’s not hard to see how AI systems can become learning crutches that encourage mental laziness.

    But I don’t think a simple “AI harms learning” conclusion tells the whole story. In this blog post (adapted from a recent series of posts I shared on LinkedIn), I want to add to the conversation by tackling the potential impact of AI in education from four angles. I’ll explore how AI’s unique adaptability can reshape rigid systems, how it both fights and fuels misinformation, how AI can be both good and bad depending on how it is used, and why its funding model may ultimately determine whether AI serves learners or short-circuits their growth.

    What if the most transformative aspect of AI for schools isn’t its intelligence, but its adaptability?

    Most technologies make us adjust to them. We have to learn how they work and adapt our behavior. Industrial machines, enterprise software, even a basic thermostat—they all come with instructions and patterns we need to learn and follow.

    Education highlights this dynamic in a different way. How does education’s “factory model” work when students don’t come to school as standardized raw inputs? In many ways, schools expect students to conform to the requirements of the system—show up on time, sharpen your pencil before class, sit quietly while the teacher is talking, raise your hand if you want to speak. Those social norms are expectations we place on students so that standardized education can work. But as anyone who has tried to manage a group of six-year-olds knows, a class of students is full of complicated humans who never fully conform to what the system expects. So, teachers serve as the malleable middle layer. They adapt standardized systems to make them work for real students. Without that human adaptability, the system would collapse.

    Same thing in manufacturing. Edgar Schein notes that engineers aim to design systems that run themselves. But operators know systems never work perfectly. Their job—and often their sense of professional identity—is about having the expertise to adapt and adjust when things inevitably go off-script. Human adaptability in the face of rigid systems keeps everything running.

    So, how does this relate to AI? AI breaks the mold of most machines and systems humans have designed and dealt with throughout history. It doesn’t just follow its algorithm and expect us to learn how to use it. It adapts to us, like how teachers or factory operators adapt to the realities of the world to compensate for the rigidity of standardized systems.

    You don’t need a coding background or a manual. You just speak to it. (I literally hit the voice-to-text button and talk to it like I’m explaining something to a person.) Messy, natural human language—the age-old human-to-human interface that our brains are wired to pick up on as infants—has become the interface for large language models. In other words, what makes today’s AI models amazing is their ability to use our interface, rather than asking us to learn theirs.

    For me, the early hype about “prompt engineering” never really made sense. It assumed that success with AI required becoming an AI whisperer who knew how to speak AI’s language. But in my experience, working well with AI is less about learning special ways to talk to AI and more about just being a clear communicator, just like a good teacher or a good manager.

    Now imagine this: what if AI becomes the new malleable middle layer across all kinds of systems? Not just a tool, but an adaptive bridge that makes other rigid, standardized systems work well together. If AI can make interoperability nearly frictionless—adapting to each system and context, rather than forcing people to adapt to it—that could be transformative. It’s not hard to see how this shift might ripple far beyond technology into how we organize institutions, deliver services, and design learning experiences.

    Consider three concrete examples of how this might transform schools. First, our current system heavily relies on the written word as the medium for assessing students’ learning. To be clear, writing is an important skill that students need to develop to help them navigate the world beyond school. Yet at the same time, schools’ heavy reliance on writing as the medium for demonstrating learning creates barriers for students with learning disabilities, neurodivergent learners, or English language learners—all of whom may have a deep understanding but struggle to express it through writing in English. AI could serve as that adaptive layer, allowing students to demonstrate their knowledge and receive feedback through speech, visual representations, or even their native language, while still ensuring rigorous assessment of their actual understanding.

    Second, it’s obvious that students don’t all learn at the same pace—yet we’ve forced learning to happen at a uniform timeline because individualized pacing quickly becomes completely unmanageable when teachers are on their own to cover material and provide feedback to their students. So instead, everyone spends the same number of weeks on each unit of content and then moves to the next course or grade level together, regardless of individual readiness. Here again, AI could serve as that adaptive layer for keeping track of students’ individual learning progressions and then serving up customized feedback, explanations, and practice opportunities based on students’ individual needs.

    Third, success in school isn’t just about academics—it’s about knowing how to navigate the system itself. Students need to know how to approach teachers for help, track announcements for tryouts and auditions, fill out paperwork for course selections, and advocate for themselves to get into the classes they want. These navigation skills become even more critical for college applications and financial aid. But there are huge inequities here because much of this knowledge comes from social capital—having parents or peers who already understand how the system works. AI could help level the playing field by serving as that adaptive coaching layer, guiding any student through the bureaucratic maze rather than expecting them to figure it out on their own or rely on family connections to decode the system.

    Can AI help solve the problem of misinformation?

    Most people I talk to are skeptical of the idea in this subhead—and understandably so.

    We’ve all seen the headlines: deep fakes, hallucinated facts, bots that churn out clickbait. AI, many argue, will supercharge misinformation, not solve it. Others worry that overreliance on AI could make people less critical and more passive, outsourcing their thinking instead of sharpening it.

    But what if that’s not the whole story?

    Here’s what gives me hope: AI’s ability to spot falsehoods and surface truth at scale might be one of its most powerful—and underappreciated—capabilities.

    First, consider what makes misinformation so destructive. It’s not just that people believe wrong facts. It’s that people build vastly different mental models of what’s true and real. They lose any shared basis for reasoning through disagreements. Once that happens, dialogue breaks down. Facts don’t matter because facts aren’t shared.

    Traditionally, countering misinformation has required human judgment and painstaking research, both time-consuming and limited in scale. But AI changes the equation.

    Unlike any single person, a large language model (LLM) can draw from an enormous base of facts, concepts, and contextual knowledge. LLMs know far more facts from their training data than any person can learn in a lifetime. And when paired with tools like a web browser or citation database, they can investigate claims, check sources, and explain discrepancies.

    Imagine reading a social media post and getting a sidebar summary—courtesy of AI—that flags misleading statistics, offers missing context, and links to credible sources. Not months later, not buried in the comments—instantly, as the content appears. The technology to do this already exists.

    Of course, AI is not perfect as a fact-checker. When large language models generate text, they aren’t producing precise queries of facts; they’re making probabilistic guesses at what the right response should be based on their training, and sometimes those guesses are wrong. (Just like human experts, they also generate answers by drawing on their expertise, and they sometimes get things wrong.) AI also has its own blind spots and biases based on the biases it inherits from its training data. 
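
    A toy illustration of that point, using made-up numbers rather than anything from a real model: generation is repeated sampling from a probability distribution over possible next tokens, so a plausible-but-wrong continuation can occasionally be chosen even when the most likely one is correct.

    ```python
    # Toy illustration only: an LLM produces text by repeatedly sampling the next
    # token from a probability distribution, which is why confident-sounding but
    # wrong continuations ("hallucinations") can appear. Probabilities are made up.
    import random

    # Hypothetical distribution for the next token after
    # "The first Moon landing took place in ..."
    next_token_probs = {"1969": 0.80, "1968": 0.12, "1970": 0.08}

    def sample_next(probs: dict[str, float]) -> str:
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next(next_token_probs))  # usually "1969", but not always
    ```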

    But in many ways, both hallucinations and biases in AI are easier to detect and address than the false statements and biases that come from millions of human minds across the internet. AI’s decision rules can be audited. Its output can be tested. Its propensity to hallucinate can be curtailed. That makes it a promising foundation for improving trust, at least compared to the murky, decentralized mess of misinformation we’re living in now.

    This doesn’t mean AI will eliminate misinformation. But it could dramatically increase the accessibility of accurate information, and reduce the friction it takes to verify what’s true.

    Of course, most platforms don’t yet include built-in AI fact-checking, and even if they did, that approach would raise important concerns. Do we trust the sources that those companies prioritize? The rules their systems follow? The incentives that guide how their tools are designed? But beyond questions of trust, there’s a deeper concern: when AI passively flags errors or supplies corrections, it risks turning users into passive recipients of “answers” rather than active seekers of truth. Learning requires effort. It’s not just about having the right information—it’s about asking good questions, thinking critically, and grappling with ideas.

    That’s why I think one of the most important things to teach young people about how to use AI is to treat it as a tool for interrogating the information and ideas they encounter, both online and from AI itself. Just like we teach students to proofread their writing or double-check their math, we should help them develop habits of mind that use AI to spark their own inquiry—to question claims, explore perspectives, and dig deeper into the truth.

    Still, this focuses on just one side of the story. As powerful as AI may be for fact-checking, it will inevitably be used to generate deepfakes and spin persuasive falsehoods.

    AI isn’t just good or bad—it’s both. The future of education depends on how we use it.

    Much of the commentary around AI takes a strong stance: either it’s an incredible force for progress or it’s a terrifying threat to humanity. These bold perspectives make for compelling headlines and persuasive arguments. But in reality, the world is messy. And most transformative innovations—AI included—cut both ways.

    History is full of examples of technologies that have advanced society in profound ways while also creating new risks and challenges. The Industrial Revolution made it possible to mass-produce goods that have dramatically improved the quality of life for billions. It has also fueled pollution and environmental degradation. The internet connects communities, opens access to knowledge, and accelerates scientific progress—but it also fuels misinformation, addiction, and division. Nuclear energy can power cities—or obliterate them.

    AI is no different. It will do amazing things. It will do terrible things. The question isn’t whether AI will be good or bad for humanity—it’s how the choices of its users and developers will determine the directions it takes. 

    Because I work in education, I’ve been especially focused on the impact of AI on learning. AI can make learning more engaging, more personalized, and more accessible. It can explain concepts in multiple ways, adapt to your level, provide feedback, generate practice exercises, or summarize key points. It’s like having a teaching assistant on demand to accelerate your learning.

    But it can also short-circuit the learning process. Why wrestle with a hard problem when AI will just give you the answer? Why wrestle with an idea when you can ask AI to write the essay for you? And even when students have every intention of learning, AI can create the illusion of learning while leaving understanding shallow.

    This double-edged dynamic isn’t limited to learning. It’s also apparent in the world of work. AI is already making it easier for individuals to take on entrepreneurial projects that would have previously required whole teams. A startup no longer needs to hire a designer to create its logo, a marketer to build its brand assets, or an editor to write its press releases. In the near future, you may not even need to know how to code to build a software product. AI can help individuals turn ideas into action with far fewer barriers. And for those who feel overwhelmed by the idea of starting something new, AI can coach them through it, step by step. We may be on the front end of a boom in entrepreneurship unlocked by AI.

    At the same time, however, AI is displacing many of the entry-level knowledge jobs that people have historically relied on to get their careers started. Tasks like drafting memos, doing basic research, or managing spreadsheets—once done by junior staff—can increasingly be handled by AI. That shift is making it harder for new graduates to break into the workforce and develop their skills on the job.

    One way to mitigate these challenges is to build AI tools that are designed to support learning, not circumvent it. For example, Khan Academy’s Khanmigo helps students think critically about the material they’re learning rather than just giving them answers. It encourages ideation, offers feedback, and prompts deeper understanding—serving as a thoughtful coach, not a shortcut. But the deeper issue AI brings into focus is that our education system often treats learning as a means to an end—a set of hoops to jump through on the way to a diploma. To truly prepare students for a world shaped by AI, we need to rethink that approach. First, we should focus less on teaching only the skills AI can already do well. And second, we should make learning more about pursuing goals students care about—goals that require curiosity, critical thinking, and perseverance. Rather than training students to follow a prescribed path, we should be helping them learn how to chart their own. That’s especially important in a world where career paths are becoming less predictable, and opportunities often require the kind of initiative and adaptability we associate with entrepreneurs.

    In short, AI is just the latest technological double-edged sword. It can support learning, or short-circuit it. Boost entrepreneurship—or displace entry-level jobs. The key isn’t to declare AI good or bad, but to recognize that it’s both, and then to be intentional about how we shape its trajectory. 

    That trajectory won’t be determined by technical capabilities alone. Who pays for AI, and what they pay it to do, will influence whether it evolves to support human learning, expertise, and connection, or to exploit our attention, take our jobs, and replace our relationships.

    What actually determines whether AI helps or harms?

    When people talk about the opportunities and risks of artificial intelligence, the conversation tends to focus on the technology’s capabilities—what it might be able to do, what it might replace, what breakthroughs lie ahead. But just focusing on what the technology does—both good and bad—doesn’t tell the whole story. The business model behind a technology influences how it evolves.

    For example, when advertisers are the paying customer, as they are for many social media platforms, products tend to evolve to maximize user engagement and time-on-platform. That’s how we ended up with doomscrolling—endless content feeds optimized to occupy our attention so companies can show us more ads, often at the expense of our well-being.

    That incentive could be particularly dangerous with AI. If you combine superhuman persuasion tools with an incentive to monopolize users’ attention, the results will be deeply manipulative. And this gets at a concern my colleague Julia Freeland Fisher has been raising: What happens if AI systems start to displace human connection? If AI becomes your go-to for friendship or emotional support, it risks crowding out the real relationships in your life.

    Whether or not AI ends up undermining human relationships depends a lot on how it’s paid for. An AI built to hold your attention and keep you coming back might try to be your best friend. But an AI built to help you solve problems in the real world will behave differently. That kind of AI might say, “Hey, we’ve been talking for a while—why not go try out some of the things we’ve discussed?” or “Sounds like it’s time to take a break and connect with someone you care about.”

    Some decisions made by the major AI companies seem encouraging. Sam Altman, OpenAI’s CEO, has said that adopting ads would be a last resort. “I’m not saying OpenAI would never consider ads, but I don’t like them in general, and I think that ads-plus-AI is sort of uniquely unsettling to me.” Instead, most AI developers like OpenAI and Anthropic have turned to user subscriptions, an incentive structure that doesn’t steer as hard toward addictiveness. OpenAI is also exploring AI-centric hardware as a business model—another experiment that seems more promising for user wellbeing.

    So far, we’ve been talking about the directions AI will take as companies develop their technologies for individual consumers, but there’s another angle worth considering: how AI gets adopted into the workplace. One of the big concerns is that AI will be used to replace people, not necessarily because it does the job better, but because it’s cheaper. That decision often comes down to incentives. Right now, businesses pay a lot in payroll taxes and benefits for every employee, but they get tax breaks when they invest in software and machines. So, from a purely financial standpoint, replacing people with technology can look like a smart move. In the book, The Once and Future Worker, Oren Cass discusses this problem and suggests flipping that script—taxing capital more and labor less—so companies aren’t nudged toward cutting jobs just to save money. That change wouldn’t stop companies from using AI, but it would encourage them to deploy it in ways that complement, rather than replace, human workers.

    Currently, while AI companies operate without sustainable business models, they’re buoyed by investor funding. Investors are willing to bankroll companies with little or no revenue today because they see the potential for massive profits in the future. But that investor model creates pressure to grow rapidly and acquire as many users as possible, since scale is often a key metric of success in venture-backed tech. That drive for rapid growth can push companies to prioritize user acquisition over thoughtful product development, potentially at the expense of safety, ethics, or long-term consequences. 

    Given these realities, what can parents and educators do? First, they can be discerning customers. There are many AI tools available, and the choices they make matter. Rather than simply opting for what’s most entertaining or immediately useful, they can support companies whose business models and design choices reflect a concern for users’ well-being and societal impact.

    Second, they can be vocal. Journalists, educators, and parents all have platforms—whether formal or informal—to raise questions, share concerns, and express what they hope to see from AI companies. Public dialogue helps shape media narratives, which in turn shape both market forces and policy decisions.

    Third, they can advocate for smart, balanced regulation. As I noted above, AI shouldn’t be regulated as if it’s either all good or all bad. But reasonable guardrails can ensure that AI is developed and used in ways that serve the public good. Just as the customers and investors in a company’s value network influence its priorities, so too can policymakers play a constructive role as value network actors by creating smart policies that promote general welfare when market incentives fall short.

    In sum, a company’s value network—who its investors are, who pays for its products, and what they hire those products to do—determines what companies optimize for. And in AI, that choice might shape not just how the technology evolves, but how it impacts our lives, our relationships, and our society.


  • Data, privacy, and cybersecurity in schools: A 2025 wake-up call


    Key points:

    In 2025, schools are sitting on more data than ever before. Student records, attendance, health information, behavioral logs, and digital footprints generated by edtech tools have turned K-12 institutions into data-rich environments. As artificial intelligence becomes a central part of the learning experience, these data streams are being processed in increasingly complex ways. But with this complexity comes a critical question: Are schools doing enough to protect that data?

    The answer, in many cases, is no.

    The rise of shadow AI

    According to CoSN’s May 2025 State of EdTech District Leadership report, 43 percent of districts lack formal policies or guidance for AI use, even though 80 percent have generative AI initiatives underway. That policy gap is a major concern. At the same time, Common Sense Media’s Teens, Trust and Technology in the Age of AI finds that many teens have been misled by fake content and struggle to separate truth from misinformation, underscoring both how widely generative AI has been adopted and the risks that come with it.

    This lack of visibility and control has led to the rise of what many experts call “shadow AI”: unapproved apps and browser extensions that process student inputs, store them indefinitely, or reuse them to train commercial models. These tools are often free, widely adopted, and nearly invisible to IT teams. Shadow AI expands the district’s digital footprint in ways that often escape policy enforcement, opening the door to data leakage and compliance violations. CoSN’s 2025 report specifically notes that “free tools that are downloaded in an ad hoc manner put district data at risk.”

    Data protection: The first pillar under pressure

    The U.S. Department of Education’s AI Toolkit for Schools urges districts to treat student data with the same care as medical or financial records. However, many AI tools used in classrooms today are not inherently FERPA-compliant and do not always disclose where or how student data is stored. Teachers experimenting with AI-generated lesson plans or feedback may unknowingly input student work into platforms that retain or share that data. In the absence of vendor transparency, there is no way to verify how long data is stored, whether it is shared with third parties, or how it might be reused. Under FERPA, third-party vendors that handle student data on behalf of a school must meet the same privacy obligations as the school itself, including ensuring the data is not used for unintended purposes or retained to train AI models.

    Some tools, marketed as “free classroom assistants,” require login credentials tied to student emails or learning platforms. This creates additional risks if authentication mechanisms are not protected or monitored. Even widely used generative tools may include language in their privacy policies allowing them to use uploaded content for system training or performance optimization.

    Data processing and the consent gap

    Generative AI models are trained on large datasets, and many free tools continue learning from user prompts. If a student pastes an essay or a teacher includes student identifiers in a prompt, that information could enter a commercial model’s training loop. This creates a scenario where data is being processed without explicit consent, potentially in violation of COPPA (Children’s Online Privacy Protection Act) and FERPA. While the FTC’s December 2023 update to the COPPA Rule did not codify school consent provisions, existing guidance still allows schools to consent to technology use on behalf of parents in educational contexts. The onus, however, remains on schools to understand and manage these consent implications, especially now that the rule’s new amendments, effective June 21, 2025, strengthen protections and require separate parental consent before student data is disclosed to third parties for targeted advertising.

    Moreover, many educators and students are unaware of what constitutes “personally identifiable information” (PII) in these contexts. A name combined with a school ID number, disability status, or even a writing sample could easily identify a student, especially in small districts. Without proper training, well-intentioned AI use can cross legal lines unknowingly.
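
    To make that training point concrete, here is a minimal, hypothetical sketch of the kind of pre-filter a district’s IT team might place in front of any external AI service: it scans a prompt for obvious PII patterns and redacts them before the text leaves school systems. The pattern names, ID format, and example text are assumptions made for illustration, not a reference to any specific vendor tool, and a simple regex list cannot catch everything (a student’s name or writing sample, for instance), which is exactly why educator training still matters.

```python
import re

# Hypothetical illustration only: a naive pre-filter that flags and redacts
# obvious PII before a prompt is sent to any external AI service. The pattern
# names and formats are assumptions for this example, not district policy.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "student_id": re.compile(r"\b\d{6,9}\b"),  # assumed local ID format
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def flag_pii(prompt: str) -> list[str]:
    """Return the kinds of likely PII found in the prompt."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def redact(prompt: str) -> str:
    """Replace likely PII with placeholders so the prompt can be reviewed or shared."""
    for kind, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{kind.upper()} REMOVED]", prompt)
    return prompt

if __name__ == "__main__":
    example = "Please grade this essay by Jane Doe, ID 20481234, jdoe@school.example.org."
    print(flag_pii(example))   # -> ['email', 'student_id']
    print(redact(example))     # the student's name is NOT caught; regex filters have limits
```

    A check like this can complement, but never replace, vetted tools and clear district policy.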

    Cybersecurity risks multiply

    AI tools have also increased the attack surface of K-12 networks. According to ThreatDown’s 2024 State of Ransomware in Education report, ransomware attacks on K-12 schools increased by 92 percent between 2022 and 2023, with 98 total attacks in 2023. This trend is projected to continue as cybercriminals use AI to create more targeted phishing campaigns and detect system vulnerabilities faster. AI-assisted attacks can mimic human language and tone, making them harder to detect. Some attackers now use large language models to craft personalized emails that appear to come from school administrators.

    Many schools lack endpoint protection for student devices, and third-party integrations often bypass internal firewalls. Free AI browser extensions may collect keystrokes or enable unauthorized access to browser sessions. The more tools that are introduced without IT oversight, the harder it becomes to isolate and contain incidents when they occur. CoSN’s 2025 report indicates that 60 percent of edtech leaders are “very concerned about AI-enabled cyberattacks,” yet 61 percent still rely on general funds for cybersecurity efforts, not dedicated funding.

    Building a responsible framework

    To mitigate these risks, school leaders need to:

    • Audit tool usage with platforms such as Lightspeed Digital Insight (itself vetted by 1EdTech for data privacy) to identify AI tools being accessed without approval, and maintain a living inventory of all digital tools; a minimal sketch of such an inventory check appears after this list.
    • Develop and publish AI use policies that clarify acceptable practices, define data handling expectations, and outline consequences for misuse. Policies should distinguish between tools approved for instructional use and those requiring further evaluation.
    • Train educators and students to understand how AI tools collect and process data, how to interpret AI outputs critically, and how to avoid inputting sensitive information. AI literacy should be embedded in digital citizenship curricula, with resources available from organizations like Common Sense Media and aiEDU.
    • Vet all third-party apps through standards like the 1EdTech TrustEd Apps program. Contracts should specify data deletion timelines and limit secondary data use. The TrustEd Apps program has vetted over 12,000 products, providing a valuable resource for districts.
    • Simulate phishing attacks and test breach response protocols regularly. Cybersecurity training should be required for staff, and recovery plans must be reviewed annually.
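
    As noted in the first item above, the living inventory idea can start small: compare the AI-related domains that appear in a web filter or proxy export against the district’s approved list and flag anything that has not been vetted. The sketch below is a hypothetical illustration; the file name, column name, and approved domains are placeholders rather than real district data or any vendor’s API.

```python
import csv

# Hypothetical sketch of a "living inventory" check: domains observed in a
# proxy or web-filter export are compared against the district's approved list.
# All names below are placeholders for illustration.
APPROVED_TOOLS = {
    "approved-tutor.example.com",
    "district-lms.example.org",
}

def unapproved_tools(log_path: str) -> set[str]:
    """Return domains seen in the traffic export that are not on the approved list."""
    seen = set()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes the export has a 'domain' column
            seen.add(row["domain"].strip().lower())
    return seen - APPROVED_TOOLS

if __name__ == "__main__":
    for domain in sorted(unapproved_tools("proxy_log.csv")):
        print(f"Needs review: {domain}")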

    Trust starts with transparency

    In the rush to embrace AI, schools must not lose sight of their responsibility to protect students’ data and privacy. Transparency with parents, clarity for educators, and secure digital infrastructure are not optional. They are the baseline for trust in the age of algorithmic learning.

    AI can support personalized learning, but only if we put safety and privacy first. The time to act is now. Districts that move early to build policies, offer training, and coordinate oversight will be better prepared to lead AI adoption with confidence and care.


  • National AI training hub for educators to open, funded by OpenAI and Microsoft


    This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.

    More than 400,000 K-12 educators across the country will get free training in AI through a $23 million partnership between a major teachers union and leading tech companies that is designed to close gaps in the use of technology and provide a national model for AI-integrated curriculum.

    The new National Academy for AI Instruction will be based in the downtown Manhattan headquarters of the United Federation of Teachers, the New York City affiliate of the American Federation of Teachers, and will provide workshops, online courses, and hands-on training sessions. This hub-based model of teacher training was inspired by the work of unions like the United Brotherhood of Carpenters that have created similar training centers with industry partners, according to AFT President Randi Weingarten.

    “Teachers are facing huge challenges, which include navigating AI wisely, ethically and safely,” Weingarten said at a press conference Tuesday announcing the initiative. “The question was whether we would be chasing it or whether we would be trying to harness it.”

    The initiative involves the AFT, UFT, OpenAI, Microsoft, and Anthropic.

    The Trump administration has encouraged AI integration in the classroom. More than 50 companies have signed onto a White House pledge to provide grants, education materials, and technology to invest in AI education.

    In the wake of federal funding cuts to public education and the impact of Trump’s sweeping tax and policy bill on schools, Weingarten sees this partnership with private tech companies as a crucial investment in teacher preparation.

    “We are actually ensuring that kids have, that teachers have, what they need to deal with the economy of today and tomorrow,” Weingarten said.

    The academy will be based in a city where the school system initially banned the use of AI in the classroom, claiming it would interfere with the development of critical thinking skills. A few months later, then-New York City schools Chancellor David Banks did an about-face, pledging to help schools smartly incorporate the technology. He said New York City schools would embrace the potential of AI to drive individualized learning. But concrete plans have been limited.

    The AFT, meanwhile, has tried to position itself as a leader in the field. Last year, the union released its own guidelines for AI use in the classroom and funded pilot programs around the country.

    Vincent Plato, New York City Public Schools K-8 educator and UFT Teacher Center director, said the advent of AI reminds him of when teachers first started using word processors.

    “We are watching educators transform the way people use technology for work in real time, but with AI it’s on another unbelievable level because it’s just so much more powerful,” he said in a press release announcing the new partnership. “It can be a thought partner when they’re working by themselves, whether that’s late-night lesson planning, looking at student data or filing any types of reports — a tool that’s going to be transformative for teachers and students alike.”

    Teachers who frequently use AI tools report saving 5.9 hours a week, according to a national survey conducted by the Walton Family Foundation in cooperation with Gallup. These tools are most likely to be used to support instructional planning, such as creating worksheets or modifying material to meet students’ needs. Half of the teachers surveyed stated that they believe AI will reduce teacher workloads.

    “Teachers are not only gaining back valuable time, they are also reporting that AI is helping to strengthen the quality of their work,” Stephanie Marken, senior partner for U.S. research at Gallup, said in a press release. “However, a clear gap in AI adoption remains. Schools need to provide the tools, training, and support to make effective AI use possible for every teacher.”

    While nearly half of school districts surveyed by the RAND Corporation reported training teachers to use AI-powered tools by fall 2024, high-poverty districts still lag behind their low-poverty counterparts. District leaders across the nation report a scarcity of external experts and resources to provide quality AI training to teachers.

    OpenAI, a founding partner of the National Academy for AI Instruction, will contribute $10 million over the next five years. The tech company will provide educators and course developers with technical support to integrate AI into classrooms as well as software applications to build custom, classroom-specific tools.

    Tech companies would benefit from this partnership by “co-creating” and improving their products based on feedback and insights from educators, said Gerry Petrella, Microsoft general manager, U.S. public policy, who hopes the initiative will align the needs of educators with the work of developers.

    In a sense, the teachers are training AI products just as much as they are being trained, according to Kathleen Day, a lecturer at Johns Hopkins Carey Business School. Day emphasized that through this partnership, AI companies would gain access to constant input from educators so they could continually strengthen their models and products.

    “Who’s training who?” Day said. “They’re basically saying, we’ll show you how this technology works, and you tell us how you would use it. When you tell us how you would use it, that is a wealth of information.”

    Many educators and policymakers are also concerned that introducing AI into the classroom could endanger student data and privacy. Racial bias in grading could also be reinforced by AI programs, according to research by The Learning Agency.

    Additionally, Trevor Griffey, a lecturer in labor studies at the University of California Los Angeles, told The New York Times that tech firms could use these deals to market AI tools to students and expand their customer base.

    Chris Lehane, OpenAI’s chief global affairs officer, likened this initiative to expand AI access and training for educators to New Deal efforts in the 1930s to expand equal access to electricity. By working with teachers and expanding AI training, Lehane hopes the initiative will “democratize” access to AI.

    “There’s no better place to do that work than in the classroom,” he said at the Tuesday press conference.

    Chalkbeat is a nonprofit news site covering educational change in public schools.

    For more news on AI training, visit eSN’s Digital Learning hub.


  • Common Sense Media releases AI toolkit for school districts


    Key points:

    Common Sense Media has released its first AI Toolkit for School Districts, which gives districts of all sizes a structured, action-oriented guide for implementing AI safely, responsibly, and effectively.

    Common Sense Media research shows that 7 in 10 teens have used AI. As kids and teens increasingly use the technology for schoolwork, teachers and school district leaders have made it clear that they need practical, easy-to-use tools that support thoughtful AI planning, decision-making, and implementation.

    Common Sense Media developed the AI Toolkit, which is available to educators free of charge, in direct response to district needs.

    “As more and more kids use AI for everything from math homework to essays, they’re often doing so without clear expectations, safeguards, or support from educators,” said Yvette Renteria, Chief Program Officer of Common Sense Media.

    “Our research shows that schools are struggling to keep up with the rise of AI–6 in 10 kids say their schools either lack clear AI rules or are unsure what those rules are. But schools shouldn’t have to navigate the AI paradigm shift on their own. Our AI Toolkit for School Districts will make sure every district has the guidance it needs to implement AI in a way that works best for its schools.”

    The toolkit emphasizes practical tools, including templates, implementation guides, and customizable resources to support districts at various stages of AI exploration and adoption. These resources are designed to be flexible so that each district can develop AI strategies that align with its unique mission, vision, and priorities.

    In addition, the toolkit stresses the importance of a community-driven approach, recognizing that AI exploration and decision-making require input from all of the stakeholders in a school community.

    By encouraging districts to give teachers, students, parents, and other community members a seat at the table, Common Sense Media’s new resources help ensure that schools’ AI plans meet the needs of families and educators alike.

    This press release originally appeared online.

    eSchool News Staff