Category: AI in Education

  • Students must intentionally develop durable skills to thrive in an AI-dominated world

    As AI increasingly automates technical tasks across industries, students' long-term career success will rely less on technical skills alone and more on durable, professional skills, often referred to as soft skills. These include empathy, resilience, collaboration, and ethical reasoning: skills that machines can't replicate.

    This critical need is outlined in Future-Proofing Students: Professional Skills in the Age of AI, a new report from Acuity Insights. Drawing on a broad body of academic and market research, the report provides an analysis of how institutions can better prepare students with the professional skills most critical in an AI-driven world.

    Key findings from the report:

    • 75 percent of long-term job success is attributed to professional skills, not technical expertise.
    • Over 25 percent of executives say they won’t hire recent graduates due to lack of durable skills.
    • COVID-19 disrupted professional skill development, leaving many students underprepared for collaboration, communication, and professional norms.
    • Eight essential durable skills must be intentionally developed for students to thrive in an AI-driven workplace.

    “Technical skills may open the door, but it’s human skills like empathy and resilience that endure over time and lead to a fruitful and rewarding career,” says Matt Holland, CEO at Acuity Insights. “As AI reshapes the workforce, it has become critical for higher education to take the lead in preparing students with these skills that will define their long-term success.”

    The eight critical durable skills include:

    • Empathy
    • Teamwork
    • Communication
    • Motivation
    • Resilience
    • Ethical reasoning
    • Problem solving
    • Self-awareness

    These competencies don’t expire with technology–they grow stronger over time, helping graduates adapt, lead, and thrive in an AI-driven world.

    The report also outlines practical strategies for institutions, including assessing non-academic skills at admissions using Situational Judgment Tests (SJTs). It also recommends embedding professional skills development throughout curricula and forming partnerships that bridge AI literacy with interpersonal and ethical reasoning.


  • Preparing for a new era of teaching and learning

    When I first started experimenting with AI in my classroom, I saw the same thing repeatedly from students. They treated it like Google. Ask a question, get an answer, move on. It didn’t take long to realize that if my students only engage with AI this way, they miss the bigger opportunity to use AI as a partner in thinking. AI isn’t a magic answer machine. It’s a tool for creativity and problem-solving. The challenge for us as educators is to rethink how we prepare students for the world they’re entering and to use AI with curiosity and fidelity.

    Moving from curiosity to fluency

    In my district, I wear two hats: history teacher and instructional coach. That combination gives me the space to test ideas in the classroom and support colleagues as they try new tools. What I've learned is that AI fluency requires far more than knowing how to log into a platform. Students need to learn how to question outputs, verify information, and use results as a springboard for deeper inquiry.

    I often remind them, “You never trust your source. You always verify and compare.” If students accept every AI response at face value, they’re not building the critical habits they’ll need in college or in the workforce.

    To make this concrete, I teach my students the RISEN framework: Role, Instructions, Steps, Examples, Narrowing. It helps them craft better prompts and think about the kind of response they want. Instead of typing “explain photosynthesis,” they might ask, “Act as a biologist explaining photosynthesis to a tenth grader. Use three steps with an analogy, then provide a short quiz at the end.” Suddenly, the interaction becomes purposeful, structured and reflective of real learning.
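    For readers who like to see the structure spelled out, here is a minimal sketch of a RISEN-style prompt builder in Python. It is illustrative only: the helper function and its example content are hypothetical, not part of any specific platform.

      # A minimal, hypothetical RISEN prompt builder: one labeled section
      # per component (Role, Instructions, Steps, Examples, Narrowing).
      def risen_prompt(role, instructions, steps, examples, narrowing):
          numbered = "\n".join(f"  {i + 1}. {s}" for i, s in enumerate(steps))
          return "\n".join([
              f"Role: Act as {role}.",
              f"Instructions: {instructions}",
              f"Steps:\n{numbered}",
              f"Examples: {examples}",
              f"Narrowing: {narrowing}",
          ])

      prompt = risen_prompt(
          role="a biologist explaining photosynthesis to a tenth grader",
          instructions="Explain the process clearly, then check understanding.",
          steps=["Describe how leaves capture light",
                 "Explain how water and CO2 become glucose",
                 "Give an everyday analogy"],
          examples="Compare the leaf to a solar panel charging a battery.",
          narrowing="Keep it under 300 words and end with a short quiz.",
      )
      print(prompt)

    Filling in the five labeled slots is the point of the exercise: it forces students to decide what they actually want before they ask.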

    AI as a catalyst for equity and personalization

    Growing up, I was lucky. My mom was college educated and sat with me to go over almost every paper I wrote. She gave me feedback that helped to sharpen my writing and build my confidence. Many of my students don’t have that luxury. For these learners, AI can be the academic coach they might not otherwise have.

    That doesn't mean AI replaces human connection. Nothing can. But it can provide feedback, ask guiding questions, and offer examples, giving students a sounding board and thought partner. It's one more way to move closer to providing personalized support for learners based on need.

    Of course, equity cuts both ways. If only some students have access to AI or if we use it without considering its bias, we risk widening the very gaps we hope to close. That’s why it’s our job as educators to model ethical and critical use, not just the mechanics.

    Shifting how we assess learning

    One of the biggest shifts I’ve made is rethinking how I assess students. If I only grade the final product, I’m essentially inviting them to use AI as a shortcut. Instead, I focus on the process: How did they engage with the tool? How did they verify and cross-reference results? How did they revise their work based on what they learned? What framework guided their inquiry? In this way, AI becomes part of their learning journey rather than just an endpoint.

    I’ve asked students to run the same question through multiple AI platforms and then compare the outputs. What were the differences? Which response feels most accurate or useful? What assumptions might be at play? These conversations push students to defend their thinking and use AI critically, not passively.
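    As a hedged illustration of that exercise, the sketch below lines up responses side by side. The ask() helper is a stand-in I invented so the example runs on its own; in practice, students query each platform through its own interface and paste the answers in by hand.

      # Illustrative only: ask() returns canned text so the sketch runs
      # without network access; real platforms are queried interactively.
      def ask(platform: str, question: str) -> str:
          canned = {
              "Platform A": "The seasons come from Earth's tilted axis, which "
                            "changes how directly sunlight hits each hemisphere.",
              "Platform B": "Earth's changing distance from the sun causes the seasons.",
              "Platform C": "Axial tilt shifts day length and sun angle over the year.",
          }
          return canned[platform]

      question = "What causes Earth's seasons?"
      for platform in ("Platform A", "Platform B", "Platform C"):
          print(f"--- {platform} ---")
          print(ask(platform, question))
      # Discussion: which answer is most accurate? (Platform B repeats a
      # common misconception.) What assumptions are at play in each one?

    Seeding one familiar misconception, as Platform B does here, gives students something concrete to catch and defend against.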

    Navigating privacy and policy

    Another responsibility we carry as educators is protecting our students. Data privacy is a serious concern. In my school, we use a “walled garden” version of AI so that student data doesn’t get used for training. Even with those safeguards in place, I remind colleagues never to enter identifiable student information into a tool.

    Policies will continue to evolve, but for day-to-day activities and planning, teachers need to model caution and responsibility. Students are following our lead.

    Professional growth for a changing profession

    The truth of the matter is that most of us have not been professionally trained to do this. My teacher preparation program certainly did not include modules on prompt engineering or data ethics. That means professional development in this space is a must.

    I’ve grown the most in my AI fluency by working alongside other educators who are experimenting, sharing stories, and comparing notes. AI is moving fast. No one has all the answers. But we can build confidence together by trying, reflecting, and adjusting through shared experience and lessons learned. That’s exactly what we’re doing in the Lead for Learners network. It’s a space where educators from across the country connect, learn and support one another in navigating change.

    For educators who feel hesitant, I’d say this: You don’t need to be an expert to start. Pick one tool, test it in one lesson, and talk openly with your students about what you’re learning. They’ll respect your honesty and join you in the process.

    Preparing students for what’s next

    AI is not going away. Whether we’re ready or not, it’s going to shape how our students live and work. That gives us a responsibility not just to keep pace with technology but to prepare young people for what’s ahead. The latest futures forecast reminds us that imagining possibilities is just as important as responding to immediate shifts.

    We need to understand both how AI is already reshaping education delivery and how new waves of change will keep arriving as tools grow more sophisticated and widespread.

    I want my students to leave my classroom with the ability to question, create, and collaborate using AI. I want them to see it not as a shortcut but as a tool for thinking more deeply and expressing themselves more fully. And I want them to watch me modeling those same habits: curiosity, caution, creativity, and ethical decision-making. Because if we don’t show them what responsible use looks like, who will?

    The future of education won’t be defined by whether we allow AI into our classrooms. It will be defined by how we teach with it, how we teach about it, and how we prepare our students to thrive in a world where it’s everywhere.


  • Preserving critical thinking amid AI adoption

    AI is now at the center of almost every conversation in education technology. It is reshaping how we create content, build assessments, and support learners. The opportunities are enormous. But one quiet risk keeps growing in the background: losing our habit of critical thinking.

    I see this risk not as a theory but as something I have felt myself.

    The moment I almost outsourced my judgment

    A few months ago, I was working on a complex proposal for a client. Pressed for time, I asked an AI tool to draft an analysis of their competitive landscape. The output looked polished and convincing. It was tempting to accept it and move on.

    Then I forced myself to pause. I began questioning the sources behind the statements and found a key market shift the model had missed entirely. If I had skipped that short pause, the proposal would have gone out with a blind spot that mattered to the client.

    That moment reminded me that AI is fast and useful, but the responsibility for real thinking is still mine. It also showed me how easily convenience can chip away at judgment.

    AI as a thinking partner

    The most powerful way to use AI is to treat it as a partner that widens the field of ideas while leaving the final call to us. AI can collect data in seconds, sketch multiple paths forward, and expose us to perspectives we might never consider on our own.

    In my own work at Magic EdTech, for example, our teams have used AI to quickly analyze thousands of pages of curriculum to flag accessibility issues. The model surfaces patterns and anomalies that would take a human team weeks to find. Yet the real insight comes when we bring educators and designers together to ask why those patterns matter and how they affect real classrooms. AI sets the table, but we still cook the meal.

    There is a subtle but critical difference between using AI to replace thinking and using it to stretch thinking. Replacement narrows our skills over time. Stretching builds new mental flexibility. The partner model forces us to ask better questions, weigh trade-offs, and make calls that only human judgment can resolve.

    Habits to keep your edge

    Protecting critical thinking is not about avoiding AI. It is about building habits that keep our minds active when AI is everywhere.

    Here are three I find valuable:

    1. Name the fragile assumption
    Each time you receive AI output, ask: What is one assumption here that could be wrong? Spend a few minutes digging into that. It forces you to reenter the problem space instead of just editing machine text.

    2. Run the reverse test
    Before you adopt an AI-generated idea, imagine the opposite. If the model suggests that adaptive learning is the key to engagement, ask: What if it is not? Exploring the counter-argument often reveals gaps and deeper insights.

    3. Slow the first draft
    It is tempting to let AI draft emails, reports, or code and just sign off. Instead, start with a rough human outline first. Even if it is just bullet points, you anchor the work in your own reasoning and use the model to enrich–not originate–your thinking.

    These small practices keep the human at the center of the process and turn AI into a gym for the mind rather than a crutch.

    Why this matters for education

    For those of us in education technology, the stakes are unusually high. The tools we build help shape how students learn and how teachers teach. If we let critical thinking atrophy inside our companies, we risk passing that weakness to the very people we serve.

    Students will increasingly use AI for research, writing, and even tutoring. If the adults designing their digital classrooms accept machine answers without question, we send the message that surface-level synthesis is enough. We would be teaching efficiency at the cost of depth.

    By contrast, if we model careful reasoning and thoughtful use of AI, we can help the next generation see these tools for what they are: accelerators of understanding, not replacements for it. AI can help us scale accessibility, personalize instruction, and analyze learning data in ways that were impossible before. But its highest value appears only when it meets human curiosity and judgment.

    Building a culture of shared judgment

    This is not just an individual challenge. Teams need to build rituals that honor slow thinking in a fast AI environment. One such practice is rotating the role of “critical friend” in meetings. One person's task is to challenge the group's AI-assisted conclusions and ask what could go wrong. This simple habit trains everyone to keep their reasoning sharp.

    Next time you lean on AI for a key piece of work, pause before you accept the answer. Write down two decisions in that task that only a human can make. It might be about context, ethics, or simple gut judgment. Then share those reflections with your team. Over time this will create a culture where AI supports wisdom rather than diluting it.

    The real promise of AI is not that it will think for us, but that it will free us to think at a higher level.

    The danger is that we may forget to climb.

    The future of education and the integrity of our own work depend on remaining climbers. Let the machines speed the climb, but never let them choose the summit.


  • Why busy educators need AI with guardrails

    In the growing conversation around AI in education, speed and efficiency often take center stage, but that focus can tempt busy educators to use what’s fast rather than what’s best. To truly serve teachers–and above all, students–AI must be built with intention and clear constraints that prioritize instructional quality, ensuring efficiency never comes at the expense of what learners need most.

    AI doesn't inherently understand fairness, instructional nuance, or educational standards. It mirrors its training and guidance, usually as a capable generalist rather than a specialist. Without deliberate design, AI can produce content that's misaligned or confusing. In education, fairness means an assessment measures only the intended skill and does so comparably for students from different backgrounds, languages, and abilities, without hidden barriers unrelated to what's being assessed. Effective AI systems in schools need embedded controls to avoid construct-irrelevant content: elements that distract from what's actually being measured.

    For example, a math question shouldn't hinge on dense prose, niche sports knowledge, or culturally specific idioms unless those are part of the goal; visuals shouldn't rely on low-contrast colors that are hard to see; audio shouldn't assume a single accent; and timing shouldn't penalize students if speed isn't the construct.
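    As a rough illustration of what one embedded control might look like, here is a toy lint in Python that flags two of the issues above. The idiom list and word-count threshold are invented for the example; a real system would rest on psychometric review, not a keyword list.

      # A toy check for construct-irrelevant language in math items.
      # Thresholds and the idiom list are illustrative, not authoritative.
      IDIOMS = ["ballpark figure", "hit it out of the park", "piece of cake"]
      MAX_WORDS = 40  # flag dense prose when reading speed isn't the construct

      def flag_item(item_text: str) -> list[str]:
          flags = []
          if len(item_text.split()) > MAX_WORDS:
              flags.append("dense prose may mask the math skill being measured")
          lowered = item_text.lower()
          for idiom in IDIOMS:
              if idiom in lowered:
                  flags.append(f"culturally specific idiom: '{idiom}'")
          return flags

      item = ("Maria made a ballpark figure of 1,200 hot dogs sold. Actual "
              "sales were 1,500. What was the percent error of her estimate?")
      print(flag_item(item))  # -> ["culturally specific idiom: 'ballpark figure'"]

    A flag like this is a prompt for human review, not a verdict: an educator still decides whether the idiom belongs in the item.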

    To improve fairness and accuracy in assessments:

    • Avoid construct-irrelevant content: Ensure test questions focus only on the skills and knowledge being assessed.
    • Use AI tools with built-in fairness controls: Generic AI models may not inherently understand fairness; choose tools designed specifically for educational contexts.
    • Train AI on expert-authored content: AI is only as fair and accurate as the data and expertise it’s trained on. Use models built with input from experienced educators and psychometricians.

    These subtleties matter. General-purpose AI tools, left untuned, often miss them.

    The risk of relying on convenience

    Educators face immense time pressures. It’s tempting to use AI to quickly generate assessments or learning materials. But speed can obscure deeper issues. A question might look fine on the surface but fail to meet cognitive complexity standards or align with curriculum goals. These aren’t always easy problems to spot, but they can impact student learning.

    To choose the right AI tools:

    • Select domain-specific AI over general models: Tools tailored for education are more likely to produce pedagogically sound and standards-aligned content that empowers students to succeed. In a 2024 University of Pennsylvania study, students using a customized AI tutor scored 127 percent higher on practice problems than those without.
    • Be cautious with out-of-the-box AI: Without expertise, educators may struggle to critique or validate AI-generated content, risking poor-quality assessments.
    • Understand the limitations of general AI: While capable of generating content, general models may lack depth in educational theory and assessment design.

    General AI tools can get you 60 percent of the way there. But that last 40 percent is the part that ensures quality, fairness, and educational value. This requires expertise to get right. That’s where structured, guided AI becomes essential.

    Building AI that thinks like an educator

    Developing AI for education requires close collaboration with psychometricians and subject matter experts to shape how the system behaves. This helps ensure it produces content that’s not just technically correct, but pedagogically sound.

    To ensure quality in AI-generated content:

    • Involve experts in the development process: Psychometricians and educators should review AI outputs to ensure alignment with learning goals and standards.
    • Use manual review cycles: Unlike benchmark-driven models, educational AI requires human evaluation to validate quality and relevance.
    • Focus on cognitive complexity: Design assessments with varied difficulty levels and ensure they measure intended constructs.

    This process is iterative and manual. It’s grounded in real-world educational standards, not just benchmark scores.

    Personalization needs structure

    AI’s ability to personalize learning is promising. But without structure, personalization can lead students off track. AI might guide learners toward content that’s irrelevant or misaligned with their goals. That’s why personalization must be paired with oversight and intentional design.

    To harness personalization responsibly:

    • Let experts set goals and guardrails: Define standards, scope and sequence, and success criteria; AI adapts within those boundaries.
    • Use AI for diagnostics and drafting, not decisions: Have it flag gaps, suggest resources, and generate practice, while educators curate and approve.
    • Preserve curricular coherence: Keep prerequisites, spacing, and transfer in view so learners don't drift into content that's engaging but misaligned (a minimal sketch of this idea follows the list).
    • Support educator literacy in AI: Professional development is key to helping teachers use AI effectively and responsibly.
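
    Here is a minimal sketch of "AI adapts within those boundaries": the tool may only recommend units whose expert-defined prerequisites a learner has met. The data structures are hypothetical and greatly simplified.

      # Hypothetical scope-and-sequence: unit -> prerequisites set by experts.
      SEQUENCE = {
          "fractions-1": [],
          "fractions-2": ["fractions-1"],
          "ratios-1": ["fractions-2"],
      }

      def eligible_units(mastered: set[str]) -> list[str]:
          """Units a learner may start now: prerequisites met, not yet mastered."""
          return [unit for unit, prereqs in SEQUENCE.items()
                  if unit not in mastered and all(p in mastered for p in prereqs)]

      # An adaptive engine ranks within this list; it never recommends outside it.
      print(eligible_units({"fractions-1"}))  # -> ['fractions-2']

    The design choice is the point: experts own the map, and the adaptive layer only chooses among routes the map allows.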

    It’s not enough to adapt–the adaptation must be meaningful and educationally coherent.

    AI can accelerate content creation and internal workflows. But speed alone isn’t a virtue. Without scrutiny, fast outputs can compromise quality.

    To maintain efficiency and innovation:

    • Use AI to streamline internal processes: Beyond student-facing tools, AI can help educators and institutions build resources faster and more efficiently.
    • Maintain high standards despite automation: Even as AI accelerates content creation, human oversight is essential to uphold educational quality.

    Responsible use of AI requires processes that ensure every AI-generated item is part of a system designed to uphold educational integrity.

    An effective approach to AI in education is driven by concern–not fear, but responsibility. Educators are doing their best under challenging conditions, and the goal should be building AI tools that support their work.

    When frameworks and safeguards are built-in, what reaches students is more likely to be accurate, fair, and aligned with learning goals.

    In education, trust is foundational. And trust in AI starts with thoughtful design, expert oversight, and a deep respect for the work educators do every day.


  • 5 essential AI tech tools for back-to-school success

    By now, the 2025-2026 school year is well underway. The glow of new beginnings has faded, and the process of learning has begun in earnest. No doubt there is plenty to do, but I recommend that educators take a moment and check in on their teaching toolkit.

    The tools of our trade are always evolving, and if our students are going to get the most out of their time in class, it’s important for us to familiarize ourselves with the newest resources for sparking curiosity, creativity, and critical thinking. This includes the latest AI programs that are making their way into the classroom.  

    Here are five AI tech tools that I believe are essential for back-to-school success: 

    1. ChatGPT: ChatGPT has quickly become the all-in-one tool for generating anything and everything. Many educators are (rightly) concerned about ChatGPT's potential for student cheating, but this AI can also serve as a built-in assistant for creating welcome letters, student-friendly syllabi, and other common documents for the classroom. If it's used responsibly, ChatGPT can assist teachers by cutting out the busy work involved in planning and implementing lessons.
    2. ClassroomScreen: ClassroomScreen functions as a modern-day chalkboard. This useful tool lets teachers project a variety of information on screen while simultaneously performing classroom tasks. Teachers can take straw polls, share inspiring quotes, detail the morning schedule, and even monitor volume without opening a single tab. It’s a simple, multipurpose tool for classroom coordination.     
    3. SchoolAI: SchoolAI is a resource generator that provides safe, teacher-guided interactions between students and AI. With AI becoming increasingly common, it’s vital that students are taught how to use it safely, effectively, and responsibly. SchoolAI can help with this task by cultivating student curiosity and critical thinking without doing the work for them. Best of all, teachers remain at the helm the entire time, ensuring an additional layer of instruction and protection.       
    4. Snorkl: Snorkl provides students with instant feedback on their responses. This AI program allows students to record their thinking process on a digital whiteboard using a variety of customizable tools. With Snorkl, a teacher could send students a question with an attached image, then have them respond using audio, visual tools such as highlighting, and much more. It's the perfect way to inject a little creativity into a lesson while making it memorable, meaningful, and fun!
    5. Suno: Suno is unique in that it specializes in creative song generation. Looking for an engaging way to teach fractions? Upload your lesson to Suno and it can generate a catchy, educational song in the style of your favorite artist. Suno even allows users to customize lyrics so that the songs stay relevant to the lesson at hand. If you need a resource that can get students excited about learning, then Suno will be the perfect addition to your teaching toolkit!

    The world of education is always changing, and today’s technology may be outdated within a matter of years. Still, the mission of educators remains the same: to equip students with the skills, determination, and growth mindset they need to thrive in an uncertain future. By integrating effective tools into the classroom, we can guide them toward a brighter tomorrow–one where inquiry and critical thinking continue to flourish, both within the classroom and beyond.


  • Are we outsourcing our thinking to AI?

    I’ll admit that I use AI. I’ve asked it to help me figure out challenging Excel formulas that otherwise would have taken me 45 minutes and a few tutorials to troubleshoot. I’ve used it to help me analyze or organize massive amounts of information. I’ve even asked it to help me devise a running training program aligning with my goals and fitting within my schedule. AI is a fantastic tool–and that’s the point. It’s a tool, not a replacement for thinking.

    As AI tools become more capable, more intuitive, and more integrated into our daily lives, I’ve found myself wondering: Are we growing too dependent on AI to do our thinking for us?

    This question isn’t just philosophical. It has real consequences, especially for students and young learners. A recent study published in the journal Societies reports that people who used AI tools consistently showed a decline in critical thinking performance. In fact, “whether someone used AI tools was a bigger predictor of a person’s thinking skills than any other factor, including educational attainment.” That’s a staggering finding because it suggests that using AI might not just be a shortcut. It could be a cognitive detour.

    The atrophy of the mind

    The term “digital dementia” has been used to describe the deterioration of cognitive abilities as a result of over-reliance on digital devices. It’s a phrase originally associated with excessive screen time and memory decline, but it’s found new relevance in the era of generative AI. When we depend on a machine to generate our thoughts, answer our questions, or write our essays, what happens to the neural pathways that govern our own critical thinking? And will the upcoming era of agentic AI expedite this decline?

    Cognitive function, like physical fitness, follows the rule of “use it or lose it.” Just as muscles weaken without regular use, the brain’s ability to evaluate, synthesize, and critique information can atrophy when not exercised. This is especially concerning in the context of education, where young learners are still building those critical neural pathways.

    In short: Students need to learn how to think before they delegate that thinking to a machine.

    Can you still think critically with AI?

    Yes, but only if you’re intentional about it.

    AI doesn't relieve you of the responsibility to think; in many cases, it demands even more critical thinking. AI hallucinates, fabricates claims, and can be misleading. If you blindly accept AI's output, you're not saving time; you're surrendering clarity.

    Using AI effectively requires discernment. You need to know what you’re asking, evaluate what you’re given, and verify the accuracy of the result. In other words, you need to think before, during, and after using AI.

    The “source, please” problem

    One of the simplest ways to teach critical thinking is also the most annoying–just ask my teenage daughter. When she presents a fact or claim that she saw online, I respond with some version of: “What’s your source?” It drives her crazy, but it forces her to dig deeper, check assumptions, and distinguish between fact and fiction. It’s an essential habit of mind.

    But here's the thing: AI doesn't always give you the source. And when it does, sometimes it's wrong, or the source isn't reputable. Sometimes it takes a deeper dive (and a few more prompts) to find answers, especially on complicated topics. AI often provides quick, confident answers that fall apart under scrutiny.

    So why do we keep relying on it? Why are AI responses allowed to settle arguments, or serve as “truth” for students when the answers may be anything but?

    The lure of speed and simplicity

    It’s easier. It’s faster. And let’s face it: It feels like thinking. But there’s a difference between getting an answer and understanding it. AI gives us answers. It doesn’t teach us how to ask better questions or how to judge when an answer is incomplete or misleading.

    This process of cognitive offloading (where we shift mental effort to a device) can be incredibly efficient. But if we offload too much, too early, we risk weakening the mental muscles needed for sustained critical thinking.

    Implications for educators

    So, what does this mean for the classroom?

    First, educators must be discerning about how they use AI tools. These technologies aren’t going away, and banning them outright is neither realistic nor wise. But they must be introduced with guardrails. Students need explicit instruction on how to think alongside AI, not instead of it.

    Second, teachers should emphasize the importance of original thought, iterative questioning, and evidence-based reasoning. Instead of asking students to simply generate answers, ask them to critique AI-generated ones. Challenge them to fact-check, source, revise, and reflect. In doing so, we keep their cognitive skills active and growing.

    And finally, for young learners, we may need to draw a harder line. Students who haven’t yet formed the foundational skills of analysis, synthesis, and evaluation shouldn’t be skipping those steps. Just like you wouldn’t hand a calculator to a child who hasn’t yet learned to add, we shouldn’t hand over generative AI tools to students who haven’t learned how to write, question, or reason.

    A tool, not a crutch

    AI is here to stay. It’s powerful, transformative, and, when used well, can enhance our work and learning. But we must remember that it’s a tool, not a replacement for human thought. The moment we let it think for us is the moment we start to lose the capacity to think for ourselves.

    If we want the next generation to be capable, curious, and critically-minded, we must protect and nurture those skills. And that means using AI thoughtfully, sparingly, and always with a healthy dose of skepticism. AI is certainly proving it has staying power, so it’s in all our best interests to learn to adapt. However, let’s adapt with intentionality, and without sacrificing our critical thinking skills or succumbing to any form of digital dementia.


  • 5 ways to infuse AI into your classroom this school year

    As artificial intelligence (AI) continues to reshape the educational landscape, teachers have a unique opportunity to model how to use it responsibly, creatively, and strategically.

    Rather than viewing AI as a threat or distraction, we can reframe it as a tool for empowerment and efficiency–one that allows us to meet student needs in more personalized, inclusive, and imaginative ways. Whether you’re an AI beginner or already experimenting with generative tools, here are five ways to infuse AI into your classroom this school year:

    1. Co-plan lessons with an AI assistant

    AI platforms like ChatGPT, Eduaide.ai, and MagicSchool.ai can generate lesson frameworks aligned to standards, differentiate tasks for diverse learners, and offer fresh ideas for student engagement. Teachers can even co-create activities with students by prompting AI together in real time.

    Try this: Ask your AI assistant to create a standards-aligned lesson that includes a formative check and a scaffold for ELLs–then adjust to your style and class needs.

    2. Personalize feedback without the time drain

    AI can streamline your feedback process by suggesting draft comments on student work based on rubrics you provide. This is particularly helpful for writing-intensive courses or project-based learning.

    Ethical reminder: Always review and personalize AI-generated feedback to maintain professional judgment and student trust.

    3. Support multilingual learners in real time

    AI tools like Google Translate, Microsoft Immersive Reader, and Read&Write can help bridge language gaps by offering simplified texts, translated materials, and visual vocabulary support.

    Even better: Teach students to use these tools independently to foster agency and access.

    4. Teach AI literacy as a 21st-century skill

    Students are already using AI–let’s teach them to use it well. Dedicate time to discuss how AI works, how to prompt effectively, and how to critically evaluate its outputs for bias, credibility, and accuracy.

    Try this mini-lesson: “3 Prompts, 3 Results.” Have students input the same research question into three AI tools and compare the results for depth, accuracy, and tone.

    5. Automate the tedious–refocus on relationships

    From generating rubrics and newsletters to drafting permission slips and analyzing formative assessment data, AI can reduce the clerical load. This frees up your most valuable resource: time.

    Pro tip: Use AI to pre-write behavior plans, follow-up emails, or even lesson exit ticket summaries.

    The future of AI

    AI won’t replace teachers–but teachers who learn how to use AI thoughtfully may find themselves with more energy, better tools, and deeper student engagement than ever before. As the school year begins, let’s lead by example and embrace AI not as a shortcut, but as a catalyst for growth.


  • Human connection still drives school attendance

    At ISTE this summer, I lost count of how many times I heard “AI” as the answer to every educational challenge imaginable. Student engagement? AI-powered personalization! Teacher burnout? AI lesson planning! Parent communication? AI-generated newsletters! Chronic absenteeism? AI predictive models! But after moderating a panel on improving the high school experience, which focused squarely on human-centered approaches, one district administrator approached us with gratitude: “Thank you for NOT saying AI is the solution.”

    That moment crystallized something important that’s getting lost in our rush toward technological fixes: While we’re automating attendance tracking and building predictive models, we’re missing the fundamental truth that showing up to school is a human decision driven by authentic relationships.

    The real problem: Students going through the motions

    The scope of student disengagement is staggering. Challenge Success, affiliated with Stanford’s Graduate School of Education, analyzed data from over 270,000 high school students across 13 years and found that only 13 percent are fully engaged in their learning. Meanwhile, 45 percent are what researchers call “doing school,” going through the motions behaviorally but finding little joy or meaning in their education.

    This isn’t a post-pandemic problem–it’s been consistent for over a decade. And it directly connects to attendance issues. The California Safe and Supportive Schools initiative has identified school connectedness as fundamental to attendance. When high schoolers have even one strong connection with a teacher or staff member who understands their life beyond academics, attendance improves dramatically.

    The districts that are addressing this are using data to enable more meaningful adult connections, not just adding more tech. One California district saw 32 percent of at-risk students improve attendance after implementing targeted, relationship-based outreach. The key isn’t automated messages, but using data to help educators identify disengaged students early and reach out with genuine support.

    This isn't to discount the impact of technology. AI tools can make project-based learning incredibly meaningful and exciting, exactly the kind of authentic engagement that might tempt chronically absent high schoolers to return. But AI works best when it amplifies personal bonds rather than trying to replace them.

    Mapping student connections

    Instead of starting with AI, start with relationship mapping. Harvard’s Making Caring Common project emphasizes that “there may be nothing more important in a child’s life than a positive and trusting relationship with a caring adult.” Rather than leave these connections to chance, relationship mapping helps districts systematically identify which students lack that crucial adult bond at school.

    The process is straightforward: Staff identify students who don't have positive relationships with any school adults, then volunteers commit to building stronger connections with those students throughout the year. This combines the best of both worlds: Technology provides insight into who needs support, and authentic relationships provide the motivation to show up.

    True school-family partnerships to combat chronic absenteeism need structures that prioritize student consent and agency, provide scaffolding for underrepresented students, and feature a wide range of experiences. It requires seeing students as whole people with complex lives, not just data points in an attendance algorithm.

    The choice ahead

    As we head into another school year, we face a choice. We can continue chasing the shiny startups, building ever more sophisticated systems to track and predict student disengagement. Or we can remember that attendance is ultimately about whether a young person feels connected to something meaningful at school.

    The most effective districts aren’t choosing between high-tech and high-touch–they’re using technology to enable more meaningful personal connections. They’re using AI to identify students who need support, then deploying caring adults to provide it. They’re automating the logistics so teachers can focus on relationships.

    That ISTE administrator was right to be grateful for a non-AI solution. Because while artificial intelligence can optimize many things, it can’t replace the fundamental human need to belong, to feel seen, and to believe that showing up matters.

    The solution to chronic absenteeism is in our relationships, not our servers. It’s time we started measuring and investing in both.


  • In training educators to use AI, we must not outsource the foundational work of teaching

    This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.

    I was conferencing with a group of students when I heard the excitement building across my third grade classroom. A boy at the back table had been working on his catapult project for over an hour through our science lesson, into recess, and now during personalized learning time. I watched him adjust the wooden arm for what felt like the 20th time, measure another launch distance, and scribble numbers on his increasingly messy data sheet.

    “The longer arm launches farther!” he announced to no one in particular, his voice carrying the matter-of-fact tone of someone who had just uncovered a truth about the universe. I felt that familiar teacher thrill, not because I had successfully delivered a physics lesson, but because I hadn’t taught him anything at all.

    Last year, all of my students chose a topic they wanted to explore and pursued a personal learning project about it. This particular student had discovered the relationship between lever arm length and projectile distance entirely through his own experiments, which involved mathematics, physics, history, and data visualization.

    Other students drifted over to try his longer-armed design, and soon, a cluster of 8-year-olds were debating trajectory angles and comparing medieval siege engines to ancient Chinese catapults.

    They were doing exactly what I dream of as an educator: learning because they wanted to know, not because they had to perform.

    Then, just recently, I read about the American Federation of Teachers’ new $23 million partnership with Microsoft, OpenAI, and Anthropic to train educators how to use AI “wisely, safely and ethically.” The training sessions would teach them how to generate lesson plans and “microwave” routine communications with artificial intelligence.

    My heart sank.

    As an elementary teacher who also conducts independent research on the intersection of AI and education, and writes the ‘Algorithmic Mind’ column about it for Psychology Today, I live in the uncomfortable space between what technology promises and what children actually need. Yes, I use AI, but only for administrative work like drafting parent newsletters, organizing student data, and filling out required curriculum planning documents. It saves me hours on repetitive tasks that have nothing to do with teaching.

    I’m all for showing educators how to use AI to cut down on rote work. But I fear the AFT’s $23 million initiative isn’t about administrative efficiency. According to their press release, they’re training teachers to use AI for “instructional planning” and as a “thought partner” for teaching decisions. One featured teacher describes using AI tools to help her communicate “in the right voice” when she’s burned out. Another says AI can assist with “late-night lesson planning.”

    That sounds more like outsourcing the foundational work of teaching.

    Watching my student discover physics principles through intrinsic curiosity reminded me why this matters so much. When we start relying on AI to plan our lessons and find our teaching voice, we’re replacing human judgment with algorithmic thinking at the very moment students need us most. We’re prioritizing the product of teaching over the process of learning.

    Most teachers I talk to share similar concerns about AI. They focus on cheating and plagiarism. They worry about students outsourcing their thinking and how to assess learning when they can’t tell if students actually understand anything. The uncomfortable truth is that students have always found ways to avoid genuine thinking when we value products over process. I used SparkNotes. Others used Google. Now, students use ChatGPT.

    The problem is not technology; it’s that we continue prioritizing finished products over messy learning processes. And as long as education rewards predetermined answers over curiosity, students will find shortcuts.

    That's why teachers need professional development that moves in the opposite direction. They need PD that helps them facilitate genuine inquiry and human connection; foster classrooms where confusion is valued as a precursor to understanding; and develop students' intrinsic motivation.

    When I think about that boy measuring launch distances with handmade tools, I realize he was demonstrating the distinctly human capacity to ask questions that only he wanted to address. He didn’t need me to structure his investigation or discovery. He needed the freedom to explore, materials to experiment with, and time to pursue his curiosity wherever it led.

    The learning happened not because I efficiently delivered content, but because I stepped back and trusted his natural drive to understand.

    Children don’t need teachers who can generate lesson plans faster or give AI-generated feedback, but educators who can inspire questions, model intellectual courage, and create communities where wonder thrives and real-world problems are solved.

    The future belongs to those who can combine computational tools with human wisdom, ethics, and creativity. But this requires us to maintain the cognitive independence to guide AI systems rather than becoming dependent on them.

    Every time I watch my students make unexpected connections, I’m reminded that the most important learning happens in the spaces between subjects, in the questions that emerge from genuine curiosity, in the collaborative thinking that builds knowledge through relationships. We can’t microwave that. And we shouldn’t try.

    Chalkbeat is a nonprofit news site covering educational change in public schools.


  • KU researchers publish guidelines to help responsibly implement AI in education

    This story originally appeared on KU News and is republished with permission.

    Researchers at the University of Kansas have produced a set of guidelines to help educators from preschool through higher education responsibly implement artificial intelligence in a way that empowers teachers, parents, students and communities alike.

    The Center for Innovation, Design & Digital Learning at KU has published "Framework for Responsible AI Integration in PreK-20 Education: Empowering All Learners and Educators with AI-Ready Solutions." The document, developed under a cooperative agreement with the U.S. Department of Education, is intended to provide guidance on how schools can incorporate AI into their daily operations and curriculum.

    Earlier this year, President Donald Trump issued an executive order instructing schools to incorporate AI into their operations. The framework is intended to help all schools and educational facilities do so in a manner that fits their unique communities and missions.

    “We see this framework as a foundation,” said James Basham, director of CIDDL and professor of special education at KU. “As schools consider forming an AI task force, for example, they’ll likely have questions on how to do that, or how to conduct an audit and risk analysis. The framework can help guide them through that, and we’ll continue to build on this.”

    The framework features four primary recommendations.

    • Establish a stable, human-centered foundation.
    • Implement future-focused strategic planning for AI integration.
    • Ensure AI educational opportunities for every student.
    • Conduct ongoing evaluation, professional learning and community development.

    First, the framework urges schools to keep humans at the forefront of AI plans, prioritizing educator judgment, student relationships, and family input on AI-enabled processes, and not relying on automation for decisions that affect people. Transparency is also key: Schools should communicate how AI tools work and how decisions are made, and ensure compliance with student protection laws such as the Individuals with Disabilities Education Act and the Family Educational Rights and Privacy Act, the report authors write.

    The document also outlines recommendations for how educational facilities can implement the technology. Establishing an AI integration task force, including educators, administrators, families, legal advisers, and specialists in instructional technology and special education, is key among the recommendations. The document also shares tips on how to conduct an audit and risk analysis before adoption, how tools can affect student placement and identification, and what algorithmic error patterns to watch for. Because the technologies are trained on human data, they run the risk of repeating the mistakes and biases humans have made, Basham said.

    That idea is also reflected in the framework's third recommendation. The document encourages educators to commit to learner-centered AI implementation that considers all students, from those in gifted programs to students with cognitive disabilities. AI tools should be prohibited from making final decisions on IEP eligibility, disciplinary actions, and student progress, and mechanisms should be in place for students, teachers, and parents to give feedback on their AI educational experiences, the authors wrote.

    Finally, the framework urges ongoing evaluation, professional learning, and community development. As the technology evolves, schools should regularly re-evaluate it for unintended consequences and gather feedback from those who use it. Training, both at implementation and in ongoing installments, will be necessary to address overuse or misuse, clarify who is responsible for monitoring AI use, and ensure both the school and community stay informed about the technology.

    The framework was written by Basham; Trey Vasquez, co-principal investigator at CIDDL, operating officer at KU’s Achievement & Assessment Institute and professor of special education at KU; and Angelica Fulchini Scruggs, research associate and operations director for CIDDL.

    Educators interested in learning more about the framework or use of AI in education are invited to connect with CIDDL. The center’s site includes data on emergent themes in AI guidance at the state level and information on how it supports educational technology in K-12 and higher education. As artificial intelligence finds new uses and educators are expected to implement the technology in schools, the center’s researchers said they plan to continue helping educators implement it in ways that benefit schools, students of all abilities and communities.

    “The priority at CIDDL is to share transparent resources for educators on topics that are trending and in a way that is easy to digest,” Fulchini Scruggs said. “We want people to join the community and help them know where to start. We also know this will evolve and change, and we want to help educators stay up to date with those changes to use AI responsibly in their schools.”
