Category: Featured on eSchool News

  • 6 steps to transforming parent engagement, one message at a time

    When you open the doors to a brand-new school, you’re not just filling classrooms–you’re building a community from the ground up. In August 2023, I opened our Pre-K through 4th grade school in Charlotte, North Carolina, to alleviate overcrowding at several East Charlotte campuses. As the founding principal, I knew that fostering trust and engagement with families was as essential as hiring great teachers or setting academic goals.

    Many of our students were transitioning from nearby schools, and their families were navigating uncertainty and change. My top priority was to create a strong home-school connection from the very beginning–one rooted in transparency, inclusivity, and consistent communication, where every parent feels like a valued partner in our new school’s success. Since then, we’ve added 5th grade and continue to grow our enrollment as we shape the identity of our school community.

    Up until two years ago, our district was primarily using a legacy platform for our school-to-home communication. It was incredibly limiting, and I didn’t like using it. The district then switched to a new solution, which helped us easily reach out to families (whose children were enrolling at the new elementary school) with real-time alerts and two-way messaging.

    The difference between the two systems was immediately obvious, and the switch proved to be a natural transition for me. The new platform allowed us to take a direct, systematic, and friendlier approach to our school-home communications as we rolled it out.

    Building strong home-school bonds

    Here are the steps we took to ensure a smooth adoption process, and some of the primary ways we use the platform:

    1. Get everyone on board from the start. We conducted comprehensive outreach to families through flyers, posters, and dedicated communication at open-house events. At the same time, our teachers were easily rostered–a process simplified by a seamless integration with our student information system–and received the necessary training on the platform.
    2. Introduce the new technology as a “familiar tool.” We framed our ParentSquare tool as a “closed social media network” for school-home communication. This eased user adoption and demystified the technology by connecting it to existing social habits. Our staff emphasized that if users could communicate socially online, they could also easily use the platform for school-related interactions.
    3. Promote equity with automatic translation. With a student population that’s about 50 percent Hispanic and roughly 22 different languages represented across the board, we were very interested in our new platform’s automatic translation capabilities (which currently span more than 190 languages). Automating this process has vastly reduced the time and headaches involved in creating and sharing newsletters and other materials with parents.
    4. Streamline tasks and reduce waste. I encourage staff to create their newsletters in the communications platform rather than reverting to PDFs, paper, or other formats for information-sharing. That way, the platform can manage the automatic translation and promote effective engagement with families. This is an equity issue that we have to continue working on, both in our school and our district as a whole. It’s about making sure that all parents have access to the same information regardless of their native language.
    5. Centralize proof of delivery. We really like having the communication delivery statistics, which staff can use to confirm message receipt–a crucial feature when parents claim they didn’t receive information. The platform shows when a message was received, providing clear confirmation that traditional paper handouts can’t match. Having one place where all of those communications can be sent, seen, and delivered is extremely helpful.
    6. Manage events and boost engagement. The platform keeps us organized, and we especially like the calendar and post functions (and use both a lot). Being able to sort specific groups is great. We use that feature to plan events like staggered kindergarten entry and separate open houses; it helps us target communications precisely. For a recent fifth-grade promotion ceremony, for example, we managed RSVPs and volunteer sign-ups directly through the communications platform, rather than using an external tool like SignUpGenius.

    Modernizing school-family outreach

    We always want to make it easy for families to receive, consume, and respond to our messages, and our new communications platform helps us achieve that goal. Parents appreciate receiving notifications via email, app, voice, or text–a method we use a lot for sending out reminders. 

    This direct communication is particularly impactful given our diverse student population, with families speaking many different languages. Teachers no longer need third-party translation sites or manual cut-and-paste methods because the platform handles automatic translation seamlessly. It’s helped us foster deeper family engagement and bridge communication gaps we otherwise couldn’t–it’s really amazing to see.

  • A practical guide for sourcing edtech

    Key points:

    • Virtual reality field trips now enable students to explore the Great Wall of China, the International Space Station, and ancient Rome without leaving the classroom.
    • Gamified online learning platforms can turn lessons into interactive challenges that boost engagement and motivation.
    • Generative AI tutors are providing real-time feedback on writing and math assignments, helping students sharpen their skills with personalized support in minutes.

    Education technology is accelerating at a rapid pace–and teachers are eager to bring these digital tools to the classroom. But with pandemic relief funds running out, districts are having to make tougher decisions around what edtech they can afford, which vendors will offer the greatest value, and, crucially, which tools come with robust cybersecurity protections.

    Although educators are excited to innovate, school leaders must weigh every new app or online platform against cybersecurity risks and the responsibility of protecting student data. Unfortunately, those risks remain very real: 6 in 10 K-12 schools were targeted by ransomware in 2024.

    Cybersecurity is harder for some districts than others

    The reality is that school districts widely vary when it comes to their internal resources, cybersecurity expertise, and digital maturity.

    A massive urban system may have a dedicated legal department, CISO, and rigid procurement processes. In a small rural district, the IT lead might also coach soccer or direct the school play.

    These discrepancies leave wide gaps that security threats can exploit. Districts are often improvising vetting processes that vary wildly in rigor, and even the best-prepared system struggles to know what “good enough” looks like as new technology tools proliferate and threats evolve just as fast.

    Whether it’s apps for math enrichment, platforms for grading, or new generative AI tools that promise differentiated learning at scale, educators are using more technology than ever. And while these digital tools are bringing immense benefits to the classroom, they also bring more threat exposure. Every new tool is another addition to the attack surface, and most school districts are struggling to keep up.

    Districts are now facing these critical challenges with even fewer resources. With the U.S. Department of Education closing its Office of EdTech, schools have lost a vital guidepost for evaluating technology tools safely. That means less clarity and support, even as the influx of new tech tools is at an all-time high.

    But innovation and protection don’t have to be in conflict. Schools can move forward with digital tools while still making smart, secure choices. Their decision-making can be supported by some simple best practices to help guide the way.

    5 green flags for evaluating technology tools

    With so many tools entering classrooms, knowing how to assess their safety and reliability is essential. But what does safe and trustworthy edtech actually look like?

    You don’t need legal credentials or a cybersecurity certification to answer that question. You simply need to know what to look for–and what questions to ask. Here are five green flags that can guide your decisions and boost confidence in the tools you bring into your classrooms.

    1. Clear and transparent privacy policies

    A strong privacy policy should be more than a formality; it should serve as a clear window into how a tool handles data. The best ones lay out exactly what information is collected, why it’s needed, how it’s used, and who it’s shared with, in plain, straightforward language.

    You shouldn’t need legal training to make sense of it. Look for policies that avoid vague, catch-all phrases and instead offer specific details, like a list of subprocessors, third-party services involved, or direct contact information for the vendor’s privacy officer. If you can’t quickly understand how student data is being handled, or if the vendor seems evasive when you ask, that’s cause for concern.

    2. Separation between student and adult data

    Student data is highly personal and extremely sensitive, and it must be treated with extra care. Strong vendors explicitly separate student data from educator, administrator, and parent data in their systems, policies, and user experiences.

    Ask how student data is accessed internally and what safeguards are in place. Does the vendor have different privacy policies for students versus adults? If they’ve engineered that distinction into their platform, it’s a sign they’ve thought deeply about your responsibilities under FERPA and COPPA.

    3. Third-party audits and certifications

    Trust, but verify. Look for tools that have been independently evaluated through certifications like the Common Sense Privacy Seal, iKeepSafe, or the 1EdTech Trusted App program. These external audits confirm that privacy claims and company practices have been tested against meaningful standards.

    Alignment with broader security frameworks like the NIST Cybersecurity Framework (CSF), ISO 27001, or SOC 2 can add another layer of assurance, especially in states where district policies lean heavily on these benchmarks. The most trustworthy vendors pair certification with radical transparency: They’ll show you exactly what they collect, how they store it, and how they protect it. That openness–and a willingness to be held accountable–is the real marker of a privacy-first partner.

    4. Long-term commitment to security and privacy

    Cybersecurity shouldn’t be a one-and-done checklist. It’s a continual practice. Ask vendors how they approach ongoing risks: Do they conduct regular penetration testing? Is a formal incident response plan in place? How are teams trained on phishing threats and secure coding?

    If they follow a framework like the NIST CSF, that’s great. But also dig into how they apply it: What’s their track record for patching vulnerabilities or communicating breaches? A real commitment shows up in action, not just alignment.

    5. Data minimization and purpose limitations

    Trustworthy technology tools collect only what’s essential–and vendors can explain why they need it. If you ask, “Why do you collect this data point?” they should have a direct answer that ties back to functionality, not future marketing.

    Look for platforms that commit to never repurposing student data for behavioral ad targeting. Also, ask about deletion protocols: Can data be purged quickly and completely if requested? If not, it’s time to ask why.

    Laying the groundwork for a safer school year

    Cybersecurity doesn’t require a 10-person IT team or a massive budget. Every district, no matter the size, can take meaningful, manageable steps to reduce risk, establish guardrails, and build trust.

    Simple, actionable steps go a long way: Choose tools that are transparent about data use, use trusted frameworks and certifications as guideposts, and make cybersecurity training a regular part of staff development. Even small efforts, like a five-minute refresher on phishing during back-to-school sessions, can have an outsized impact on your district’s overall security posture.

    For schools operating without deep resources or internal expertise, this work is especially urgent–and entirely possible. It just requires knowing where to start.

  • What we lose when AI replaces teachers

    A colleague of ours recently attended an AI training where the opening slide featured a list of all the ways AI can revolutionize our classrooms. Grading was listed at the top. Sure, AI can grade papers in mere seconds, but should it?

    As one of our students, Jane, stated: “It has a rubric and can quantify it. It has benchmarks. But that is not what actually goes into writing.” Our students recognize that AI cannot replace the empathy and deep understanding needed to appreciate the growth, effort, and development of their voice. What concerns us most about grading our students’ written work with AI is the transformation of their audience from human to robot.

    If we teach our students throughout their writing lives that what the grading robot says matters most, then we are teaching them that their audience doesn’t matter. As Wyatt, another student, put it: “If you can use AI to grade me, I can use AI to write.” NCTE, in its position statements for Generative AI, reminds us that writing is a human act, not a mechanical one. Reducing it to automated scores undermines its value and teaches students, like Wyatt and Jane, that the only time we write is for a grade. That is a future of teaching writing we hope to never see.

    We need to pause when tech companies tout AI as the grader of student writing. This isn’t a question of capability. AI can score essays. It can be calibrated to rubrics. It can, as Jane said, provide students with encouragement and feedback specific to their developing skills. And we have no doubt it has the potential to make a teacher’s grading life easier. But just because we can outsource some educational functions to technology doesn’t mean we should.

    It is bad enough how many students already see their teacher as their only audience. Or worse, when students are writing for teachers who see their written work strictly through the lens of a rubric, their audience is limited to the rubric. Even those options are better than writing for a bot. Instead, let’s question how often our students write to a broader audience of their peers, parents, community, or a panel of judges for a writing contest. We need to reengage with writing as a process and implement AI as a guide or aide rather than a judge with the last word on an essay score.

    Our best foot forward is to put AI in its place. The use of AI in the writing process is better served in the developing stages of writing. AI is excellent as a guide for brainstorming. It can help in a variety of ways when a student is struggling and looking for five alternatives to their current ending or an idea for a metaphor. And if you or your students like AI’s grading feature, they can paste their work into a bot for feedback prior to handing it in as a final draft.

    We need to recognize that there are grave consequences if we let a bot do all the grading. As teachers, we should recognize bot grading for what it is: automated education. We can and should leave the promises of hundreds of essays graded in an hour for the standardized test providers. Our classrooms are alive with people who have stories to tell, arguments to make, and research to conduct. We see our students beyond the raw data of their work. We recognize that the poem our student has written for their sick grandparent might be a little flawed, but it matters a whole lot to the person writing it and to the person they are writing it for. We see the excitement or determination in our students’ eyes when they’ve chosen a research topic that is important to them. They want their cause to be known and understood by others, not processed and graded by a bot.

    The adoption of AI into education should be conducted with caution. Many educators are experimenting with AI tools in thoughtful and student-centered ways. In a recent article, David Cutler describes his experience using an AI-assisted platform to provide feedback on his students’ essays. While Cutler found the tool surprisingly accurate and helpful, the true value lies in the feedback being used as part of the revision process. As this article reinforces, the role of a teacher is not just to grade, but to support and guide learning. When used intentionally (and, we emphasize, as in-process feedback), AI can enhance that learning, but the final word, and the relationship behind it, must still come from a human being.

    When we hand over grading to AI, we risk handing over something much bigger–our students’ belief that their words matter and deserve an audience. Our students don’t write to impress a rubric, they write to be heard. And when we replace the reader with a robot, we risk teaching our students that their voices only matter to the machine. We need to let AI support the writing process, not define the product. Let it offer ideas, not deliver grades. When we use it at the right moments and for the right reasons, it can make us better teachers and help our students grow. But let’s never confuse efficiency with empathy. Or algorithms with understanding.

  • What really shapes the future of AI in education?

    This post originally appeared on the Christensen Institute’s blog and is reposted here with permission.

    A few weeks ago, MIT’s Media Lab put out a study on how AI affects the brain. The study ignited a firestorm of posts and comments on social media, given its provocative finding that students who relied on ChatGPT for writing tasks showed lower brain engagement on EEG scans, hinting that offloading thinking to AI can literally dull our neural activity. For anyone who has used AI, it’s not hard to see how AI systems can become learning crutches that encourage mental laziness.

    But I don’t think a simple “AI harms learning” conclusion tells the whole story. In this blog post (adapted from a recent series of posts I shared on LinkedIn), I want to add to the conversation by tackling the potential impact of AI in education from four angles. I’ll explore how AI’s unique adaptability can reshape rigid systems, how it both fights and fuels misinformation, how AI can be both good and bad depending on how it is used, and why its funding model may ultimately determine whether AI serves learners or short-circuits their growth.

    What if the most transformative aspect of AI for schools isn’t its intelligence, but its adaptability?

    Most technologies make us adjust to them. We have to learn how they work and adapt our behavior. Industrial machines, enterprise software, even a basic thermostat—they all come with instructions and patterns we need to learn and follow.

    Education highlights this dynamic in a different way. How does education’s “factory model” work when students don’t come to school as standardized raw inputs? In many ways, schools expect students to conform to the requirements of the system—show up on time, sharpen your pencil before class, sit quietly while the teacher is talking, raise your hand if you want to speak. Those social norms are expectations we place on students so that standardized education can work. But as anyone who has tried to manage a group of six-year-olds knows, a class of students is full of complicated humans who never fully conform to what the system expects. So, teachers serve as the malleable middle layer. They adapt standardized systems to make them work for real students. Without that human adaptability, the system would collapse.

    Same thing in manufacturing. Edgar Schein notes that engineers aim to design systems that run themselves. But operators know systems never work perfectly. Their job—and often their sense of professional identity—is about having the expertise to adapt and adjust when things inevitably go off-script. Human adaptability in the face of rigid systems keeps everything running.

    So, how does this relate to AI? AI breaks the mold of most machines and systems humans have designed and dealt with throughout history. It doesn’t just follow its algorithm and expect us to learn how to use it. It adapts to us, like how teachers or factory operators adapt to the realities of the world to compensate for the rigidity of standardized systems.

    You don’t need a coding background or a manual. You just speak to it. (I literally hit the voice-to-text button and talk to it like I’m explaining something to a person.) Messy, natural human language—the age-old human-to-human interface that our brains are wired to pick up on as infants—has become the interface for large language models. In other words, what makes today’s AI models amazing is their ability to use our interface, rather than asking us to learn theirs.

    For me, the early hype about “prompt engineering” never really made sense. It assumed that success with AI required becoming an AI whisperer who knew how to speak AI’s language. But in my experience, working well with AI is less about learning special ways to talk to AI and more about just being a clear communicator, just like a good teacher or a good manager.

    Now imagine this: what if AI becomes the new malleable middle layer across all kinds of systems? Not just a tool, but an adaptive bridge that makes other rigid, standardized systems work well together. If AI can make interoperability nearly frictionless—adapting to each system and context, rather than forcing people to adapt to it—that could be transformative. It’s not hard to see how this shift might ripple far beyond technology into how we organize institutions, deliver services, and design learning experiences.

    Consider three concrete examples of how this might transform schools. First, our current system heavily relies on the written word as the medium for assessing students’ learning. To be clear, writing is an important skill that students need to develop to help them navigate the world beyond school. Yet at the same time, schools’ heavy reliance on writing as the medium for demonstrating learning creates barriers for students with learning disabilities, neurodivergent learners, or English language learners–all of whom may have a deep understanding but struggle to express it through writing in English. AI could serve as that adaptive layer, allowing students to demonstrate their knowledge and receive feedback through speech, visual representations, or even their native language, while still ensuring rigorous assessment of their actual understanding.

    Second, it’s obvious that students don’t all learn at the same pace—yet we’ve forced learning to happen at a uniform timeline because individualized pacing quickly becomes completely unmanageable when teachers are on their own to cover material and provide feedback to their students. So instead, everyone spends the same number of weeks on each unit of content and then moves to the next course or grade level together, regardless of individual readiness. Here again, AI could serve as that adaptive layer for keeping track of students’ individual learning progressions and then serving up customized feedback, explanations, and practice opportunities based on students’ individual needs.

    Third, success in school isn’t just about academics—it’s about knowing how to navigate the system itself. Students need to know how to approach teachers for help, track announcements for tryouts and auditions, fill out paperwork for course selections, and advocate for themselves to get into the classes they want. These navigation skills become even more critical for college applications and financial aid. But there are huge inequities here because much of this knowledge comes from social capital—having parents or peers who already understand how the system works. AI could help level the playing field by serving as that adaptive coaching layer, guiding any student through the bureaucratic maze rather than expecting them to figure it out on their own or rely on family connections to decode the system.

    Can AI help solve the problem of misinformation?

    Most people I talk to are skeptical of the idea in this subhead—and understandably so.

    We’ve all seen the headlines: deepfakes, hallucinated facts, bots that churn out clickbait. AI, many argue, will supercharge misinformation, not solve it. Others worry that overreliance on AI could make people less critical and more passive, outsourcing their thinking instead of sharpening it.

    But what if that’s not the whole story?

    Here’s what gives me hope: AI’s ability to spot falsehoods and surface truth at scale might be one of its most powerful—and underappreciated—capabilities.

    First, consider what makes misinformation so destructive. It’s not just that people believe wrong facts. It’s that people build vastly different mental models of what’s true and real. They lose any shared basis for reasoning through disagreements. Once that happens, dialogue breaks down. Facts don’t matter because facts aren’t shared.

    Traditionally, countering misinformation has required human judgment and painstaking research, both time-consuming and limited in scale. But AI changes the equation.

    Unlike any single person, a large language model (LLM) can draw from an enormous base of facts, concepts, and contextual knowledge. LLMs know far more facts from their training data than any person can learn in a lifetime. And when paired with tools like a web browser or citation database, they can investigate claims, check sources, and explain discrepancies.

    Imagine reading a social media post and getting a sidebar summary—courtesy of AI—that flags misleading statistics, offers missing context, and links to credible sources. Not months later, not buried in the comments—instantly, as the content appears. The technology to do this already exists.
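
    As a rough illustration of how such a pipeline might be wired together, here is a minimal Python sketch. It is a sketch only: `llm_complete` and `search_web` are hypothetical stand-ins for whatever language model and retrieval services a platform would actually use.

    ```python
    from dataclasses import dataclass

    # Hypothetical stand-ins for a platform's model and retrieval services.
    def llm_complete(prompt: str) -> str:
        """Placeholder for a call to any large language model API."""
        raise NotImplementedError("wire in a model provider here")

    def search_web(query: str) -> list[str]:
        """Placeholder for a search or citation-database lookup; returns snippets."""
        raise NotImplementedError("wire in a retrieval service here")

    @dataclass
    class FactCheck:
        claim: str
        verdict: str        # "supported", "contradicted", or "needs-context"
        explanation: str    # missing context or reasoning for the verdict
        sources: list[str]  # evidence snippets the verdict was based on

    def check_post(post_text: str) -> list[FactCheck]:
        # 1. Extract discrete, checkable claims from the post.
        claims = llm_complete(
            f"List each factual claim in this post, one per line:\n{post_text}"
        ).splitlines()

        results = []
        for claim in claims:
            # 2. Gather evidence for the claim from external sources.
            evidence = search_web(claim)
            # 3. Judge the claim against the evidence, not model memory alone.
            answer = llm_complete(
                f"Claim: {claim}\nEvidence: {evidence}\n"
                "Reply with one word (supported, contradicted, or needs-context), "
                "a space, then a one-sentence explanation."
            )
            verdict, _, explanation = answer.partition(" ")
            results.append(FactCheck(claim, verdict, explanation, evidence))
        return results
    ```

    Notably, all the hard parts of fact-checking–source quality, judgment calls, bias–live inside those two stand-in calls, which is why the questions of trust discussed below still matter.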

    Of course, AI is not perfect as a fact-checker. When large language models generate text, they aren’t producing precise queries of facts; they’re making probabilistic guesses at what the right response should be based on their training, and sometimes those guesses are wrong. (Just like human experts, they also generate answers by drawing on their expertise, and they sometimes get things wrong.) AI also has its own blind spots and biases based on the biases it inherits from its training data. 

    But in many ways, both hallucinations and biases in AI are easier to detect and address than the false statements and biases that come from millions of human minds across the internet. AI’s decision rules can be audited. Its output can be tested. Its propensity to hallucinate can be curtailed. That makes it a promising foundation for improving trust, at least compared to the murky, decentralized mess of misinformation we’re living in now.

    This doesn’t mean AI will eliminate misinformation. But it could dramatically increase the accessibility of accurate information and reduce the friction involved in verifying what’s true. Of course, most platforms don’t yet include built-in AI fact-checking, and even if they did, that approach would raise important concerns. Do we trust the sources that those companies prioritize? The rules their systems follow? The incentives that guide how their tools are designed?

    But beyond questions of trust, there’s a deeper concern: when AI passively flags errors or supplies corrections, it risks turning users into passive recipients of “answers” rather than active seekers of truth. Learning requires effort. It’s not just about having the right information—it’s about asking good questions, thinking critically, and grappling with ideas.

    That’s why I think one of the most important things to teach young people about how to use AI is to treat it as a tool for interrogating the information and ideas they encounter, both online and from AI itself. Just like we teach students to proofread their writing or double-check their math, we should help them develop habits of mind that use AI to spark their own inquiry—to question claims, explore perspectives, and dig deeper into the truth.

    Still, this focuses on just one side of the story. As powerful as AI may be for fact-checking, it will inevitably be used to generate deepfakes and spin persuasive falsehoods.

    AI isn’t just good or bad—it’s both. The future of education depends on how we use it.

    Much of the commentary around AI takes a strong stance: either it’s an incredible force for progress or it’s a terrifying threat to humanity. These bold perspectives make for compelling headlines and persuasive arguments. But in reality, the world is messy. And most transformative innovations—AI included—cut both ways.

    History is full of examples of technologies that have advanced society in profound ways while also creating new risks and challenges. The Industrial Revolution made it possible to mass-produce goods that have dramatically improved the quality of life for billions. It has also fueled pollution and environmental degradation. The internet connects communities, opens access to knowledge, and accelerates scientific progress—but it also fuels misinformation, addiction, and division. Nuclear energy can power cities—or obliterate them.

    AI is no different. It will do amazing things. It will do terrible things. The question isn’t whether AI will be good or bad for humanity—it’s how the choices of its users and developers will determine the directions it takes. 

    Because I work in education, I’ve been especially focused on the impact of AI on learning. AI can make learning more engaging, more personalized, and more accessible. It can explain concepts in multiple ways, adapt to your level, provide feedback, generate practice exercises, or summarize key points. It’s like having a teaching assistant on demand to accelerate your learning.

    But it can also short-circuit the learning process. Why wrestle with a hard problem when AI will just give you the answer? Why wrestle with an idea when you can ask AI to write the essay for you? And even when students have every intention of learning, AI can create the illusion of learning while leaving understanding shallow.

    This double-edged dynamic isn’t limited to learning. It’s also apparent in the world of work. AI is already making it easier for individuals to take on entrepreneurial projects that would have previously required whole teams. A startup no longer needs to hire a designer to create its logo, a marketer to build its brand assets, or an editor to write its press releases. In the near future, you may not even need to know how to code to build a software product. AI can help individuals turn ideas into action with far fewer barriers. And for those who feel overwhelmed by the idea of starting something new, AI can coach them through it, step by step. We may be on the front end of a boom in entrepreneurship unlocked by AI.

    At the same time, however, AI is displacing many of the entry-level knowledge jobs that people have historically relied on to get their careers started. Tasks like drafting memos, doing basic research, or managing spreadsheets—once done by junior staff—can increasingly be handled by AI. That shift is making it harder for new graduates to break into the workforce and develop their skills on the job.

    One way to mitigate these challenges is to build AI tools that are designed to support learning, not circumvent it. For example, Khan Academy’s Khanmigo helps students think critically about the material they’re learning rather than just giving them answers. It encourages ideation, offers feedback, and prompts deeper understanding—serving as a thoughtful coach, not a shortcut.

    But the deeper issue AI brings into focus is that our education system often treats learning as a means to an end—a set of hoops to jump through on the way to a diploma. To truly prepare students for a world shaped by AI, we need to rethink that approach. First, we should focus less on teaching only the skills AI can already do well. And second, we should make learning more about pursuing goals students care about—goals that require curiosity, critical thinking, and perseverance. Rather than training students to follow a prescribed path, we should be helping them learn how to chart their own. That’s especially important in a world where career paths are becoming less predictable, and opportunities often require the kind of initiative and adaptability we associate with entrepreneurs.

    In short, AI is just the latest technological double-edged sword. It can support learning, or short-circuit it. Boost entrepreneurship—or displace entry-level jobs. The key isn’t to declare AI good or bad, but to recognize that it’s both, and then to be intentional about how we shape its trajectory. 

    That trajectory won’t be determined by technical capabilities alone. Who pays for AI, and what they pay it to do, will influence whether it evolves to support human learning, expertise, and connection, or to exploit our attention, take our jobs, and replace our relationships.

    What actually determines whether AI helps or harms?

    When people talk about the opportunities and risks of artificial intelligence, the conversation tends to focus on the technology’s capabilities—what it might be able to do, what it might replace, what breakthroughs lie ahead. But just focusing on what the technology does—both good and bad—doesn’t tell the whole story. The business model behind a technology influences how it evolves.

    For example, when advertisers are the paying customer, as they are for many social media platforms, products tend to evolve to maximize user engagement and time-on-platform. That’s how we ended up with doomscrolling—endless content feeds optimized to occupy our attention so companies can show us more ads, often at the expense of our well-being.

    That incentive could be particularly dangerous with AI. If you combine superhuman persuasion tools with an incentive to monopolize users’ attention, the results will be deeply manipulative. And this gets at a concern my colleague Julia Freeland Fisher has been raising: What happens if AI systems start to displace human connection? If AI becomes your go-to for friendship or emotional support, it risks crowding out the real relationships in your life.

    Whether or not AI ends up undermining human relationships depends a lot on how it’s paid for. An AI built to hold your attention and keep you coming back might try to be your best friend. But an AI built to help you solve problems in the real world will behave differently. That kind of AI might say, “Hey, we’ve been talking for a while—why not go try out some of the things we’ve discussed?” or “Sounds like it’s time to take a break and connect with someone you care about.”

    Some decisions made by the major AI companies seem encouraging. Sam Altman, OpenAI’s CEO, has said that adopting ads would be a last resort. “I’m not saying OpenAI would never consider ads, but I don’t like them in general, and I think that ads-plus-AI is sort of uniquely unsettling to me.” Instead, most AI developers like OpenAI and Anthropic have turned to user subscriptions, an incentive structure that doesn’t steer as hard toward addictiveness. OpenAI is also exploring AI-centric hardware as a business model—another experiment that seems more promising for user wellbeing.

    So far, we’ve been talking about the directions AI will take as companies develop their technologies for individual consumers, but there’s another angle worth considering: how AI gets adopted into the workplace. One of the big concerns is that AI will be used to replace people, not necessarily because it does the job better, but because it’s cheaper. That decision often comes down to incentives. Right now, businesses pay a lot in payroll taxes and benefits for every employee, but they get tax breaks when they invest in software and machines. So, from a purely financial standpoint, replacing people with technology can look like a smart move. In the book, The Once and Future Worker, Oren Cass discusses this problem and suggests flipping that script—taxing capital more and labor less—so companies aren’t nudged toward cutting jobs just to save money. That change wouldn’t stop companies from using AI, but it would encourage them to deploy it in ways that complement, rather than replace, human workers.

    Currently, while AI companies operate without sustainable business models, they’re buoyed by investor funding. Investors are willing to bankroll companies with little or no revenue today because they see the potential for massive profits in the future. But that investor model creates pressure to grow rapidly and acquire as many users as possible, since scale is often a key metric of success in venture-backed tech. That drive for rapid growth can push companies to prioritize user acquisition over thoughtful product development, potentially at the expense of safety, ethics, or long-term consequences. 

    Given these realities, what can parents and educators do? First, they can be discerning customers. There are many AI tools available, and the choices they make matter. Rather than simply opting for what’s most entertaining or immediately useful, they can support companies whose business models and design choices reflect a concern for users’ well-being and societal impact.

    Second, they can be vocal. Journalists, educators, and parents all have platforms—whether formal or informal—to raise questions, share concerns, and express what they hope to see from AI companies. Public dialogue helps shape media narratives, which in turn shape both market forces and policy decisions.

    Third, they can advocate for smart, balanced regulation. As I noted above, AI shouldn’t be regulated as if it’s either all good or all bad. But reasonable guardrails can ensure that AI is developed and used in ways that serve the public good. Just as the customers and investors in a company’s value network influence its priorities, so too can policymakers play a constructive role as value network actors by creating smart policies that promote general welfare when market incentives fall short.

    In sum, a company’s value network—who its investors are, who pays for its products, and what they hire those products to do—determines what companies optimize for. And in AI, that choice might shape not just how the technology evolves, but how it impacts our lives, our relationships, and our society.

  • The move from principal to district leader was fraught–here’s what I missed the most

    This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.

    I didn’t expect to grieve.

    I knew taking a central office role meant trading the school building for a district badge. I knew the days would be filled with policy, meetings, and personnel issues. What I didn’t know was how much I would miss morning announcements, front office chatter, and the small but sacred chaos of classroom life.

    When I accepted my central office role at Knox County Schools nearly three years ago, I heard words of congratulations and encouragement, and a lot of “You’ll be great at this.” What I didn’t hear was, “You’re going to miss the cafeteria noise” or “You’ll feel phantom pain for your walkie and reach for it like it’s still there.” No one warned me I’d find myself lingering too long during school visits, trying to feel like I still belong.

    What I lost wasn’t just proximity; it was identity.

    As a principal, I was part of everything. Students shouted greetings across the parking lot. Parents stopped me in the grocery store to ask about bus routes or share weekend news. Teachers popped into my office with questions or just to drop off a piece of cake from the lounge. I wasn’t above the work. I was in it. I was woven into the messy, beautiful rhythm of a school day.

    Shifting to the central office changed not just the pace of my day, but the feel of the work. The space was quieter, the communication more deliberate. There are no morning announcements. No car rider line and morning high-fives from kids. No spontaneous TikTok dances during class change. I moved from the rhythm of a living, breathing school to a place where school leadership feels more technical, more filtered, and more removed.

    The relationships changed, too. As a principal, you’re not just part of a team; you’re a part of a family. You laugh together, carry each other’s burdens, and share both the stress and the wins. Move into a district role, and you’re now “from downtown,” even if your heart still lives on campus. You walk into buildings with a badge that means something different, and the conversations shift just enough for you to notice.

    None of this means the central office work doesn’t matter. It does. Or that I don’t love it. I do. Central office work gives me a systems-level view of how our schools function. I find purpose in improving not just individual outcomes, but the structures that guide them.

    Still, the change in relational gravity caught me off guard. And once the initial disorientation passed, it left me with a deeper concern: How will I stay connected to how the work is actually experienced and carried out in schools if I’m no longer living in it each day?

    At first, I told myself it was just a learning curve, that it would pass, that I’d find new rhythms soon enough. And I did — but not before realizing that central office leadership requires a different kind of muscle. One I hadn’t needed before.

    As a principal, I lived in fast feedback loops. I saw the effects of my decisions by lunchtime. I knew which teachers were having a hard week, which student needed extra eyes, which parent was about to call. Even hard conversations came with a certain clarity because I was close to the context and knew the culture I wanted to build.

    At the district level, the impact is broader but harder to track. The wins take longer to see. The feedback is quieter.

    I had to become more intentional about noticing what I could no longer see. That meant listening differently during school visits, paying closer attention to what leaders were navigating, and asking better questions. Not just about what was happening, but what it was costing them to make it happen.

    One of the advantages of working at a systems level is being able to recognize patterns across multiple settings. They can reveal root causes that individual concerns might never expose. That clarity opens the door to more aligned, lasting support.

    I began thinking less about whether expectations were clear and more about whether they were sustainable. My role was not to direct the work but to support the people carrying it out.

    These changes didn’t come naturally. They came because I didn’t want to become a leader who made good decisions in theory but stayed out of touch in practice. I didn’t want to lead by spreadsheet, even though color-coded tabs bring me great joy. I wanted to lead by understanding.

    Eventually, I began to see that even though I was no longer in the thick of the school day, I could still choose to stay connected — to show up, to ask real questions, to build trust not just through policy, but through presence.

    The classroom educators and school leaders I supported didn’t need someone who had knowledge of what it was like to be a teacher or principal. They needed someone who remembered what it felt like to be one. Someone who hadn’t forgotten the rush of the morning bell or the weight of a tough parent meeting or the impossible feeling of juggling school culture, teacher evaluations, instructional priorities, and a leaky roof all before noon.

    I think back often to my first year in central office. The silence. The absence of bells and kids and chaos. The invisible weight of missing something no one warned me I would lose. I remember walking through a school one afternoon and instinctively reaching for my walkie talkie. It wasn’t there. Of course it wasn’t there. But the reflex reminded me of something important: I still wanted to be tuned in.

    Leadership doesn’t have to grow lonelier as it grows broader. But staying connected takes intention. It takes habits, not just memories.

    I didn’t expect to grieve. But I’m grateful I did. Because grief has a way of reminding you what still deserves your presence.

    Chalkbeat is a nonprofit news site covering educational change in public schools.

    For more news on district management, visit eSN’s Educational Leadership hub.

  • Data, privacy, and cybersecurity in schools: A 2025 wake-up call

    In 2025, schools are sitting on more data than ever before. Student records, attendance, health information, behavioral logs, and digital footprints generated by edtech tools have turned K-12 institutions into data-rich environments. As artificial intelligence becomes a central part of the learning experience, these data streams are being processed in increasingly complex ways. But with this complexity comes a critical question: Are schools doing enough to protect that data?

    The answer, in many cases, is no.

    The rise of shadow AI

    According to CoSN’s May 2025 State of EdTech District Leadership report, a significant portion of districts, specifically 43 percent, lack formal policies or guidance for AI use. While 80 percent of districts have generative AI initiatives underway, this policy gap is a major concern. At the same time, Common Sense Media’s Teens, Trust and Technology in the Age of AI highlights that many teens have been misled by fake content and struggle to discern truth from misinformation, underscoring the broad adoption and potential risks of generative AI.

    This lack of visibility and control has led to the rise of what many experts call “shadow AI”: unapproved apps and browser extensions that process student inputs, store them indefinitely, or reuse them to train commercial models. These tools are often free, widely adopted, and nearly invisible to IT teams. Shadow AI expands the district’s digital footprint in ways that often escape policy enforcement, opening the door to data leakage and compliance violations. CoSN’s 2025 report specifically notes that “free tools that are downloaded in an ad hoc manner put district data at risk.”

    Data protection: The first pillar under pressure

    The U.S. Department of Education’s AI Toolkit for Schools urges districts to treat student data with the same care as medical or financial records. However, many AI tools used in classrooms today are not inherently FERPA-compliant and do not always disclose where or how student data is stored. Teachers experimenting with AI-generated lesson plans or feedback may unknowingly input student work into platforms that retain or share that data. In the absence of vendor transparency, there is no way to verify how long data is stored, whether it is shared with third parties, or how it might be reused. FERPA requires that if third-party vendors handle student data on behalf of the institution, they must comply with FERPA. This includes ensuring data is not used for unintended purposes or retained for AI training.

    Some tools, marketed as “free classroom assistants,” require login credentials tied to student emails or learning platforms. This creates additional risks if authentication mechanisms are not protected or monitored. Even widely used generative tools may include language in their privacy policies allowing them to use uploaded content for system training or performance optimization.

    Data processing and the consent gap

    Generative AI models are trained on large datasets, and many free tools continue learning from user prompts. If a student pastes an essay or a teacher includes student identifiers in a prompt, that information could enter a commercial model’s training loop. This creates a scenario where data is being processed without explicit consent, potentially in violation of COPPA (Children’s Online Privacy Protection Act) and FERPA. While the FTC’s December 2023 update to the COPPA Rule did not codify school consent provisions, existing guidance still allows schools to consent to technology use on behalf of parents in educational contexts. However, the onus remains on schools to understand and manage these consent implications, especially with the rule’s new amendments becoming effective June 21, 2025, which strengthen protections and require separate parental consent for third-party disclosures for targeted advertising.

    Moreover, many educators and students are unaware of what constitutes “personally identifiable information” (PII) in these contexts. A name combined with a school ID number, disability status, or even a writing sample could easily identify a student, especially in small districts. Without proper training, well-intentioned AI use can cross legal lines unknowingly.
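
    To make that risk concrete, here is a minimal sketch of a prompt-screening filter a district might run before text leaves for an external AI tool. It is illustrative only; the regular expressions (including the assumed 6-9 digit student-ID format) are hypothetical examples, not a vetted PII standard.

    ```python
    import re

    # Illustrative patterns only: a real district would need its own identifier
    # formats, broader coverage, and human review. The student-ID pattern below
    # (a bare 6-9 digit number) is an assumption, not a standard.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "student_id": re.compile(r"\b\d{6,9}\b"),
    }

    def redact(prompt: str) -> tuple[str, list[str]]:
        """Replace likely PII with placeholders before a prompt leaves the district."""
        flagged = []
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(prompt):
                flagged.append(label)
                prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
        return prompt, flagged

    clean, flags = redact("Email jane.doe@school.org about student 12345678's IEP.")
    print(clean)  # Email [EMAIL REDACTED] about student [STUDENT_ID REDACTED]'s IEP.
    print(flags)  # ['email', 'student_id']
    ```

    A simple filter like this can catch the most common slips, but it is no substitute for training staff and students on what counts as PII.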

    Cybersecurity risks multiply

    AI tools have also increased the attack surface of K-12 networks. According to ThreatDown’s 2024 State of Ransomware in Education report, ransomware attacks on K-12 schools increased by 92 percent between 2022 and 2023, with 98 total attacks in 2023. This trend is projected to continue as cybercriminals use AI to create more targeted phishing campaigns and detect system vulnerabilities faster. AI-assisted attacks can mimic human language and tone, making them harder to detect. Some attackers now use large language models to craft personalized emails that appear to come from school administrators.

    Many schools lack endpoint protection for student devices, and third-party integrations often bypass internal firewalls. Free AI browser extensions may collect keystrokes or enable unauthorized access to browser sessions. The more tools that are introduced without IT oversight, the harder it becomes to isolate and contain incidents when they occur. CoSN’s 2025 report indicates that 60 percent of edtech leaders are “very concerned about AI-enabled cyberattacks,” yet 61 percent still rely on general funds for cybersecurity efforts, not dedicated funding.

    Building a responsible framework

    To mitigate these risks, school leaders need to:

    • Audit tool usage with platforms like Lightspeed Digital Insight to identify AI tools being accessed without approval. Districts should maintain a living inventory of all digital tools. Lightspeed Digital Insight, for example, is vetted by 1EdTech for data privacy.
    • Develop and publish AI use policies that clarify acceptable practices, define data handling expectations, and outline consequences for misuse. Policies should distinguish between tools approved for instructional use and those requiring further evaluation.
    • Train educators and students to understand how AI tools collect and process data, how to interpret AI outputs critically, and how to avoid inputting sensitive information. AI literacy should be embedded in digital citizenship curricula, with resources available from organizations like Common Sense Media and aiEDU.
    • Vet all third-party apps through standards like the 1EdTech TrustEd Apps program. Contracts should specify data deletion timelines and limit secondary data use. The TrustEd Apps program has vetted over 12,000 products, providing a valuable resource for districts.
    • Simulate phishing attacks and test breach response protocols regularly. Cybersecurity training should be required for staff, and recovery plans must be reviewed annually.

    Trust starts with transparency

    In the rush to embrace AI, schools must not lose sight of their responsibility to protect students’ data and privacy. Transparency with parents, clarity for educators, and secure digital infrastructure are not optional. They are the baseline for trust in the age of algorithmic learning.

    AI can support personalized learning, but only if we put safety and privacy first. The time to act is now. Districts that move early to build policies, offer training, and coordinate oversight will be better prepared to lead AI adoption with confidence and care.

  • Digital learning in a new age

    Digital learning–in the form of online, hybrid, and blended schools and courses–is growing steadily in U.S. schools. These learning options can transform education because they allow for learning, teaching, and student engagement outside the confines of traditional physical schools.

    Students no longer have to show up at a school building every morning, and millions of students and families are demonstrating their preference for more flexible learning options by choosing their district’s online schools, charter schools, and private schools.

    Digital learning meets the needs of today’s students, who are seeking flexibility in their scheduling. Many high school students want to pursue sports, arts, and career interests in the form of jobs, internships, and other programs. Others simply crave the control an innovative school gives them over the time, place, and pace at which they learn. Digital learning also meets the needs of teachers, who, just like knowledge workers around the world, are interested in employment that allows them to choose their schedules.

    Online and hybrid learning is becoming easier to implement as technology grows and improves. Unlike just a few years ago, when teachers were concerned about using multiple technology tools, much-improved integration and interoperability between platforms is making adoption of multiple tools far easier.

    While relatively few students and families prefer their education to be 100 percent online, many students are selecting hybrid options that combine online and face-to-face interactions. Much as young knowledge workers increasingly blend home offices with corporate headquarters, digital learning is showing up in unexpected places as well. Let’s take a closer look at two examples: career and technical education (CTE) and physical education (PE).

    CTE is often perceived as being “hands on” in ways that casual observers might expect would not align well with digital learning–but the truth is exactly the opposite.

    Digital learning is broadening the world of CTE for students. Online and hybrid schools provide CTE programs by offering a combination of online career courses and by partnering with businesses, state and regional training centers, and other organizations to combine online learning with on-the-ground, real-world jobs, internships, and learning opportunities.

    Hybrid schools and programs, including those run by mainstream districts, provide academic scheduling flexibility to students who seek to prioritize their time in jobs, internships, or career training. No longer do these students have to fit in their career interests after regular school hours or on weekends–when many companies and high-value jobs are not open or available.

    For example, a student interested in a veterinary career can work at a vet’s office during regular weekday school hours, completing some of their online coursework after normal work hours.

    Virtual Arkansas, a state-supported course provider serving districts across the state, has made digital CTE a central element of its offerings.

    “CTE is a key part of our value to students and schools across Arkansas. Students, teachers, counselors, and the business community all appreciate that we are providing flexible options for students to gain real-world expertise and experience via our online and hybrid programs,” said John Ashworth, the program’s executive director.

    Perhaps even more surprising than CTE shifting to digital is the idea that next-generation physical education is built on online tools, adept teachers, and student voice and choice.

    Today’s students are accustomed to walking into a coffee shop and ordering a drink with a dozen customized features. And yet in traditional PE classes, we expect all students to want to learn the same sport, activity, or exercise at the same time and pace. That’s how too many gym classes still operate–on the factory model of education, in which all students do the same thing at the same time.

    There’s a better way, which is being embraced by online schools, hybrid schools, and traditional districts. Online and hybrid PE classes shift exercise, activity, and wellness to match student interests and timing. A student chooses from hundreds of detailed instructional videos in dozens of categories, from aquatics to basketball to yoga, trains using the videos combined with instruction provided by a teacher, and tracks her progress.

    This doesn’t sound like a traditional gym class; instead, it mimics the ways that young adults are active in gyms, yoga studios, and sports leagues all around the country. Consider fitness clubs, from the local YMCA to the most high-end club: they all offer a wide variety of classes, on varied schedules to fit busy lifestyles, and at different levels of expertise. No school can match this with the traditional approach to gym class, of course. But Joe Titus, founder and CEO of Hiveclass, which offers online physical education courses, points out that student agency to choose from a wide variety of PE options is possible–when schools are ready to make the leap.

    Online schools and district programs are already doing so, with fantastic outcomes as students lean into their choices and options. As futurist William Gibson said decades ago, “the future is already here–it’s just not evenly distributed.”

    Online and hybrid CTE, physical education, and other options prove the point. The next step is to make these options widely available to all the students who are seeking a better alternative.


  • National AI training hub for educators to open, funded by OpenAI and Microsoft

    National AI training hub for educators to open, funded by OpenAI and Microsoft

    This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.

    More than 400,000 K-12 educators across the country will get free training in AI through a $23 million partnership between a major teachers union and leading tech companies that is designed to close gaps in the use of technology and provide a national model for AI-integrated curriculum.

    The new National Academy for AI Instruction will be based in the downtown Manhattan headquarters of the United Federation of Teachers, the New York City affiliate of the American Federation of Teachers, and will provide workshops, online courses, and hands-on training sessions. This hub-based model of teacher training was inspired by the work of unions like the United Brotherhood of Carpenters, which have created similar training centers with industry partners, according to AFT President Randi Weingarten.

    “Teachers are facing huge challenges, which include navigating AI wisely, ethically and safely,” Weingarten said at a press conference Tuesday announcing the initiative. “The question was whether we would be chasing it or whether we would be trying to harness it.”

    The initiative involves the AFT, UFT, OpenAI, Microsoft, and Anthropic.

    The Trump administration has encouraged AI integration in the classroom. More than 50 companies have signed on to a White House pledge to provide grants, education materials, and technology to invest in AI education.

    In the wake of federal funding cuts to public education and the impact of Trump’s sweeping tax and policy bill on schools, Weingarten sees this partnership with private tech companies as a crucial investment in teacher preparation.

    “We are actually ensuring that kids have, that teachers have, what they need to deal with the economy of today and tomorrow,” Weingarten said.

    The academy will be based in a city where the school system initially banned the use of AI in the classroom, claiming it would interfere with the development of critical thinking skills. A few months later, then-New York City schools Chancellor David Banks did an about-face, pledging to help schools smartly incorporate the technology. He said New York City schools would embrace the potential of AI to drive individualized learning. But concrete plans have been limited.

    The AFT, meanwhile, has tried to position itself as a leader in the field. Last year, the union released its own guidelines for AI use in the classroom and funded pilot programs around the country.

    Vincent Plato, a New York City Public Schools K-8 educator and UFT Teacher Center director, said the advent of AI reminds him of when teachers first started using word processors.

    “We are watching educators transform the way people use technology for work in real time, but with AI it’s on another unbelievable level because it’s just so much more powerful,” he said in a press release announcing the new partnership. “It can be a thought partner when they’re working by themselves, whether that’s late-night lesson planning, looking at student data or filing any types of reports — a tool that’s going to be transformative for teachers and students alike.”

    Teachers who frequently use AI tools report saving 5.9 hours a week, according to a national survey conducted by the Walton Family Foundation in cooperation with Gallup. These tools are most likely to be used to support instructional planning, such as creating worksheets or modifying material to meet students’ needs. Half of the teachers surveyed stated that they believe AI will reduce teacher workloads.

    “Teachers are not only gaining back valuable time, they are also reporting that AI is helping to strengthen the quality of their work,” Stephanie Marken, senior partner for U.S. research at Gallup, said in a press release. “However, a clear gap in AI adoption remains. Schools need to provide the tools, training, and support to make effective AI use possible for every teacher.”

    While nearly half of school districts surveyed by the RAND Corporation reported training teachers to use AI-powered tools by fall 2024, high-poverty districts still lag behind their low-poverty counterparts. District leaders across the nation report a scarcity of external experts and resources to provide quality AI training to teachers.

    OpenAI, a founding partner of the National Academy for AI Instruction, will contribute $10 million over the next five years. The tech company will provide educators and course developers with technical support to integrate AI into classrooms as well as software applications to build custom, classroom-specific tools.

    Tech companies would benefit from this partnership by “co-creating” and improving their products based on feedback and insights from educators, said Gerry Petrella, Microsoft’s general manager for U.S. public policy, who hopes the initiative will align the needs of educators with the work of developers.

    In a sense, the teachers are training AI products just as much as they are being trained, according to Kathleen Day, a lecturer at Johns Hopkins Carey Business School. Day emphasized that through this partnership, AI companies would gain access to constant input from educators so they could continually strengthen their models and products.

    “Who’s training who?” Day said. “They’re basically saying, we’ll show you how this technology works, and you tell us how you would use it. When you tell us how you would use it, that is a wealth of information.”

    Many educators and policymakers are also concerned that introducing AI into the classroom could endanger student data and privacy. Racial bias in grading could also be reinforced by AI programs, according to research by The Learning Agency.

    Additionally, Trevor Griffey, a lecturer in labor studies at the University of California Los Angeles, warned the New York Times that tech firms could use these deals to market AI tools to students and expand their customer base.

    Chris Lehane, OpenAI’s chief global affairs officer, likened the initiative to expand AI access and training for educators to the New Deal efforts of the 1930s that expanded equal access to electricity. By working with teachers and expanding AI training, Lehane hopes the initiative will “democratize” access to AI.

    “There’s no better place to do that work than in the classroom,” he said at the Tuesday press conference.

    Chalkbeat is a nonprofit news site covering educational change in public schools.

    For more news on AI training, visit eSN’s Digital Learning hub.


  • How the FY25 funding freeze impacts students across America

    How the FY25 funding freeze impacts students across America

    This press release originally appeared online.

    Key points:

    Communities across the nation began the budget process for the 2025-2026 school year after Congress passed the FY25 Continuing Resolution on March 14, 2025. Historically, states receive these funds on July 1, enabling them to allocate resources to local districts at the start of the fiscal year. 

    Even though these funds were approved by Congress, the Administration froze their distribution on June 30. Since then, AASA, The School Superintendents Association, has advocated for their release, including organizing hundreds of superintendents to meet with congressional offices on the Hill during the week of July 7 to share information about the freeze’s impact.

    On July 16, the Office of Management and Budget (OMB) announced that Title IV-B, or 21st Century (afterschool), funds would be released. AASA’s Executive Director issued a statement about the billions of dollars that remain frozen.

    To gather more information about the real-world effects on students across America, AASA conducted a survey with its members. 

    From July 11 to July 18, AASA received responses from 628 superintendents in 43 states.

    Eighty-five percent of respondents said they have existing contracts paid with federal funds that are currently being withheld, and now have to cover those costs with local dollars.

    Respondents shared what will be cut to cover this forced cost shift: 

    • Nearly three out of four respondents said they will have to eliminate academic services for students. These include targeted literacy and math coaches, before- and after-school programming, tutoring, credit recovery, and CTE and dual enrollment opportunities.
    • Half of respondents reported they will have to lay off teachers and personnel. These personnel include those who work specifically with English-language learners and special education students, as well as staff who provide targeted reading and math interventions to struggling students.
    • Half of respondents said they will have to reduce afterschool and extracurricular offerings for students. These programs provide STEM/STEAM opportunities, performing arts and music programs, and AP coursework. 
    • Four out of five respondents indicated they will be forced to reduce or eliminate professional development offerings for educators. These funds build teachers’ expertise through training in areas such as the science of reading, math instruction, and the use of AI in the classroom. They also ensure new teachers have the mentors and coaching they need to be successful.

    With federal funding still withheld, 23 percent of respondents have already been forced to make tough choices about how to reallocate funding, and many districts are rapidly approaching similar inflection points.

    Notably, 29 percent of districts indicated that they must have access to these funds by August 1 to avoid cutting critical programs and services for students. Twenty-one percent of districts will have to notify parents and educators about the loss of programs and services by August 15.  

    Without timely disbursement of funding, the risk of disruption to essential educational supports for children grows significantly.

    As one superintendent who completed the survey said, “This isn’t a future problem; it’s happening now. Our budget was set with these funds in mind. Their sudden withholding has thrown us into chaos, forcing drastic measures that will negatively impact every student, classroom, and school in our district. We urgently need these funds released to prevent irreparable harm to our educational programs and ensure our students get the quality education they deserve.” 


  • Navigating back-to-school anxiety: A K-12 success guide

    Navigating back-to-school anxiety: A K-12 success guide

    Key points:

    The anticipation of a new school year brings a complex mix of emotions for both students and teachers in K-12 education. As the 2025-2026 academic year approaches, experiencing anxiety about returning to the classroom is a natural response to change that affects everyone differently.

    From elementary students facing new classroom environments to high school teachers preparing for curriculum changes, these feelings manifest uniquely across age groups. Young children often worry about making new friends or adjusting to new teachers, while older students grapple with academic performance pressures and social dynamics. Teachers face their own challenges, including meeting diverse student needs, implementing new edtech tools and digital resources, and maintaining high academic standards while supporting student well-being.

    Early identification of anxiety symptoms is crucial for both educator and student success. Young children might express anxiety through behavioral changes, such as becoming more clingy or irritable, while older students might demonstrate procrastination or avoidance of school-related topics. Parents and educators should remain vigilant for signs like changes in sleeping patterns and/or eating habits, unusual irritability, or physical complaints. Schools must establish clear protocols for identifying and addressing anxiety-related concerns, including regular check-ins with students and staff and creating established pathways for accessing additional support when needed.

    Building strong support networks within the school community significantly reduces anxiety levels. Schools should foster an environment where students feel comfortable expressing concerns to teachers, counselors, or school psychologists. Regular check-ins, mentor programs, and peer support groups help create a supportive school environment where everyone feels valued and understood. Parent-teacher partnerships are essential for providing consistent support and understanding students’ needs, facilitated through regular communication channels, family engagement events, and resources that help parents support their children’s emotional well-being at home.

    Practical preparation serves as a crucial anxiety-reduction strategy. Teachers can minimize stress by organizing classrooms early, preparing initial lesson plans, and establishing routines before students arrive. Students can ease their transition by visiting the school beforehand, meeting teachers when possible, and organizing supplies. Parents contribute by establishing consistent routines at home, including regular sleep schedules and homework times, several weeks before school starts. Schools support this preparation through orientation events, virtual tours, welcome videos, and sharing detailed information about schedules and procedures well in advance.

    The importance of physical and emotional well-being cannot be overstated in managing school-related anxiety. Schools should prioritize regular physical activity through structured PE classes, recess, or movement breaks during lessons. Teaching age-appropriate stress-management techniques, such as deep breathing exercises for younger students or mindfulness practices for older ones, provides valuable tools for managing anxiety. Schools should implement comprehensive wellness programs addressing nutrition, sleep hygiene, and emotional regulation, while ensuring ready access to counselors and mental health professionals.

    Creating a positive classroom environment proves essential for reducing anxiety levels. Teachers can establish predictable routines, clear expectations, and open communication channels with students and parents. Regular class meetings or discussion times allow students to express concerns and help build community within the classroom. The physical space should consider lighting, noise levels, and seating arrangements that promote comfort and focus. Implementing classroom management strategies that emphasize positive reinforcement and restorative practices rather than punitive measures helps create a safe space where mistakes are viewed as learning opportunities.

    Technology integration requires careful consideration to prevent additional anxiety. Schools should provide adequate training and support for new educational technologies, introducing digital tools gradually while ensuring equitable access and understanding. Regular assessment of technology needs and challenges helps schools address barriers to effective use. Training should encompass basic operational skills, digital citizenship, online safety, and responsible social media use. Clear protocols for technology use and troubleshooting ensure that both students and teachers know where to turn for support when technical issues arise.

    Professional development for teachers should focus on managing both personal and student anxiety through trauma-informed teaching practices and social-emotional learning techniques. Schools must provide regular opportunities for skill enhancement throughout the year, incorporating both formal training sessions and informal peer learning opportunities. Creating professional learning communities allows teachers to share experiences, strategies, and support, while regular supervision and mentoring provide additional support layers.

    Long-term success requires commitment from all stakeholders–including administrators, teachers, support staff, students, and families–working together to create a supportive educational environment where everyone can thrive in the upcoming 2025-2026 school year.
