Category: Artificial Intelligence

  • Transform Your Instructional Coaching with Notebook LM Today


    Jeffrey D. Bradbury

TL;DR – Key Takeaways

    • NotebookLM for Instructional Coaches revolutionizes resource management by allowing coaches to use their specific materials instead of generic AI outputs.
    • The tool helps create professional development materials quickly, enabling coaches to synthesize various sources effortlessly.
    • NotebookLM offers unique features like audio overviews, video explanations, and infographics, enhancing the way coaches present information.
    • Coaches can organize notebooks by purpose, rename their sources for clarity, and customize responses for different audiences.
    • Joining communities like GEG helps coaches share strategies and stay updated on innovative practices using NotebookLM.

    As an instructional coach, you’re constantly juggling multiple responsibilities—supporting teachers, creating professional development materials, organizing resources, and staying current with educational technology. What if there was a tool that could help you synthesize information, create engaging content, and save hours of prep time? Enter NotebookLM, Google’s AI-powered workspace that’s revolutionizing how coaches work with information.

    What Makes NotebookLM Different for Coaches?

    Unlike general AI tools like ChatGPT or Gemini that pull from the entire web, NotebookLM gives you complete control over your sources. You choose exactly what information goes in—whether it’s your district’s strategic plan, professional development materials, curriculum documents, or teacher resources—and NotebookLM works exclusively with that content.

    This is a game-changer for instructional coaches. You’re not getting generic advice or hallucinated information. You’re getting insights, summaries, and resources based on your specific materials, aligned to your district’s goals, and tailored to your teachers’ needs.

    Real-World Applications for Instructional Coaches

    Creating PD Materials in Minutes

    Imagine this scenario: You’ve gathered resources about implementing Google Workspace tools in the classroom. You have PDFs, website links, video tutorials, and Google Docs with implementation guides. Instead of manually synthesizing all this information, you can upload these sources to NotebookLM and ask it to create a newsletter for teachers, generate a quick-start guide, or develop talking points for your next coaching session.

    One coach recently used NotebookLM to record a 40-minute lesson observation, uploaded the audio as a source, and asked it to create professional development slides with detailed presenter notes. The tool generated beautiful, comprehensive slides that captured the key teaching strategies demonstrated in that lesson—all without the coach spending hours creating materials from scratch.

    Building Notebooks for School Leaders

    Several coaches are now creating custom notebooks for their school leaders that include strategic plans, policy documents, and instructional frameworks. School leaders can then interact with these notebooks to get quick answers, generate reports, or explore connections between different initiatives—all while staying grounded in the district’s actual documents.

    Powerful Features That Save Coaches Time

    Audio Overviews (The Podcast Feature)

One of NotebookLM’s most popular features creates AI-generated podcast discussions from your sources. Upload your curriculum materials, coaching protocols, or meeting notes, and NotebookLM will generate a conversational audio overview that makes complex information more digestible. “Deep dive” and “brief” options let you control the length and depth—perfect for sharing with busy teachers who prefer audio learning.

    Video Overviews with Visual Styles

    The newest feature generates explainer videos complete with visuals, making it easier to create engaging PD content. You can choose from multiple visual styles and customize what the video focuses on—ensuring the content stays relevant to your coaching goals rather than pulling in extraneous information.

    Infographics and Slide Decks

    Need to create professional-looking materials quickly? NotebookLM can generate infographics in landscape, portrait, or square formats, and create slide decks in both detailed and presenter modes. The image generation has improved dramatically, producing visuals that look polished and professional—often better than what many of us could create manually in the same timeframe.

    Smart Strategies for Instructional Coaches

    Organize by Purpose

    Should you create one massive notebook with all your coaching resources, or multiple smaller ones? Most coaches find success with focused notebooks organized by purpose—perhaps one for Google Workspace training, another for literacy coaching, and another for new teacher support. This approach allows you to keep sources relevant and responses targeted.

    Rename Your Sources

    When you upload documents, rename them to something meaningful for your audience. Instead of “Google_Docs_Editor_Help_Final_v3.pdf,” rename it to “How to Create a Google Doc for Teachers.” This becomes especially important when sharing notebooks with teachers who need to understand what sources are included.

    Customize for Your Audience

    The new “Configure Chat” feature lets you set how NotebookLM responds. You can create prompts that tell the tool to speak at a second-grade reading level, communicate with teachers who aren’t tech-savvy, or address cabinet-level administrators. This customization ensures the responses match your audience’s needs.

    Share Strategically

    In education domains, you can share notebooks within your district, either giving full access or chat-only access (with Google Workspace for Education Plus). This makes it easy to create resource hubs that teachers can explore independently, reducing the number of repeat questions you field.

    Ready to explore NotebookLM with fellow instructional coaches? Join the Google Educator Group (GEG) for Instructional Coaches—a global community of nearly 500 coaches who share strategies, resources, and support.

    Our community hosts monthly meetings, shares practical demonstrations, and provides ongoing support as you implement new tools and strategies in your coaching practice.

    Getting Started with NotebookLM

    The best way to understand NotebookLM’s potential is to experiment with it. Start small:

    1. Record a coaching conversation or PD session and upload the audio
    2. Gather 3-5 documents on a topic you’re currently coaching on
    3. Upload them to a new notebook and ask NotebookLM to summarize key themes
    4. Try the audio overview feature to see how it synthesizes your sources
    5. Share it with a trusted colleague for feedback

    Remember, this tool continues to evolve rapidly. Features that launched just weeks ago are already more powerful, and new capabilities are added regularly. The key is to start using it, share what works with your coaching community, and stay curious about new possibilities.

    Take Your Coaching Impact Further

    As you explore new tools like NotebookLM to enhance your coaching practice, consider diving deeper into frameworks that amplify your impact. My book, Impact Standards, provides actionable strategies for educators and coaches who want to make a lasting difference in their schools and districts.

    Get Impact Standards

    Want to stay connected and receive regular insights, tools, and strategies for instructional coaching? Subscribe to my newsletter for exclusive tips that will help you continue growing as an educational leader.


    The future of instructional coaching involves smart use of AI tools that amplify—not replace—the human connections at the heart of our work. NotebookLM is one more tool in your coaching toolkit, helping you spend less time on content creation and more time on the relationships and conversations that truly transform teaching and learning.

    Upgrade Your Teaching Toolkit Today

    Get weekly EdTech tips, tool tutorials, and podcast highlights delivered to your inbox. Plus, receive a free chapter from my book Impact Standards when you join.




  • Advocates warn of risks to higher ed data if Education Department is shuttered


    by Jill Barshay, The Hechinger Report
    November 10, 2025

    Even with the government shut down, lots of people are thinking about how to reimagine federal education research. Public comments on how to reform the Institute of Education Sciences (IES), the Education Department’s research and statistics arm, were due on Oct. 15. A total of 434 suggestions were submitted, but no one can read them because the department isn’t allowed to post them publicly until the government reopens. (We know the number because the comment entry page has an automatic counter.)

    A complex numbers game 

    There’s broad agreement across the political spectrum that federal education statistics are essential. Even many critics of the Department of Education want its data collection efforts to survive — just somewhere else. Some have suggested moving the National Center for Education Statistics (NCES) to another agency, such as the Commerce Department, where the U.S. Census Bureau is housed.

    But Diane Cheng, vice president of policy at the Institute for Higher Education Policy, a nonprofit organization that advocates for increasing college access and improving graduation rates, warns that shifting NCES risks the quality and usefulness of higher education data. Any move would have to be done carefully, planning for future interagency coordination, she said.

    “Many of the federal data collections combine data from different sources within ED,” Cheng said, referring to the Education Department. “It has worked well to have everyone within the same agency.”

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    She points to the College Scorecard, the website that lets families compare colleges by cost, student loan debt, graduation rates, and post-college earnings. It merges several data sources, including the Integrated Postsecondary Education Data System (IPEDS), run by NCES, and the National Student Loan Data System, housed in the Office of Federal Student Aid. Several other higher ed data collections on student aid and students’ pathways through college also merge data collected at the statistical unit with student aid figures. Splitting those across different agencies could make such collaboration far more difficult.

    “If those data are split across multiple federal agencies,” Cheng said, “there would likely be more bureaucratic hurdles required to combine the data.”

    Information sharing across federal agencies is notoriously cumbersome, the very problem that led to the creation of the Department of Homeland Security after 9/11.

    Hiring and $4.5 million in fresh research grants

    Even as the Trump administration publicly insists it intends to shutter the Department of Education, it is quietly rebuilding small parts of it behind the scenes.

    In September, the department posted eight new jobs to replace fired staff who oversaw the National Assessment of Educational Progress (NAEP), the biennial test of American students’ achievement. In November, it advertised four more openings for statisticians inside the Federal Student Aid Office. Still, nothing is expected to be quick or smooth. The government shutdown stalled hiring for the NAEP jobs, and now a new Trump administration directive to form hiring committees by Nov. 17 to approve and fill open positions may further delay these hires.

    At the same time, the demolition continues. Less than two weeks after the Oct. 1 government shutdown, 466 additional Education Department employees were terminated — on top of the roughly 2,000 lost since March 2025 through firings and voluntary departures. (The department employed about 4,000 at the start of the Trump administration.) A federal judge temporarily blocked these latest layoffs on Oct. 15.

    Related: Education Department takes a preliminary step toward revamping its research and statistics arm

There are also other small new signs of life. On Sept. 30 — just before the shutdown — the department quietly awarded nine new research and development grants totaling $4.5 million. The grants, listed on the department’s website, are part of a new initiative called the “From Seedlings to Scale Grants Program” (S2S), launched by the Biden administration in August 2024 to test whether the Defense Department’s DARPA-style innovation model could work in education. DARPA, the Defense Advanced Research Projects Agency, invests in new technologies for national security. Its most celebrated project became the basis for the internet.

    Each new project, mostly focused on AI-driven personalized learning, received $500,000 to produce early evidence of effectiveness. Recipients include universities, research organizations and ed tech firms. Projects that show promise could be eligible for future funding to scale up with more students.

    According to a person familiar with the program who spoke on background, the nine projects had been selected before President Donald Trump took office, but the formal awards were delayed amid the department’s upheaval. The Institute of Education Sciences — which lost roughly 90 percent of its staff — was one of the hardest hit divisions.

    Granted, $4.5 million is a rounding error compared with IES’s official annual budget of $800 million. Still, these are believed to be the first new federal education research grants of the Trump era and a faint signal that Washington may not be abandoning education innovation altogether.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about risks to federal education data was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

This article first appeared on The Hechinger Report (https://hechingerreport.org/proof-points-risks-higher-ed-data/) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).



  • The new AI tools are fast but can’t replace the judgment, care and cultural knowledge teachers bring to the table


    by Tanishia Lavette Williams, The Hechinger Report
    November 4, 2025

    The year I co-taught world history and English language arts with two colleagues, we were tasked with telling the story of the world in 180 days to about 120 ninth graders. We invited students to consider how texts and histories speak to one another: “The Analects” as imperial governance, “Sundiata” as Mali’s political memory, “Julius Caesar” as a window into the unraveling of a republic. 

    By winter, our students had given us nicknames. Some days, we were a triumvirate. Some days, we were Cerberus, the three-headed hound of Hades. It was a joke, but it held a deeper meaning. Our students were learning to make connections by weaving us into the histories they studied. They were building a worldview, and they saw themselves in it. 

    Designed to foster critical thinking, this teaching was deeply human. It involved combing through texts for missing voices, adapting lessons to reflect the interests of the students in front of us and trusting that learning, like understanding, unfolds slowly. That labor can’t be optimized for efficiency. 

    Yet, today, there’s a growing push to teach faster. Thousands of New York teachers are being trained to use AI tools for lesson planning, part of a $23 million initiative backed by OpenAI, Microsoft and Anthropic. The program promises to reduce teacher burnout and streamline planning. At the same time, a new private school in Manhattan is touting an AI-driven model that “speed-teaches” core subjects in just two hours of instruction each day while deliberately avoiding politically controversial issues. 

    Marketed as innovation, this stripped-down vision of education treats learning as a technical output rather than as a human process in which students ask hard questions and teachers cultivate the critical thinking that fuels curiosity. A recent analysis of AI-generated civics lesson plans found that they consistently lacked multicultural content and prompts for critical thinking. These AI tools are fast, but shallow. They fail to capture the nuance, care and complexity that deep learning demands. 

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.  

    When I was a teacher, I often reviewed lesson plans to help colleagues refine their teaching practices. Later, as a principal in Washington, D.C., and New York City, I came to understand that lesson plans, the documents connecting curriculum and achievement, were among the few steady examples of classroom practice. Despite their importance, lesson plans were rarely evaluated for their effectiveness.  

    When I wrote my dissertation, after 20 years of working in schools, lesson plan analysis was a core part of my research. Analyzing plans across multiple schools, I found that the activities and tasks included in lesson plans were reliable indicators of the depth of knowledge teachers required and, by extension, the limits of what students were asked to learn. 

    Reviewing hundreds of plans made clear that most lessons rarely offered more than a single dominant voice — and thus confined both what counted as knowledge and what qualified as achievement. Shifting plans toward deeper, more inclusive student learning required deliberate effort to incorporate primary sources, weave together multiple narratives and design tasks that push students beyond mere recall. 

     I also found that creating the conditions for such learning takes time. There is no substitute for that. Where this work took hold, students were making meaning, seeing patterns, asking why and finding themselves in the story. 

    That’s the transformation AI can’t deliver. When curriculum tools are trained on the same data that has long omitted perspectives, they don’t correct bias; they reproduce it. The developers of ChatGPT acknowledge that the model is “skewed toward Western views and performs best in English” and warn educators to review its content carefully for stereotypes and bias. Those same distortions appear at the systems level — a 2025 study in the World Journal of Advanced Research and Reviews found that biased educational algorithms can shape students’ educational paths and create new structural barriers. 

    Ask an AI tool for a lesson on westward expansion, and you’ll get a tidy narrative about pioneers and Manifest Destiny. Request a unit on the Civil Rights Movement and you may get a few lines on Martin Luther King Jr., but hardly a word about Ella Baker, Fannie Lou Hamer or the grassroots organizers who made the movement possible. Native nations, meanwhile, are reduced to footnotes or omitted altogether. 

    Curriculum redlining — the systematic exclusion or downplaying of entire histories, perspectives and communities — has already been embedded in educational materials for generations. So what happens when “efficiency” becomes the goal? Whose histories are deemed too complex, too political or too inconvenient to make the cut? 

    Related: What aspects of teaching should remain human? 

    None of this is theoretical. It’s already happening in classrooms across the country. Educators are under pressure to teach more with less: less time, fewer resources, narrower guardrails. AI promises relief but overlooks profound ethical questions. 

    Students don’t benefit from autogenerated worksheets. They benefit from lessons that challenge them, invite them to wrestle with complexity and help them connect learning to the world around them. That requires deliberate planning and professional judgment from a human who views education as a mechanism to spark inquiry. 

    Recently, I asked my students at Brandeis University to use AI to generate a list of individuals who embody concepts such as beauty, knowledge and leadership. The results, overwhelmingly white, male and Western, mirrored what is pervasive in textbooks.  

    My students responded with sharp analysis. One student created color palettes to demonstrate the narrow scope of skin tones generated by AI. Another student developed a “Missing Gender” summary to highlight omissions. It was a clear reminder that students are ready to think critically but require opportunities to do so.  

    AI can only do what it’s programmed to do, which means it draws from existing, stratified information and lags behind new paradigms. That makes it both backward-looking and vulnerable to reproducing bias.  

    Teaching with humanity, by contrast, requires judgment, care and cultural knowledge. These are qualities no algorithm can automate. When we surrender lesson planning to AI, we don’t just lose stories; we also lose the opportunity to engage with them. We lose the critical habits of inquiry and connection that teaching is meant to foster. 

    Tanishia Lavette Williams is the inaugural education stratification postdoctoral fellow at the Institute on Race, Power and Political Economy, a Kay fellow at Brandeis University and a visiting scholar at Harvard University. 

    Contact the opinion editor at [email protected].  

This story about AI and teaching was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

This article first appeared on The Hechinger Report (https://hechingerreport.org/opinion-the-new-ai-tools-are-fast-but-cant-replace-the-judgment-care-and-cultural-knowledge-teachers-bring-to-the-table/) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).



  • Can journalists coexist with AI?


But then the same thing could be happening now to the heads of news organizations, who then pull back their journalists from various news beats. Since those news organizations are the ones who report the news, would we ever know that was happening?

    The reality is that artificial intelligence could kill journalism without replacing it, leaving people without information they can rely on. When there are no reliable, credible sources of news, rumors spread and take on a life of their own. People panic and riot and revolt based on fears born from misinformation. Lawlessness prevails.

    Do algorithms have all the answers?

Right now, entire news organizations are disappearing. The Brookings report found that last year some 2.5 local news outlets folded every week in the United States. Data collected by researcher Amy Watson in August 2023 found that in the UK, more news outlets closed than were launched in every year of the 10-year period ending in 2022.

    CNN reported in June 2023 that Germany’s biggest news organization, Bild, was laying off 20% of its employees, replacing them with artificial intelligence.

But ChatGPT had this to say: “Rather than viewing AI as a threat, journalists can leverage technology to enhance their work. Automated tools can assist with tasks such as data analysis, fact-checking and content distribution, freeing up time for reporters to focus on more complex and impactful storytelling.”

    One of News Decoder’s many human correspondents, Tom Heneghan, spoke to students on this topic in November and expressed some optimism.

    “It will take away a lot of the drudge work, the donkey work that journalists have to do,” Heneghan said. “It’s amazing how much work is done by somebody at a much higher level than what is actually needed.”

    Working with artificial intelligence

Once those tasks are automated, the journalist can pursue more substantive stories, Heneghan said. Plus, the evolving sophistication of things like deepfake technology will make tasks like fact-checking and verification more important.

“That’s going to come up more and more,” Heneghan said. “What artificial intelligence takes away may actually create some other jobs.”

    So here’s the thing: We wouldn’t have to fear AI eliminating the crucial role of journalism — informing the public with accurate information, reporting from multiple perspectives so that minority voices are heard and uncovering corruption, exploitation and oppression — if the businesses that controlled the purse strings of journalism were committed to its public service functions.

    I then asked ChatGPT this question: Are media corporations driven solely by money?

It concluded: “While financial considerations undoubtedly influence the actions of media corporations, they are not the sole driving force behind their decisions.” It went on: “A complex interplay of financial goals, societal responsibilities and individual values shapes the behavior of these entities. Understanding this multifaceted nature is essential for accurately assessing the role and impact of media corporations in modern society.”

I found that reassuring, until I glanced at the disclaimer at the bottom of the AI’s page:

    ChatGPT can make mistakes. Consider checking important information.


    Questions to consider:

    1. What is an essential role of journalism in society?

    2. What did both the ChatGPT app and the human correspondent seem to agree on in this article?

3. What, if anything, worries you about artificial intelligence and how you get your information?



  • The Nature of Expertise in the Age of AI


For several years, I’ve been providing content and student support for the University of Kentucky’s Changemakers program, designed and managed by the Center for Next Generation Leadership.  It’s an online, one-year continuing education option where Kentucky educators can earn a rank change for successful completion.  I appreciate that Next Gen believes in “parallel pedagogy”; while it provides valuable resources and materials to be read, viewed, and reflected on, it also requires the program’s students to complete meaningful transfer tasks, pursue an action research project, and participate in a final defense of learning that demonstrates how transformative practices are happening in the Changemaker’s own classroom.

This professional learning pathway to rank change involves mostly asynchronous work through online modules focused on the awareness and implementation of what Kentucky calls “vibrant learning” in the classroom, with module topics such as Learner Agency and Inquiry-Based Learning.  The content below began as my contribution to the latter module, but I’ve expanded it and added more detail for this blog entry.

    Inquiry-based learning is a powerful pedagogy.  For students, it can be as extensive as working on a multi-week project-based learning unit, or as simple as asking more high-quality questions in class.  Inquiry comes from curiosity, and the attempt to answer challenging questions and solve problems that have no obvious solution.

Complicated problems require help.  Two heads are better than one, after all.  With this in mind, seeking community partners can make perfect sense.  (As an aside, this teacher guide can help shape your conversations when you attempt to bring the community into your classroom; while it mentions PBL, the strategies can help for any scale of project or problem you want your students to tackle.)
These community partners or “outside experts” can ground what may seem abstract in real-world problems, and even motivate students to “dig in” when the work gets difficult, to echo the title of this excellent Next Generation Learning Challenges article.  But before we consider how bringing in experts from outside your classroom can increase vibrant learning, let’s first discuss inside experts, and even the idea of “expertise.”

Keep in mind that traditionally, and for decades (centuries!), you have been considered the expert in the room – of your content, of your pedagogy, of your ability to manage your classroom.  The professionalism required of the vocation, not to mention the professional standards boards that grant, review, and in some cases revoke certification to teach, adds to the foundational belief that a teacher has earned their well-deserved “expert” credentials.

But you are usually one human in a room of thirty.  Leaning into the expertise of your students is, at its most basic level, a strategy for smartly leveraging your numbers.  Viewing your classroom through an asset mindset, we can see students as learners who bring their own powerful POVs, which can enrich your culture and community.  For example, with the right scaffolding, structures, and practice, your students are capable of providing peer-to-peer feedback.

However, some of our stumbling blocks in education are self-induced, born out of a desire to remain humble.  For example, calling yourself or anyone else an “expert” can sound or feel lofty and divisive.  Educators are sometimes their own worst critics, and may wonder aloud what right they have to declare themselves the expert on such-and-such.  As for students, they may view their own bountiful and beautiful knowledge with a shrug of their shoulders.  If someone in middle school knows how to disassemble and reassemble a car engine, it simply reflects their personal interests, or the fact that their mother loves hot rods.  They are told early and often in traditional school that such knowledge isn’t “book learnin’.”  Loving hot rods or diesel mechanics doesn’t matter, thinks the student, because it’s not a part of my third period class, and it won’t show up on my multiple choice test on Friday.

    Therefore, let’s consider broadening our definition of “expert,” and look more at the first five letters of the word.  What we really hope to provide, increase, articulate and bring into a classroom is experience.  From another person’s POV, your experience may be long and well-traveled (which can make you “more experienced”), or simply a road I’ve never traveled (which makes your experience a novel one, compared to mine).  Viewing expertise in this inclusive light opens up what an “expert” can be.  We can see an expert as simply (but powerfully!) a person with a different, valued perspective.  The key word is “valued.”  You may have a different POV, you may have twelve degrees on the wall, but if I don’t care about you, and especially if you don’t care about me, your “expertise” won’t matter much.  We can also see an expert as a person who is recognized as skillfully applying knowledge.  The key word is “applying.”  Remember that old chestnut that answers the question, “What’s the difference between intelligence and wisdom?”  Intelligence is knowing that a tomato is a fruit; wisdom is knowing never to put a tomato in a fruit salad.  Expertise that feels too detached and theoretical, or a bunch of random facts you could Google anyway, won’t personally matter very much to your learners.

    With our new, more expansive definition of “expert” in hand, how do experts from outside a classroom still have potential to help?  Vibrant learning is memorable and authentic, and community partners can be both.  A parent who is a car mechanic might come in to demonstrate the torque produced by automobile engines.  Not only does that make abstract laws of motion and physics seem more relevant to students, it also has a far greater chance of making tonight’s dinner conversation when the student is asked, “What did you do at school today?”  By permitting alternative voices into your learning space, you open up different perspectives and bring the outside community inside your classroom community.  Outside partners could also provide feedback to students as they ideate and prototype a solution in a PBL, or serve as a panel audience for defenses of learning.  Of course, in a world full of wondrous technology, we are not limited to in-person guest speakers.  Someone from a European museum might Zoom in for a mini-lecture and a Q & A.  More than twenty billion videos have been uploaded to YouTube, so with the right discernment and curation skills, an expert is just a click away.

    You might have noticed that artificial intelligence wasn’t mentioned above as a potential “outside expert.”  Going back to our expanded definition, it certainly can seem to check the same boxes.  AI can offer a different perspective, powered by code and fueled by billions of artifacts from our culture and knowledge.  Is that perspective valued, or valuable?  It might be, although AI is not always accurate, unbiased, or trustworthy; then again, the same can be said of Wikipedia entries created by humans, or a theory from a popular scientist of the past that has since been discredited.  Discernment and critical thinking are key, particularly from the teacher, who should be monitoring, filtering, and observing the AI usage (and teaching students to be critical AI users as well).  AI can also apply knowledge scraped from terabytes of the Internet within (milli)seconds of being prompted.  Is that knowledge skillfully applied?  Given a teacher’s uploaded rubric alongside the first draft of a student’s essay (being mindful of your platform’s privacy protections, of course), or the public domain text of an author, AI could provide nuanced feedback on student writing or pretend to be a character in a book for a fascinating interactive conversation.  But some of the proficiency of AI’s application will depend on qualitative measures: the rigor of the rubric you uploaded, the veracity and bias of the knowledge it pulled from its database, or the depth of the skills the AI has been taught to emulate.  And again, AI hallucinations can happen.

    What will hopefully emerge, as we become more skilled and critical users of AI, is that our ethical priorities will shape the machines instead of letting the machines shape us.  A promising example is the “Dimensions in Testimony” website, a partnership between the University of Southern California and the Shoah Foundation.  The site began by digitizing recorded interviews of actual survivors of the Holocaust and the Nanjing Massacre.   Next, an interviewee has a separate page where, via a looping video, they seem to sit and wait for your questions.  

    When prompted, a short video plays in which the interviewee “answers” your question, creating a virtual conversation.  You can ask via your microphone or by typing.  What may seem miraculous is really just clever programming: the interviews were transcribed and time-coded, so the system simply takes your prompt, scans the transcript, finds the clip that seems to best answer your question, and plays from that time-stamped portion of the interview.  Still, you can see the power of providing such “expertise” to students, giving them a chance to practice empathy as well as their questioning/prompting skills.  (Note also the dignity and care given to the subject matter by USC and the Shoah Foundation.  The interviews were real, conducted with genuine survivors of genocide and the Holocaust, not actors.  While you technically could have AI “pretend” to be a survivor of a war crime as a customized chatbot, or have students interact with some kind of fictionalized digital Holocaust survivor avatar, there are many reasons why this would be an unethical and inappropriate use of such technology.)
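    The retrieval step described above can be sketched in a few lines. What follows is an illustrative toy version, not USC's actual implementation: each transcript segment carries its start and end times, and the question is matched to a segment by simple word overlap (a production system would use far more sophisticated semantic matching).

```python
import re

# Toy sketch of time-coded transcript retrieval (NOT the real system):
# score each segment by word overlap with the question and return the
# best clip, whose timestamps tell the video player where to seek.

def best_clip(question: str, segments: list[dict]) -> dict:
    """Each segment: {"start": seconds, "end": seconds, "text": transcript}."""
    q_words = set(re.findall(r"\w+", question.lower()))

    def overlap(seg: dict) -> int:
        return len(q_words & set(re.findall(r"\w+", seg["text"].lower())))

    return max(segments, key=overlap)

segments = [
    {"start": 12.0, "end": 45.0, "text": "I was born in a small village."},
    {"start": 80.0, "end": 130.0, "text": "We hid in the forest for months."},
]
clip = best_clip("Where were you born?", segments)
# The player would then seek to clip["start"] and stop at clip["end"].
```

    The hypothetical `best_clip` helper is only meant to show why the result feels conversational without any generative AI: the answers are pre-recorded, and the "intelligence" is in picking the right timestamp.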

    As you ponder ways to increase and improve inquiry, reflect on the nature of “expertise,” both inside and outside of your own four walls.  As you do so, you can cautiously consider how AI can be one of many types of “outside experts” you can bring into your classroom.

    For more information on Changemakers, be sure to check out this page for the latest links to sign up for updates and apply to join the next cohort.

    Source link

  • Students Love AI Chatbots — No, Really – The 74


    School (in)Security is our biweekly briefing on the latest school safety news, vetted by Mark Keierleber. Subscribe here.

    The robots have taken over.

    New research suggests that a majority of students use chatbots like ChatGPT for just about everything at school. To write essays. To solve complicated math problems. To find love. 

    Wait, what? 

    Nearly a fifth of students said they or a friend have used artificial intelligence chatbots to form romantic relationships, according to a new survey by the nonprofit Center for Democracy & Technology. Some 42% said they or someone they know used the chatbots for mental health support, as an escape from real life or as a friend.

    Eighty-six percent of students say they’ve used artificial intelligence chatbots in the past academic year — half to help with schoolwork.

    The tech-enabled convenience, researchers conclude, doesn’t come without significant risks for young people. Namely, as AI proliferates in schools — with help from the federal government and a zealous tech industry — on a promise to improve student outcomes, they warn that young people could grow socially and emotionally disconnected from the humans in their lives. 


    In the news

    The latest in Trump’s immigration crackdown: The survey featured above, which quizzed students, teachers and parents, also offers startling findings on immigration enforcement in schools: 
    While more than a quarter of educators said their school collects information about whether a student is undocumented, 17% said their district shares records — including grades and disciplinary information — with immigration enforcement. 

    In the last school year, 13% of teachers said a staff member at their school reported a student or parent to immigration enforcement of their own accord. | Center for Democracy & Technology

    People hold signs as New York City officials speak at a press conference calling for the release of high school student Mamadou Mouctar Diallo outside of the Tweed Courthouse on Aug. 14 in New York City. (Michael M. Santiago/Getty Images)
    • Call for answers: In the wake of immigration enforcement that’s ensnared children, New York congressional Democrats are demanding the feds release information about the welfare of students held in detention, my colleague Jo Napolitano reports. | The 74
    • A 13-year-old boy from Brazil, who has lived in a Boston suburb since 2021 with a pending asylum application, was scooped up by Immigration and Customs Enforcement after local police arrested him on a “credible tip” accusing him of making “a violent threat” against a classmate at school. The boy’s mother said her son wound up in a Virginia detention facility and was “desperate, saying ICE had taken him.” | CNN
    • Chicago teenagers are among a group of activists patrolling the city’s neighborhoods to monitor ICE’s deployment to the city and help migrants avoid arrest. | NPR
    • Immigration agents detained a Chicago Public Schools vendor employee outside a school, prompting educators to move physical education classes indoors out of an “abundance of caution.” | Chicago Sun-Times
    • A Des Moines, Iowa, high schooler was detained by ICE during a routine immigration check-in, placed in a Louisiana detention center and deported to Central America fewer than two weeks later. | Des Moines Register
    • A 15-year-old boy with disabilities — who was handcuffed outside a Los Angeles high school after immigration agents mistook him for a suspect — is among more than 170 U.S. citizens, including nearly 20 children, who have been detained during the first nine months of the president’s immigration push. | PBS

    Trigger warning: After a Washington state teenager hanged himself on camera, the 13-year-old boy’s parents set out to find out what motivated their child to livestream his suicide on Instagram while online users watched. Evidence pointed to a sadistic online group that relies on torment, blackmail and coercion to weed out teens they deem weak. | The Washington Post

    Civil rights advocates in New York are sounding the alarm over a Long Island school district’s new AI-powered surveillance system, which includes round-the-clock audio monitoring with in-classroom microphones. | StateScoop

    A federal judge has ordered the Department of Defense to restock hundreds of books after a lawsuit alleged students were banned from checking out texts related to race and gender from school libraries on military bases in violation of the First Amendment. | Military.com

    More than 600 armed volunteers in Utah have been approved to patrol campuses across the state to comply with a new law requiring armed security. Called school guardians, the volunteers are existing school employees who agree to be trained by local law enforcement and carry guns on campus. | KUER


    No “Jackass”: Instagram announced new PG-13 content features that restrict teenagers from viewing posts that contain sex, drugs and “risky stunts.” | The Associated Press

    A Tuscaloosa, Alabama, school resource officer restrained and handcuffed a county commissioner after a spat at an elementary school awards program. | Tuscaloosa News

    The number of guns found at Minnesota schools has increased nearly threefold in the last several years, new state data show. | Axios

    More than half of Florida’s school districts received bomb threats on a single evening last week. The threats weren’t credible, officials said, and appeared to be “part of a hoax intended to solicit money.” | News 6


    ICYMI @The74

    RAPID Survey Project, Stanford Center on Early Childhood

    Survey: Nearly Half of Families with Young Kids Struggling to Meet Basic Needs

    Education Department Leans on Right-Wing Allies to Push Civil Rights Probes

    OPINION: To Combat Polarization and Political Violence, Let’s Connect Students Nationwide



    Thanks for reading,
    —Marz



    Source link

  • Can AI Keep Students Motivated, Or Does it Do the Opposite? – The 74


    Imagine a student using a writing assistant powered by a generative AI chatbot. As the bot serves up practical suggestions and encouragement, insights come more easily, drafts polish up quickly and feedback loops feel immediate. It can be energizing. But when that AI support is removed, some students report feeling less confident or less willing to engage.

    These outcomes raise the question: Can AI tools genuinely boost student motivation? And what conditions can make or break that boost?

    As AI tools become more common in classroom settings, the answers to these questions matter a lot. While general-use tools such as ChatGPT or Claude remain popular, more and more students are encountering AI tools that are purpose-built to support learning, such as Khan Academy’s Khanmigo, which personalizes lessons. Others, such as ALEKS, provide adaptive feedback. Both tools adjust to a learner’s level and highlight progress over time, which helps students feel capable and see improvement. But there are still many unknowns about the long-term effects of these tools on learners’ progress, an issue I continue to study as an educational psychologist.

    What the evidence shows so far

    Recent studies indicate that AI can boost motivation, at least for certain groups, when deployed under the right conditions. A 2025 experiment with university students showed that when AI tools delivered a high-quality performance and allowed meaningful interaction, students’ motivation and their confidence in being able to complete a task – known as self-efficacy – increased.

    For foreign language learners, a 2025 study found that university students using AI-driven personalized systems took more pleasure in learning and had less anxiety and more self-efficacy compared with those using traditional methods. A recent cross-cultural analysis with participants from Egypt, Saudi Arabia, Spain and Poland who were studying diverse majors suggested that positive motivational effects are strongest when tools prioritize autonomy, self-direction and critical thinking. These individual findings align with a broader, systematic review of generative AI tools that found positive effects on student motivation and engagement across cognitive, emotional and behavioral dimensions.

    A forthcoming meta-analysis from my team at the University of Alabama, which synthesized 71 studies, echoed these patterns. We found that generative AI tools on average produce moderate positive effects on motivation and engagement. The impact is larger when tools are used consistently over time rather than in one-off trials. Positive effects were also seen when teachers provide scaffolding, when students maintain agency in how they use the tool, and when the output quality is reliable.

    But there are caveats. More than 50 of the studies we reviewed did not draw on a clear theoretical framework of motivation, and some used methods that we found were weak or inappropriate. This raises concerns about the quality of the evidence and underscores how much more careful research is needed before one can say with confidence that AI nurtures students’ intrinsic motivation rather than just making tasks easier in the moment.

    When AI backfires

    There is also research that paints a more sobering picture. A large study of more than 3,500 participants found that while human–AI collaboration improved task performance, it reduced intrinsic motivation once the AI was removed. Students reported more boredom and less satisfaction, suggesting that overreliance on AI can erode confidence in their own abilities.

    Another study suggested that while learning achievement often rises with the use of AI tools, increases in motivation are smaller, inconsistent or short-lived. Quality matters as much as quantity. When AI delivers inaccurate results, or when students feel they have little control over how it is used, motivation quickly erodes. Confidence drops, engagement fades and students can begin to see the tool as a crutch rather than a support. And because there are not many long-term studies in this field, we still do not know whether AI can truly sustain motivation over time, or whether its benefits fade once the novelty wears off.

    Not all AI tools work the same way

    The impact of AI on student motivation is not one-size-fits-all. Our team’s meta-analysis shows that, on average, AI tools do have a positive effect, but the size of that effect depends on how and where they are used. When students work with AI regularly over time, when teachers guide them in using it thoughtfully, and when students feel in control of the process, the motivational benefits are much stronger.

    We also saw differences across settings. College students seemed to gain more than younger learners, STEM and writing courses tended to benefit more than other subjects, and tools designed to give feedback or tutoring support outperformed those that simply generated content.

    There is also evidence that general-use tools like ChatGPT or Claude do not reliably promote intrinsic motivation or deeper engagement with content, compared to learning-specific platforms such as ALEKS and Khanmigo, which are more effective at supporting persistence and self-efficacy. However, these tools often come with subscription or licensing costs. This raises questions of equity, since the students who could benefit most from motivational support may also be the least likely to afford it.

    These and other recent findings should be seen as only a starting point. Because AI is so new and is changing so quickly, what we know today may not hold true tomorrow. In a paper titled The Death and Rebirth of Research in Education in the Age of AI, the authors argue that the speed of technological change makes traditional studies outdated before they are even published. At the same time, AI opens the door to new ways of studying learning that are more participatory, flexible and imaginative. Taken together, the data and the critiques point to the same lesson: Context, quality and agency matter just as much as the technology itself.

    Why it matters for all of us

    The lessons from this growing body of research are straightforward. The presence of AI does not guarantee higher motivation, but it can make a difference if tools are designed and used with care and understanding of students’ needs. When it is used thoughtfully, in ways that strengthen students’ sense of competence, autonomy and connection to others, it can be a powerful ally in learning.

    But without those safeguards, the short-term boost in performance could come at a steep cost. Over time, there is the risk of weakening the very qualities that matter most – motivation, persistence, critical thinking and the uniquely human capacities that no machine can replace.

    For teachers, this means that while AI may prove a useful partner in learning, it should never serve as a stand-in for genuine instruction. For parents, it means paying attention to how children use AI at home, noticing whether they are exploring, practicing and building skills or simply leaning on it to finish tasks. For policymakers and technology developers, it means creating systems that support student agency, provide reliable feedback and avoid encouraging overreliance. And for students themselves, it is a reminder that AI can be a tool for growth, but only when paired with their own effort and curiosity.

    Regardless of technology, students need to feel capable, autonomous and connected. Without these basic psychological needs in place, their sense of motivation will falter – with or without AI.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Source link

  • How UNE trained an AI-literate workforce – Campus Review


    Almost all employees at the University of New England (UNE) use AI each day to augment tasks, even as the wider sector is slow to adopt the technology across its workforce.


    Source link

  • A researcher’s view on using AI to become a better writer


    Writing can be hard, equal parts heavy lifting and drudgery. No wonder so many students are turning to the time-saving allure of ChatGPT, which can crank out entire papers in seconds. It rescues them from procrastination jams and dreaded all-nighters, magically freeing up more time for other pursuits, like, say … doomscrolling.

    Of course, no one learns to be a better writer when someone else (or some AI bot) is doing the work for them. The question is whether chatbots can morph into decent writing teachers or coaches that students actually want to consult to improve their writing, and not just use for shortcuts.

    Maybe.

    Jennifer Meyer, an assistant professor at the University of Vienna in Austria, has been studying how AI bots can be used to improve student writing for several years. In an interview, she explained why she is cautious about the ability of AI to make us better writers and is still testing how to use the new technology effectively.

    All in the timing 

    Meyer says that just because ChatGPT is available 24/7 doesn’t mean students should consult it at the start of the writing process. Instead, Meyer believes that students would generally learn more if they wrote a first draft on their own. 

    That’s when AI could be most helpful, she thinks. With some prompting, a chatbot could provide immediate writing feedback targeted to each student’s needs. One student might need to practice writing shorter sentences. Another might be struggling with story structure and outlining. AI could theoretically meet an entire classroom’s individual needs faster than a human teacher. 

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    In Meyer’s experiments, she inserted AI only after the first draft was done as part of the revision process. In a study published in 2024, she randomly assigned 200 German high school students to receive AI feedback after writing a draft of an essay in English. Their revised essays were stronger than those of 250 students who were also told to revise, but didn’t get help from AI. 

    In surveys, those with AI feedback also said they felt more motivated to rewrite than those who didn’t get feedback. That motivation is critical. Often students aren’t in the mood to rewrite, and without revisions, students can’t become better writers.

    Meyer doesn’t consider her experiment proof that AI is a great writing teacher. She didn’t compare it with how student writing improved after human feedback. Her experiment compared only AI feedback with no feedback. 

    Most importantly, one dose of AI writing feedback wasn’t enough to elevate students’ writing skills. On a second, fresh essay topic, the students who had previously received AI feedback didn’t write any better than the students who hadn’t been helped by AI.

    Related: AI writing feedback ‘better than I thought,’ top researcher says

    It’s unclear how many rounds of AI feedback it would take to boost a student’s writing skills more permanently, not just help revise the essay at hand. 

    And Meyer doesn’t know whether a student would want to keep discussing writing with an AI bot over and over again. Maybe students were willing to engage with it in this experiment because it was a novelty, but could soon tire of it. That’s next on Meyer’s research agenda.

    A viral MIT study

    A much smaller MIT study published earlier this year echoes Meyer’s theory. “Your Brain on ChatGPT” went viral because it seemed to say that using ChatGPT to help write an essay made students’ brains less engaged. Researchers found that students who wrote an essay without any online tools had stronger brain connectivity and activity than students who used AI or consulted Google to search for source materials. (Using Google while writing wasn’t nearly as bad for the brain as AI.) 

    Although those results made headlines, there was more to the experiment. The students who initially wrote an essay on their own were later given ChatGPT to help improve their essays. That switch to ChatGPT boosted brain activity, in contrast to what the neuroscientists found during the initial writing process. 

    Related: University students offload critical thinking, other hard work to AI

    These studies add to the evidence that delaying AI a bit, after some initial thinking and drafting, could be a sweet spot in learning. That’s something researchers need to test more. 

    Still, Meyer remains concerned about giving AI tools to very weak writers and to young children who haven’t developed basic writing skills. “This could be a real problem,” said Meyer. “It could be detrimental to use these tools too early.”

    Cheating your way to learning?

    Meyer doesn’t think it’s always a bad idea for students to ask ChatGPT to do the writing for them. 

    Just as young artists learn to paint by copying masterpieces in museums, students might learn to write better by copying good writing. (The late great New Yorker editor John Bennet taught Jill to write this way. He called it “copy work” and he encouraged his journalism students to do it every week by copying longhand the words of legendary writers, not AI.)

    Meyer suggests that students ask ChatGPT to write a sample essay that meets their teacher’s assignment and grading criteria. The next step is key. If students pretend it’s their own piece and submit it, that’s cheating. They’ve also offloaded cognitive work to technology and haven’t learned anything.

    Related: AI essay grading is already as ‘good as an overburdened’ teacher, but researchers say it needs more work

    But the AI essay can be an effective teaching tool, in theory, if students study the arguments, organizational structure, sentence construction and vocabulary before writing a new draft in their own words. Ideally, the next assignment should be better if students have learned through that analysis and internalized the style and techniques of the model essay, Meyer said. 

    “My hypothesis would be as long as there’s cognitive effort with it, as long as there’s a lot of time on task and like critical thinking about the output, then it should be fine,” said Meyer.

    Reconsidering praise

    Everyone likes a compliment. But too much praise can drown learning just as too much water can keep flowers from blooming.  

    ChatGPT has a tendency to pour the praise on thick and often begins with banal flattery, like “Great job!” even when a student’s writing needs a lot of work. In Meyer’s test of whether AI feedback can improve students’ writing, she intentionally told ChatGPT not to start with praise and instead go straight to constructive criticism.
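    A prompt in that spirit might look like the following. This is a hypothetical sketch only: Meyer's exact instructions are not reproduced here, and the function name and wording are illustrative.

```python
# Hypothetical sketch of a "no opening praise" feedback prompt, in the
# spirit of Meyer's setup (her actual study prompt is not shown here).

def build_feedback_prompt(essay: str, criteria: str) -> str:
    """Assemble the instruction block a teacher might send to a chatbot."""
    return (
        "You are a writing coach. Do not open with praise or general "
        "compliments. Begin directly with specific, constructive criticism "
        "tied to the grading criteria, then list concrete revisions.\n\n"
        f"Grading criteria:\n{criteria}\n\n"
        f"Student draft:\n{essay}"
    )

prompt = build_feedback_prompt("My first draft...", "Clarity; structure; evidence.")
```

    The design choice mirrors the research finding: by forbidding the opening compliment, the feedback signals that revision is still needed rather than implying the draft is already good enough.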

    Her parsimonious approach to praise was inspired by a 2023 writing study about what motivates students to revise. The study found that when teachers started off with general praise, students were left with the false impression that their work was already good enough so they didn’t put in the extra effort to rewrite.

    Related: Asian American students lose more points in an AI essay grading study — but researchers don’t know why

    In Meyer’s experiment, the praise-free feedback was effective in getting students to revise and improve their essays. But she didn’t set up a direct competition between the two approaches — praise-free vs. praise-full — so we don’t know for sure which is more effective when students are interacting with AI.

    Being stingy with praise rubs real teachers the wrong way. After Meyer removed praise from the feedback, teachers told her they wanted to restore it. “They wondered about why the feedback was so negative,” Meyer said. “That’s not how they would do it.”

    Meyer and other researchers may one day solve the puzzle of how to turn AI chatbots into great writing coaches. But whether students will have the willpower or desire to forgo an instantly written essay is another matter. As long as ChatGPT continues to allow students to take the easy way out, it’s human nature to do so. 

    Shirley Liu is a graduate student in education at Northwestern University. Liu reported and wrote this story along with The Hechinger Report’s Jill Barshay.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about using AI to become a better writer was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.

    Source link

  • If we are going to build AI literacy into every level of learning, we must be able to measure it


    Everywhere you look, someone is telling students and workers to “learn AI.” 

    It’s become the go-to advice for staying employable, relevant and prepared for the future. But here’s the problem: While definitions of artificial intelligence literacy are starting to emerge, we still lack a consistent, measurable framework to know whether someone is truly ready to use AI effectively and responsibly. 

    And that is becoming a serious issue for education and workforce systems already being reshaped by AI. Schools and colleges are redesigning their entire curriculums. Companies are rewriting job descriptions. States are launching AI-focused initiatives.  

    Yet we’re missing a foundational step: agreeing not only on what we mean by AI literacy, but on how we assess it in practice. 

    Two major recent developments underscore why this step matters, and why it is important that we find a way to take it before urging students to use AI. First, the U.S. Department of Education released its proposed priorities for advancing AI in education, guidance that will ultimately shape how federal grants will support K-12 and higher education. For the first time, we now have a proposed federal definition of AI literacy: the technical knowledge, durable skills and future-ready attitudes required to thrive in a world influenced by AI. Such literacy will enable learners to engage and create with, manage and design AI, while critically evaluating its benefits, risks and implications. 

    Second, we now have the White House’s American AI Action Plan, a broader national strategy aimed at strengthening the country’s leadership in artificial intelligence. Education and workforce development are central to the plan. 

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education. 

What both efforts share is a recognition that AI is not just a technological shift; it's a human one. In many ways, the most important AI literacy skills are not about AI itself, but about the human capacities needed to use AI wisely. 

    Sadly, the consequences of shallow AI education are already visible in workplaces. Some 55 percent of managers believe their employees are AI-proficient, while only 43 percent of employees share that confidence, according to the 2025 ETS Human Progress Report.  

A similar perception gap likely exists between school administrators and teachers. The disconnect creates risks for organizations and reveals how assumptions about AI literacy can diverge sharply from reality. 

    But if we’re going to build AI literacy into every level of learning, we have to ask the harder question: How do we both determine when someone is truly AI literate and assess it in ways that are fair, useful and scalable? 

    AI literacy may be new, but we don’t have to start from scratch to measure it. We’ve tackled challenges like this before, moving beyond check-the-box tests in digital literacy to capture deeper, real-world skills. Building on those lessons will help define and measure this next evolution of 21st-century skills. 

Right now, we often treat AI literacy as a binary: You either "have it" or you don't. But real AI literacy and readiness are more nuanced. They include understanding how AI works, being able to use it effectively in real-world settings and knowing when to trust it. They include writing effective prompts, spotting bias, asking hard questions and applying judgment. 

    This isn’t just about teaching coding or issuing a certificate. It’s about making sure that students, educators and workers can collaborate in and navigate a world in which AI is increasingly involved in how we learn, hire, communicate and make decisions.  

    Without a way to measure AI literacy, we can’t identify who needs support. We can’t track progress. And we risk letting a new kind of unfairness take root, in which some communities build real capacity with AI and others are left with shallow exposure and no feedback. 

Related: To employers, AI skills aren't just for tech majors anymore 

    What can education leaders do right now to address this issue? I have a few ideas.  

    First, we need a working definition of AI literacy that goes beyond tool usage. The Department of Education’s proposed definition is a good start, combining technical fluency, applied reasoning and ethical awareness.  

    Second, assessments of AI literacy should be integrated into curriculum design. Schools and colleges incorporating AI into coursework need clear definitions of proficiency. TeachAI’s AI Literacy Framework for Primary and Secondary Education is a great resource. 

Third, AI proficiency must be defined and measured consistently, or we risk a patchwork of competing standards. Without consistent measurements, one district may see AI literacy as simply using ChatGPT, while another defines it far more broadly, leaving students unevenly prepared for the next generation of jobs. 

To prepare for an AI-driven future, defining and measuring AI literacy must be a priority. Every student will graduate into a world in which AI literacy is essential. Human resources leaders confirmed in the 2025 ETS Human Progress Report that the No. 1 skill employers are demanding today is AI literacy. Without measurement, we risk building the future on assumptions, not readiness. 

    And that’s too shaky a foundation for the stakes ahead. 

    Amit Sevak is CEO of ETS, the largest private educational assessment organization in the world. 

    Contact the opinion editor at [email protected]. 

    This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter. 
