Tag: Artificial Intelligence

  • Advocates warn of risks to higher ed data if Education Department is shuttered


    by Jill Barshay, The Hechinger Report
    November 10, 2025

    Even with the government shut down, lots of people are thinking about how to reimagine federal education research. Public comments on how to reform the Institute of Education Sciences (IES), the Education Department’s research and statistics arm, were due on Oct. 15. A total of 434 suggestions were submitted, but no one can read them because the department isn’t allowed to post them publicly until the government reopens. (We know the number because the comment entry page has an automatic counter.)

    A complex numbers game 

    There’s broad agreement across the political spectrum that federal education statistics are essential. Even many critics of the Department of Education want its data collection efforts to survive — just somewhere else. Some have suggested moving the National Center for Education Statistics (NCES) to another agency, such as the Commerce Department, where the U.S. Census Bureau is housed.

But Diane Cheng, vice president of policy at the Institute for Higher Education Policy, a nonprofit organization that advocates for increasing college access and improving graduation rates, warns that moving NCES could jeopardize the quality and usefulness of higher education data. Any move would have to be done carefully, with planning for future interagency coordination, she said.

    “Many of the federal data collections combine data from different sources within ED,” Cheng said, referring to the Education Department. “It has worked well to have everyone within the same agency.”

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

She points to the College Scorecard, the website that lets families compare colleges by cost, student loan debt, graduation rates, and post-college earnings. It merges several data sources, including the Integrated Postsecondary Education Data System (IPEDS), run by NCES, and the National Student Loan Data System, housed in the Office of Federal Student Aid. Several other higher ed data collections on student aid and students’ pathways through college also merge data collected by the statistics unit with student aid figures. Splitting those across different agencies could make such collaboration far more difficult.

    “If those data are split across multiple federal agencies,” Cheng said, “there would likely be more bureaucratic hurdles required to combine the data.”

    Information sharing across federal agencies is notoriously cumbersome, the very problem that led to the creation of the Department of Homeland Security after 9/11.

    Hiring and $4.5 million in fresh research grants

    Even as the Trump administration publicly insists it intends to shutter the Department of Education, it is quietly rebuilding small parts of it behind the scenes.

    In September, the department posted eight new jobs to replace fired staff who oversaw the National Assessment of Educational Progress (NAEP), the biennial test of American students’ achievement. In November, it advertised four more openings for statisticians inside the Federal Student Aid Office. Still, nothing is expected to be quick or smooth. The government shutdown stalled hiring for the NAEP jobs, and now a new Trump administration directive to form hiring committees by Nov. 17 to approve and fill open positions may further delay these hires.

    At the same time, the demolition continues. Less than two weeks after the Oct. 1 government shutdown, 466 additional Education Department employees were terminated — on top of the roughly 2,000 lost since March 2025 through firings and voluntary departures. (The department employed about 4,000 at the start of the Trump administration.) A federal judge temporarily blocked these latest layoffs on Oct. 15.

    Related: Education Department takes a preliminary step toward revamping its research and statistics arm

There are other small new signs of life. On Sept. 30 — just before the shutdown — the department quietly awarded nine new research and development grants totaling $4.5 million. The grants, listed on the department’s website, are part of a new initiative called the “From Seedlings to Scale Grants Program” (S2S), launched by the Biden administration in August 2024 to test whether the Defense Department’s DARPA-style innovation model could work in education. DARPA, the Defense Advanced Research Projects Agency, invests in new technologies for national security. Its most celebrated project became the basis for the internet.

    Each new project, mostly focused on AI-driven personalized learning, received $500,000 to produce early evidence of effectiveness. Recipients include universities, research organizations and ed tech firms. Projects that show promise could be eligible for future funding to scale up with more students.

    According to a person familiar with the program who spoke on background, the nine projects had been selected before President Donald Trump took office, but the formal awards were delayed amid the department’s upheaval. The Institute of Education Sciences — which lost roughly 90 percent of its staff — was one of the hardest hit divisions.

    Granted, $4.5 million is a rounding error compared with IES’s official annual budget of $800 million. Still, these are believed to be the first new federal education research grants of the Trump era and a faint signal that Washington may not be abandoning education innovation altogether.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about risks to federal education data was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

This article first appeared on The Hechinger Report (https://hechingerreport.org/proof-points-risks-higher-ed-data/) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).


  • The new AI tools are fast but can’t replace the judgment, care and cultural knowledge teachers bring to the table


    by Tanishia Lavette Williams, The Hechinger Report
    November 4, 2025

    The year I co-taught world history and English language arts with two colleagues, we were tasked with telling the story of the world in 180 days to about 120 ninth graders. We invited students to consider how texts and histories speak to one another: “The Analects” as imperial governance, “Sundiata” as Mali’s political memory, “Julius Caesar” as a window into the unraveling of a republic. 

    By winter, our students had given us nicknames. Some days, we were a triumvirate. Some days, we were Cerberus, the three-headed hound of Hades. It was a joke, but it held a deeper meaning. Our students were learning to make connections by weaving us into the histories they studied. They were building a worldview, and they saw themselves in it. 

    Designed to foster critical thinking, this teaching was deeply human. It involved combing through texts for missing voices, adapting lessons to reflect the interests of the students in front of us and trusting that learning, like understanding, unfolds slowly. That labor can’t be optimized for efficiency. 

    Yet, today, there’s a growing push to teach faster. Thousands of New York teachers are being trained to use AI tools for lesson planning, part of a $23 million initiative backed by OpenAI, Microsoft and Anthropic. The program promises to reduce teacher burnout and streamline planning. At the same time, a new private school in Manhattan is touting an AI-driven model that “speed-teaches” core subjects in just two hours of instruction each day while deliberately avoiding politically controversial issues. 

    Marketed as innovation, this stripped-down vision of education treats learning as a technical output rather than as a human process in which students ask hard questions and teachers cultivate the critical thinking that fuels curiosity. A recent analysis of AI-generated civics lesson plans found that they consistently lacked multicultural content and prompts for critical thinking. These AI tools are fast, but shallow. They fail to capture the nuance, care and complexity that deep learning demands. 

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.  

    When I was a teacher, I often reviewed lesson plans to help colleagues refine their teaching practices. Later, as a principal in Washington, D.C., and New York City, I came to understand that lesson plans, the documents connecting curriculum and achievement, were among the few steady examples of classroom practice. Despite their importance, lesson plans were rarely evaluated for their effectiveness.  

    When I wrote my dissertation, after 20 years of working in schools, lesson plan analysis was a core part of my research. Analyzing plans across multiple schools, I found that the activities and tasks included in lesson plans were reliable indicators of the depth of knowledge teachers required and, by extension, the limits of what students were asked to learn. 

    Reviewing hundreds of plans made clear that most lessons rarely offered more than a single dominant voice — and thus confined both what counted as knowledge and what qualified as achievement. Shifting plans toward deeper, more inclusive student learning required deliberate effort to incorporate primary sources, weave together multiple narratives and design tasks that push students beyond mere recall. 

     I also found that creating the conditions for such learning takes time. There is no substitute for that. Where this work took hold, students were making meaning, seeing patterns, asking why and finding themselves in the story. 

    That’s the transformation AI can’t deliver. When curriculum tools are trained on the same data that has long omitted perspectives, they don’t correct bias; they reproduce it. The developers of ChatGPT acknowledge that the model is “skewed toward Western views and performs best in English” and warn educators to review its content carefully for stereotypes and bias. Those same distortions appear at the systems level — a 2025 study in the World Journal of Advanced Research and Reviews found that biased educational algorithms can shape students’ educational paths and create new structural barriers. 

    Ask an AI tool for a lesson on westward expansion, and you’ll get a tidy narrative about pioneers and Manifest Destiny. Request a unit on the Civil Rights Movement and you may get a few lines on Martin Luther King Jr., but hardly a word about Ella Baker, Fannie Lou Hamer or the grassroots organizers who made the movement possible. Native nations, meanwhile, are reduced to footnotes or omitted altogether. 

    Curriculum redlining — the systematic exclusion or downplaying of entire histories, perspectives and communities — has already been embedded in educational materials for generations. So what happens when “efficiency” becomes the goal? Whose histories are deemed too complex, too political or too inconvenient to make the cut? 

    Related: What aspects of teaching should remain human? 

    None of this is theoretical. It’s already happening in classrooms across the country. Educators are under pressure to teach more with less: less time, fewer resources, narrower guardrails. AI promises relief but overlooks profound ethical questions. 

    Students don’t benefit from autogenerated worksheets. They benefit from lessons that challenge them, invite them to wrestle with complexity and help them connect learning to the world around them. That requires deliberate planning and professional judgment from a human who views education as a mechanism to spark inquiry. 

    Recently, I asked my students at Brandeis University to use AI to generate a list of individuals who embody concepts such as beauty, knowledge and leadership. The results, overwhelmingly white, male and Western, mirrored what is pervasive in textbooks.  

    My students responded with sharp analysis. One student created color palettes to demonstrate the narrow scope of skin tones generated by AI. Another student developed a “Missing Gender” summary to highlight omissions. It was a clear reminder that students are ready to think critically but require opportunities to do so.  

    AI can only do what it’s programmed to do, which means it draws from existing, stratified information and lags behind new paradigms. That makes it both backward-looking and vulnerable to reproducing bias.  

    Teaching with humanity, by contrast, requires judgment, care and cultural knowledge. These are qualities no algorithm can automate. When we surrender lesson planning to AI, we don’t just lose stories; we also lose the opportunity to engage with them. We lose the critical habits of inquiry and connection that teaching is meant to foster. 

    Tanishia Lavette Williams is the inaugural education stratification postdoctoral fellow at the Institute on Race, Power and Political Economy, a Kay fellow at Brandeis University and a visiting scholar at Harvard University. 

    Contact the opinion editor at [email protected].  

This story about AI and teaching was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

This article first appeared on The Hechinger Report (https://hechingerreport.org/opinion-the-new-ai-tools-are-fast-but-cant-replace-the-judgment-care-and-cultural-knowledge-teachers-bring-to-the-table/) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).


  • Students Love AI Chatbots — No, Really – The 74


School (in)Security is our biweekly briefing on the latest school safety news, vetted by Mark Keierleber. Subscribe here.

    The robots have taken over.

    New research suggests that a majority of students use chatbots like ChatGPT for just about everything at school. To write essays. To solve complicated math problems. To find love. 

    Wait, what? 

    Nearly a fifth of students said they or a friend have used artificial intelligence chatbots to form romantic relationships, according to a new survey by the nonprofit Center for Democracy & Technology. Some 42% said they or someone they know used the chatbots for mental health support, as an escape from real life or as a friend.

    Eighty-six percent of students say they’ve used artificial intelligence chatbots in the past academic year — half to help with schoolwork.

The tech-enabled convenience, researchers conclude, carries significant risks for young people. As AI proliferates in schools — with help from the federal government and a zealous tech industry — on the promise of improving student outcomes, they warn that young people could grow socially and emotionally disconnected from the humans in their lives.


    In the news

The latest in Trump’s immigration crackdown: The survey featured above, which quizzed students, teachers and parents, also offers startling findings on immigration enforcement in schools:
• While more than a quarter of educators said their school collects information about whether a student is undocumented, 17% said their district shares records — including grades and disciplinary information — with immigration enforcement.
• In the last school year, 13% of teachers said a staff member at their school reported a student or parent to immigration enforcement of their own accord. | Center for Democracy & Technology

    People hold signs as New York City officials speak at a press conference calling for the release of high school student Mamadou Mouctar Diallo outside of the Tweed Courthouse on Aug. 14 in New York City. (Michael M. Santiago/Getty Images)
    • Call for answers: In the wake of immigration enforcement that’s ensnared children, New York congressional Democrats are demanding the feds release information about the welfare of students held in detention, my colleague Jo Napolitano reports. | The 74
    • A 13-year-old boy from Brazil, who has lived in a Boston suburb since 2021 with a pending asylum application, was scooped up by Immigration and Customs Enforcement after local police arrested him on a “credible tip” accusing him of making “a violent threat” against a classmate at school. The boy’s mother said her son wound up in a Virginia detention facility and was “desperate, saying ICE had taken him.” | CNN
    • Chicago teenagers are among a group of activists patrolling the city’s neighborhoods to monitor ICE’s deployment to the city and help migrants avoid arrest. | NPR
    • Immigration agents detained a Chicago Public Schools vendor employee outside a school, prompting educators to move physical education classes indoors out of an “abundance of caution.” | Chicago Sun-Times
    • A Des Moines, Iowa, high schooler was detained by ICE during a routine immigration check-in, placed in a Louisiana detention center and deported to Central America fewer than two weeks later. | Des Moines Register
    • A 15-year-old boy with disabilities — who was handcuffed outside a Los Angeles high school after immigration agents mistook him for a suspect — is among more than 170 U.S. citizens, including nearly 20 children, who have been detained during the first nine months of the president’s immigration push. | PBS

    Trigger warning: After a Washington state teenager hanged himself on camera, the 13-year-old boy’s parents set out to find out what motivated their child to livestream his suicide on Instagram while online users watched. Evidence pointed to a sadistic online group that relies on torment, blackmail and coercion to weed out teens they deem weak. | The Washington Post

    Civil rights advocates in New York are sounding the alarm over a Long Island school district’s new AI-powered surveillance system, which includes round-the-clock audio monitoring with in-classroom microphones. | StateScoop

    A federal judge has ordered the Department of Defense to restock hundreds of books after a lawsuit alleged students were banned from checking out texts related to race and gender from school libraries on military bases in violation of the First Amendment. | Military.com

    More than 600 armed volunteers in Utah have been approved to patrol campuses across the state to comply with a new law requiring armed security. Called school guardians, the volunteers are existing school employees who agree to be trained by local law enforcement and carry guns on campus. | KUER

Sign up for the School (in)Security newsletter.

    Get the most critical news and information about students’ rights, safety and well-being delivered straight to your inbox.

    No “Jackass”: Instagram announced new PG-13 content features that restrict teenagers from viewing posts that contain sex, drugs and “risky stunts.” | The Associated Press

    A Tuscaloosa, Alabama, school resource officer restrained and handcuffed a county commissioner after a spat at an elementary school awards program. | Tuscaloosa News

    The number of guns found at Minnesota schools has increased nearly threefold in the last several years, new state data show. | Axios

    More than half of Florida’s school districts received bomb threats on a single evening last week. The threats weren’t credible, officials said, and appeared to be “part of a hoax intended to solicit money.” | News 6


    ICYMI @The74

    RAPID Survey Project, Stanford Center on Early Childhood

    Survey: Nearly Half of Families with Young Kids Struggling to Meet Basic Needs

    Education Department Leans on Right-Wing Allies to Push Civil Rights Probes

    OPINION: To Combat Polarization and Political Violence, Let’s Connect Students Nationwide


    Emotional Support

    Thanks for reading,
    —Marz




  • Can AI Keep Students Motivated, Or Does it Do the Opposite? – The 74


    Imagine a student using a writing assistant powered by a generative AI chatbot. As the bot serves up practical suggestions and encouragement, insights come more easily, drafts polish up quickly and feedback loops feel immediate. It can be energizing. But when that AI support is removed, some students report feeling less confident or less willing to engage.

    These outcomes raise the question: Can AI tools genuinely boost student motivation? And what conditions can make or break that boost?

As AI tools become more common in classroom settings, the answers to these questions matter a lot. While general-use tools such as ChatGPT or Claude remain popular, more and more students are encountering AI tools that are purpose-built to support learning, such as Khan Academy’s Khanmigo, which personalizes lessons. Others, such as ALEKS, provide adaptive feedback. Both tools adjust to a learner’s level and highlight progress over time, which helps students feel capable and see improvement. But there are still many unknowns about the long-term effects of these tools on learners’ progress, an issue I continue to study as an educational psychologist.

    What the evidence shows so far

    Recent studies indicate that AI can boost motivation, at least for certain groups, when deployed under the right conditions. A 2025 experiment with university students showed that when AI tools delivered a high-quality performance and allowed meaningful interaction, students’ motivation and their confidence in being able to complete a task – known as self-efficacy – increased.

    For foreign language learners, a 2025 study found that university students using AI-driven personalized systems took more pleasure in learning and had less anxiety and more self-efficacy compared with those using traditional methods. A recent cross-cultural analysis with participants from Egypt, Saudi Arabia, Spain and Poland who were studying diverse majors suggested that positive motivational effects are strongest when tools prioritize autonomy, self-direction and critical thinking. These individual findings align with a broader, systematic review of generative AI tools that found positive effects on student motivation and engagement across cognitive, emotional and behavioral dimensions.

    A forthcoming meta-analysis from my team at the University of Alabama, which synthesized 71 studies, echoed these patterns. We found that generative AI tools on average produce moderate positive effects on motivation and engagement. The impact is larger when tools are used consistently over time rather than in one-off trials. Positive effects were also seen when teachers provide scaffolding, when students maintain agency in how they use the tool, and when the output quality is reliable.

    But there are caveats. More than 50 of the studies we reviewed did not draw on a clear theoretical framework of motivation, and some used methods that we found were weak or inappropriate. This raises concerns about the quality of the evidence and underscores how much more careful research is needed before one can say with confidence that AI nurtures students’ intrinsic motivation rather than just making tasks easier in the moment.

    When AI backfires

    There is also research that paints a more sobering picture. A large study of more than 3,500 participants found that while human–AI collaboration improved task performance, it reduced intrinsic motivation once the AI was removed. Students reported more boredom and less satisfaction, suggesting that overreliance on AI can erode confidence in their own abilities.

    Another study suggested that while learning achievement often rises with the use of AI tools, increases in motivation are smaller, inconsistent or short-lived. Quality matters as much as quantity. When AI delivers inaccurate results, or when students feel they have little control over how it is used, motivation quickly erodes. Confidence drops, engagement fades and students can begin to see the tool as a crutch rather than a support. And because there are not many long-term studies in this field, we still do not know whether AI can truly sustain motivation over time, or whether its benefits fade once the novelty wears off.

    Not all AI tools work the same way

    The impact of AI on student motivation is not one-size-fits-all. Our team’s meta-analysis shows that, on average, AI tools do have a positive effect, but the size of that effect depends on how and where they are used. When students work with AI regularly over time, when teachers guide them in using it thoughtfully, and when students feel in control of the process, the motivational benefits are much stronger.

    We also saw differences across settings. College students seemed to gain more than younger learners, STEM and writing courses tended to benefit more than other subjects, and tools designed to give feedback or tutoring support outperformed those that simply generated content.

    There is also evidence that general-use tools like ChatGPT or Claude do not reliably promote intrinsic motivation or deeper engagement with content, compared to learning-specific platforms such as ALEKS and Khanmigo, which are more effective at supporting persistence and self-efficacy. However, these tools often come with subscription or licensing costs. This raises questions of equity, since the students who could benefit most from motivational support may also be the least likely to afford it.

These and other recent findings should be seen as only a starting point. Because AI is so new and is changing so quickly, what we know today may not hold true tomorrow. In a paper titled “The Death and Rebirth of Research in Education in the Age of AI,” the authors argue that the speed of technological change makes traditional studies outdated before they are even published. At the same time, AI opens the door to new ways of studying learning that are more participatory, flexible and imaginative. Taken together, the data and the critiques point to the same lesson: Context, quality and agency matter just as much as the technology itself.

    Why it matters for all of us

    The lessons from this growing body of research are straightforward. The presence of AI does not guarantee higher motivation, but it can make a difference if tools are designed and used with care and understanding of students’ needs. When it is used thoughtfully, in ways that strengthen students’ sense of competence, autonomy and connection to others, it can be a powerful ally in learning.

    But without those safeguards, the short-term boost in performance could come at a steep cost. Over time, there is the risk of weakening the very qualities that matter most – motivation, persistence, critical thinking and the uniquely human capacities that no machine can replace.

    For teachers, this means that while AI may prove a useful partner in learning, it should never serve as a stand-in for genuine instruction. For parents, it means paying attention to how children use AI at home, noticing whether they are exploring, practicing and building skills or simply leaning on it to finish tasks. For policymakers and technology developers, it means creating systems that support student agency, provide reliable feedback and avoid encouraging overreliance. And for students themselves, it is a reminder that AI can be a tool for growth, but only when paired with their own effort and curiosity.

    Regardless of technology, students need to feel capable, autonomous and connected. Without these basic psychological needs in place, their sense of motivation will falter – with or without AI.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


  • A researcher’s view on using AI to become a better writer


    Writing can be hard, equal parts heavy lifting and drudgery. No wonder so many students are turning to the time-saving allure of ChatGPT, which can crank out entire papers in seconds. It rescues them from procrastination jams and dreaded all-nighters, magically freeing up more time for other pursuits, like, say … doomscrolling.

    Of course, no one learns to be a better writer when someone else (or some AI bot) is doing the work for them. The question is whether chatbots can morph into decent writing teachers or coaches that students actually want to consult to improve their writing, and not just use for shortcuts.

    Maybe.

    Jennifer Meyer, an assistant professor at the University of Vienna in Austria, has been studying how AI bots can be used to improve student writing for several years. In an interview, she explained why she is cautious about the ability of AI to make us better writers and is still testing how to use the new technology effectively.

    All in the timing 

    Meyer says that just because ChatGPT is available 24/7 doesn’t mean students should consult it at the start of the writing process. Instead, Meyer believes that students would generally learn more if they wrote a first draft on their own. 

That’s when AI could be most helpful, she thinks. With some prompting, a chatbot could provide immediate writing feedback targeted to each student’s needs. One student might need to practice writing shorter sentences. Another might be struggling with story structure and outlining. AI could theoretically meet an entire classroom’s individual needs faster than a human teacher.

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    In Meyer’s experiments, she inserted AI only after the first draft was done as part of the revision process. In a study published in 2024, she randomly assigned 200 German high school students to receive AI feedback after writing a draft of an essay in English. Their revised essays were stronger than those of 250 students who were also told to revise, but didn’t get help from AI. 

    In surveys, those with AI feedback also said they felt more motivated to rewrite than those who didn’t get feedback. That motivation is critical. Often students aren’t in the mood to rewrite, and without revisions, students can’t become better writers.

    Meyer doesn’t consider her experiment proof that AI is a great writing teacher. She didn’t compare it with how student writing improved after human feedback. Her experiment compared only AI feedback with no feedback. 

    Most importantly, one dose of AI writing feedback wasn’t enough to elevate students’ writing skills. On a second, fresh essay topic, the students who had previously received AI feedback didn’t write any better than the students who hadn’t been helped by AI.

    Related: AI writing feedback ‘better than I thought,’ top researcher says

    It’s unclear how many rounds of AI feedback it would take to boost a student’s writing skills more permanently, not just help revise the essay at hand. 

    And Meyer doesn’t know whether a student would want to keep discussing writing with an AI bot over and over again. Maybe students were willing to engage with it in this experiment because it was a novelty, but could soon tire of it. That’s next on Meyer’s research agenda.

    A viral MIT study

    A much smaller MIT study published earlier this year echoes Meyer’s theory. “Your Brain on ChatGPT” went viral because it seemed to say that using ChatGPT to help write an essay made students’ brains less engaged. Researchers found that students who wrote an essay without any online tools had stronger brain connectivity and activity than students who used AI or consulted Google to search for source materials. (Using Google while writing wasn’t nearly as bad for the brain as AI.) 

    Although those results made headlines, there was more to the experiment. The students who initially wrote an essay on their own were later given ChatGPT to help improve their essays. That switch to ChatGPT boosted brain activity, in contrast to what the neuroscientists found during the initial writing process. 

    Related: University students offload critical thinking, other hard work to AI

    These studies add to the evidence that delaying AI a bit, after some initial thinking and drafting, could be a sweet spot in learning. That’s something researchers need to test more. 

    Still, Meyer remains concerned about giving AI tools to very weak writers and to young children who haven’t developed basic writing skills. “This could be a real problem,” said Meyer. “It could be detrimental to use these tools too early.”

    Cheating your way to learning?

    Meyer doesn’t think it’s always a bad idea for students to ask ChatGPT to do the writing for them. 

    Just as young artists learn to paint by copying masterpieces in museums, students might learn to write better by copying good writing. (The late great New Yorker editor John Bennet taught Jill to write this way. He called it “copy work” and he encouraged his journalism students to do it every week by copying longhand the words of legendary writers, not AI.)

    Meyer suggests that students ask ChatGPT to write a sample essay that meets their teacher’s assignment and grading criteria. The next step is key. If students pretend it’s their own piece and submit it, that’s cheating. They’ve also offloaded cognitive work to technology and haven’t learned anything.

    Related: AI essay grading is already as ‘good as an overburdened’ teacher, but researchers say it needs more work

    But the AI essay can be an effective teaching tool, in theory, if students study the arguments, organizational structure, sentence construction and vocabulary before writing a new draft in their own words. Ideally, the next assignment should be better if students have learned through that analysis and internalized the style and techniques of the model essay, Meyer said. 

    “My hypothesis would be as long as there’s cognitive effort with it, as long as there’s a lot of time on task and like critical thinking about the output, then it should be fine,” said Meyer.

    Reconsidering praise

    Everyone likes a compliment. But too much praise can drown learning just as too much water can keep flowers from blooming.  

    ChatGPT has a tendency to pour the praise on thick and often begins with banal flattery, like “Great job!” even when a student’s writing needs a lot of work. In Meyer’s test of whether AI feedback can improve students’ writing, she intentionally told ChatGPT not to start with praise and instead go straight to constructive criticism.

    Her parsimonious approach to praise was inspired by a 2023 writing study about what motivates students to revise. The study found that when teachers started off with general praise, students were left with the false impression that their work was already good enough so they didn’t put in the extra effort to rewrite.

    Related: Asian American students lose more points in an AI essay grading study — but researchers don’t know why

    In Meyer’s experiment, the praise-free feedback was effective in getting students to revise and improve their essays. But she didn’t set up a direct competition between the two approaches — praise-free vs. praise-full — so we don’t know for sure which is more effective when students are interacting with AI.

    Being stingy with praise rubs real teachers the wrong way. After Meyer removed praise from the feedback, teachers told her they wanted to restore it. “They wondered about why the feedback was so negative,” Meyer said. “That’s not how they would do it.”

    Meyer and other researchers may one day solve the puzzle of how to turn AI chatbots into great writing coaches. But whether students will have the willpower or desire to forgo an instantly written essay is another matter. As long as ChatGPT continues to allow students to take the easy way out, it’s human nature to do so. 

    Shirley Liu is a graduate student in education at Northwestern University. Liu reported and wrote this story along with The Hechinger Report’s Jill Barshay.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about using AI to become a better writer was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.


  • If we are going to build AI literacy into every level of learning, we must be able to measure it


    Everywhere you look, someone is telling students and workers to “learn AI.” 

    It’s become the go-to advice for staying employable, relevant and prepared for the future. But here’s the problem: While definitions of artificial intelligence literacy are starting to emerge, we still lack a consistent, measurable framework to know whether someone is truly ready to use AI effectively and responsibly. 

    And that is becoming a serious issue for education and workforce systems already being reshaped by AI. Schools and colleges are redesigning their entire curriculums. Companies are rewriting job descriptions. States are launching AI-focused initiatives.  

    Yet we’re missing a foundational step: agreeing not only on what we mean by AI literacy, but on how we assess it in practice. 

    Two major recent developments underscore why this step matters, and why it is important that we find a way to take it before urging students to use AI. First, the U.S. Department of Education released its proposed priorities for advancing AI in education, guidance that will ultimately shape how federal grants will support K-12 and higher education. For the first time, we now have a proposed federal definition of AI literacy: the technical knowledge, durable skills and future-ready attitudes required to thrive in a world influenced by AI. Such literacy will enable learners to engage and create with, manage and design AI, while critically evaluating its benefits, risks and implications. 

    Second, we now have the White House’s American AI Action Plan, a broader national strategy aimed at strengthening the country’s leadership in artificial intelligence. Education and workforce development are central to the plan. 

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education. 

    What both efforts share is a recognition that AI is not just a technological shift, it’s a human one. In many ways, the most important AI literacy skills are not about AI itself, but about the human capacities needed to use AI wisely. 

    Sadly, the consequences of shallow AI education are already visible in workplaces. Some 55 percent of managers believe their employees are AI-proficient, while only 43 percent of employees share that confidence, according to the 2025 ETS Human Progress Report.  

The same perception gap likely exists between school administrators and teachers. The disconnect creates risks for organizations and reveals how assumptions about AI literacy can diverge sharply from reality.

    But if we’re going to build AI literacy into every level of learning, we have to ask the harder question: How do we both determine when someone is truly AI literate and assess it in ways that are fair, useful and scalable? 

    AI literacy may be new, but we don’t have to start from scratch to measure it. We’ve tackled challenges like this before, moving beyond check-the-box tests in digital literacy to capture deeper, real-world skills. Building on those lessons will help define and measure this next evolution of 21st-century skills. 

    Right now, we often treat AI literacy as a binary: You either “have it” or you don’t. But real AI literacy and readiness is more nuanced. It includes understanding how AI works, being able to use it effectively in real-world settings and knowing when to trust it. It includes writing effective prompts, spotting bias, asking hard questions and applying judgment. 

    This isn’t just about teaching coding or issuing a certificate. It’s about making sure that students, educators and workers can collaborate in and navigate a world in which AI is increasingly involved in how we learn, hire, communicate and make decisions.  

    Without a way to measure AI literacy, we can’t identify who needs support. We can’t track progress. And we risk letting a new kind of unfairness take root, in which some communities build real capacity with AI and others are left with shallow exposure and no feedback. 

Related: To employers, AI skills aren’t just for tech majors anymore

    What can education leaders do right now to address this issue? I have a few ideas.  

    First, we need a working definition of AI literacy that goes beyond tool usage. The Department of Education’s proposed definition is a good start, combining technical fluency, applied reasoning and ethical awareness.  

    Second, assessments of AI literacy should be integrated into curriculum design. Schools and colleges incorporating AI into coursework need clear definitions of proficiency. TeachAI’s AI Literacy Framework for Primary and Secondary Education is a great resource. 

    Third, AI proficiency must be defined and measured consistently, or we risk a mismatched state of literacy. Without consistent measurements and standards, one district may see AI literacy as just using ChatGPT, while another defines it far more broadly, leaving students unevenly ready for the next generation of jobs. 

    To prepare for an AI-driven future, defining and measuring AI literacy must be a priority. Every student will be graduating into a world in which AI literacy is essential. Human resources leaders confirmed in the 2025 ETS Human Progress Report that the No. 1 skill employers are demanding today is AI literacy. Without measurement, we risk building the future on assumptions, not readiness.  

    And that’s too shaky a foundation for the stakes ahead. 

    Amit Sevak is CEO of ETS, the largest private educational assessment organization in the world. 

    Contact the opinion editor at [email protected]. 

    This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter. 

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.


  • AI can be a great equalizer, but it remains out of reach for millions of Americans; the Universal Service Fund can expand access


    In an age defined by digital transformation, access to reliable, high-speed internet is not a luxury; it is the bedrock of opportunity. It impacts the school classroom, the doctor’s office, the town square and the job market.

    As we stand on the cusp of a workforce revolution driven by the “arrival technology” of artificial intelligence, high-speed internet access has become the critical determinant of our nation’s economic future. Yet, for millions of Americans, this essential connection remains out of reach.

    This digital divide is a persistent crisis that deepens societal inequities, and we must rally around one of the most effective tools we have to combat it: the Universal Service Fund. The USF is a long-standing national commitment built on a foundation of bipartisan support and born from the principle that every American, regardless of their location or income, deserves access to communications services.

Without this essential program, over 54 million students, 16,000 healthcare providers and 7.5 million high-need subscribers would lose the service that connects classrooms, rural communities (including their hospitals) and libraries to the internet.

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.

    The discussion about the future of USF has reached a critical juncture: Which communities will have access to USF, how it will be funded and whether equitable access to connectivity will continue to be a priority will soon be decided.

    Earlier this year, the Supreme Court found the USF’s infrastructure to be constitutional — and a backbone for access and opportunity in this country. Congress recently took a significant next step by relaunching a bicameral, bipartisan working group devoted to overhauling the fund. Now they are actively seeking input from stakeholders on how to best modernize this vital program for the future, and they need our input.

    I’m urging everyone who cares about digital equity to make their voices heard. The window for our input in support of this vital connectivity infrastructure is open through September 15.

    While Universal Service may appear as only a small fee on our monthly phone bills, its impact is monumental. The fund powers critical programs that form a lifeline for our nation’s most vital institutions and vulnerable populations. The USF helps thousands of schools and libraries obtain affordable internet — including the school I founded in downtown Brooklyn. For students in rural towns, the E-Rate program, funded by the USF, allows access to the same online educational resources as those available to students in major cities. In schools all over the country, the USF helps foster digital literacy, supports coding clubs and enables students to complete homework online.

    By wiring our classrooms and libraries, we are investing in the next generation of innovators.

    The coming waves of technological change — including the widespread adoption of AI — threaten to make the digital divide an unbridgeable economic chasm. Those on the wrong side of this divide experienced profound disadvantages during the pandemic. To get connected, students at my school ended up doing homework in fast-food parking lots. Entire communities lost vital connections to knowledge and opportunity when libraries closed.

    But that was just a preview of the digital struggle. This time, we have to fight to protect the future of this investment in our nation’s vital infrastructure to ensure that the rising wave of AI jobs, opportunities and tools is accessible to all.

    AI is rapidly becoming a fundamental tool for the American workforce and in the classroom. AI tools require robust bandwidth to process data, connect to cloud platforms and function effectively.

    The student of tomorrow will rely on AI as a personalized tutor that enhances teacher-led classroom instruction, explains complex concepts and supports their homework. AI will also power the future of work for farmers, mechanics and engineers.

    Related: Getting kids online by making internet affordable

    Without access to AI, entire communities and segments of the workforce will be locked out. We will create a new class of “AI have-nots,” unable to leverage the technology designed to propel our economy forward.

    The ability to participate in this new economy, to upskill and reskill for the jobs of tomorrow, is entirely dependent on the one thing the USF is designed to provide: reliable connectivity.

    The USF is also critical for rural health care by supporting providers’ internet access and making telehealth available in many communities. It makes internet service affordable for low-income households through its Lifeline program and the Connect America Fund, which promotes the construction of broadband infrastructure in rural areas.

    The USF is more than a funding mechanism; it is a statement of our values and a strategic economic necessity. It reflects our collective agreement that a child’s future shouldn’t be limited by their school’s internet connection, that a patient’s health outcome shouldn’t depend on their zip code and that every American worker deserves the ability to harness new technology for their career.

    With Congress actively debating the future of the fund, now is the time to rally. We must engage in this process, call on our policymakers to champion a modernized and sustainably funded USF and recognize it not as a cost, but as an essential investment in a prosperous, competitive and flourishing America.

    Erin Mote is the CEO and founder of InnovateEDU, a nonprofit that aims to catalyze education transformation by bridging gaps in data, policy, practice and research.

    Contact the opinion editor at [email protected].

    This story about the Universal Service Fund was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.


  • Students Increasingly Rely on Chatbots, but at What Cost? – The 74



    Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter

    Students don’t have the same incentives to talk to their professors — or even their classmates — anymore. Chatbots like ChatGPT, Gemini and Claude have given them a new path to self-sufficiency. Instead of asking a professor for help on a paper topic, students can go to a chatbot. Instead of forming a study group, students can ask AI for help. These chatbots give them quick responses, on their own timeline.

    For students juggling school, work and family responsibilities, that ease can seem like a lifesaver. And maybe turning to a chatbot for homework help here and there isn’t such a big deal in isolation. But every time a student decides to ask a question of a chatbot instead of a professor or peer or tutor, that’s one fewer opportunity to build or strengthen a relationship, and the human connections students make on campus are among the most important benefits of college.

Julia Freeland-Fisher of the Clayton Christensen Institute studies how technology can help or hinder student success. She said the consequences of turning to chatbots for help can compound.

    “Over time, that means students have fewer and fewer people in their corner who can help them in other moments of struggle, who can help them in ways a bot might not be capable of,” she said.

    As colleges further embed ChatGPT and other chatbots into campus life, Freeland-Fisher warns lost relationships may become a devastating unintended consequence.

    Asking for help

    Christian Alba said he has never turned in an AI-written assignment. Alba, 20, attends College of the Canyons, a large community college north of Los Angeles, where he is studying business and history. And while he hasn’t asked ChatGPT to write any papers for him, he has turned to the technology when a blank page and a blinking cursor seemed overwhelming. He has asked for an outline. He has asked for ideas to get him started on an introduction. He has asked for advice about what to prioritize first.

    “It’s kind of hard to just start something fresh off your mind,” Alba said. “I won’t lie. It’s a helpful tool.” Alba has wondered, though, whether turning to ChatGPT with these sorts of questions represents an overreliance on AI. But Alba, like many others in higher education, worries primarily about AI use as it relates to academic integrity, not social capital. And that’s a problem.

    Jean Rhodes, a psychology professor at the University of Massachusetts Boston, has spent decades studying the way college students seek help on campus and how the relationships formed during those interactions end up benefitting the students long-term. Rhodes doesn’t begrudge students integrating chatbots into their workflows, as many of their professors have, but she worries that students will get inferior answers to even simple-sounding questions, like, “how do I change my major?”

    A chatbot might point a student to the registrar’s office, Rhodes said, but had a student asked the question of an advisor, that person may have asked important follow-up questions — why the student wants the change, for example, which could lead to a deeper conversation about a student’s goals and roadblocks.

    “We understand the broader context of students’ lives,” Rhodes said. “They’re smart but they’re not wise, these tools.”

    Rhodes and one of her former doctoral students, Sarah Schwartz, created a program called Connected Scholars to help students understand why it’s valuable to talk to professors and have mentors. The program helped them hone their networking skills and understand what people get out of their networks over the course of their lives — namely, social capital.

Connected Scholars is offered as a semester-long course at UMass Boston, and a forthcoming paper examining outcomes over the last decade finds that students who take the course are three times more likely to graduate. Over time, Rhodes and her colleagues discovered that the key to the program’s success is getting students past an aversion to asking others for help.

    Students will make a plethora of excuses to avoid asking for help, Rhodes said, ticking off a list of them: “‘I don’t want to stand out,’ ‘I don’t want people to realize I don’t fit in here,’ ‘My culture values independence,’ ‘I shouldn’t reach out,’ ‘I’ll get anxious,’ ‘This person won’t respond.’ If you can get past that and get them to recognize the value of reaching out, it’s pretty amazing what happens.”

    Connections are key

    Seeking human help doesn’t just leave students with the resolution to a single problem; it gives them a connection to another person. Down the line, that person could become a friend, a mentor or a business partner, a “strong tie,” as social scientists call the people at the core of someone’s network. Or they could become a “weak tie,” someone a student may not see often but who could, importantly, still offer a job lead or crucial social support one day.

    Daniel Chambliss, a retired sociologist from Hamilton College, emphasized the value of relationships in his 2014 book, “How College Works,” co-authored with Christopher Takacs. Over the course of their research, the pair found that the key to a successful college experience boiled down to relationships, specifically two or three close friends and one or two trusted adults. Hamilton College goes out of its way to make sure students can form those relationships, structuring work-study to get students into campus offices and around faculty and staff, making room for students of varying athletic abilities on sports teams, and more.

    Chambliss worries that AI-driven chatbots make it too easy to avoid interactions that can lead to important relationships. “We’re suffering epidemic levels of loneliness in America,” he said. “It’s a really major problem, historically speaking. It’s very unusual, and it’s profoundly bad for people.”

    As students increasingly turn to artificial intelligence for help and even casual conversation, Chambliss predicted it will make people even more isolated: “It’s one more place where they won’t have a personal relationship.”

    In fact, a recent study by researchers at the MIT Media Lab and OpenAI found that the most frequent users of ChatGPT — power users — were more likely to be lonely and isolated from human interaction.

    “What scares me about that is that Big Tech would like all of us to be power users,” said Freeland-Fisher. “That’s in the fabric of the business model of a technology company.”

    Yesenia Pacheco is preparing to re-enroll in Long Beach City College for her final semester after more than a year off. Last time she was on campus, ChatGPT existed, but it wasn’t widely used. Now she knows she’s returning to a college where ChatGPT is deeply embedded in the lives of students, faculty and staff, but Pacheco expects she’ll go back to her old habits: going to her professors’ office hours and sticking around after class to ask them questions. She sees the value.

    She understands why others might not. Today’s high schoolers, she has noticed, are not used to talking to adults or building mentor-style relationships. At 24, she knows why they matter.

    “A chatbot,” she said, “isn’t going to give you a letter of recommendation.”

    This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.


  • More Than Half the States Have Issued AI Guidance for Schools – The 74

    More Than Half the States Have Issued AI Guidance for Schools – The 74


    Agencies in at least 28 states and the District of Columbia have issued guidance on the use of artificial intelligence in K-12 schools.

    More than half of the states have created school policies to define artificial intelligence, develop best practices for using AI systems and more, according to a report from AI for Education, an advocacy group that provides AI literacy training for educators.

    Despite efforts by the Trump administration to loosen federal and state AI rules in hopes of boosting innovation, teachers and students still need state-level guidance to navigate the fast-moving technology, said Amanda Bickerstaff, the CEO and co-founder of AI for Education.

    “What most people think about when it comes to AI adoption in the schools is academic integrity,” she said. “One of the biggest concerns that we’ve seen — and one of the reasons why there’s been a push towards AI guidance, both at the district and state level — is to provide some safety guidelines around responsible use and to create opportunities for people to know what is appropriate.”

    North Carolina, which last year became one of the first states to issue AI guidance for schools, set out to study and define generative artificial intelligence for potential uses in the classroom. The policy also includes resources for students and teachers interested in learning how to interact with AI models successfully.

    In addition to classroom guidance, some states emphasize ethical considerations for certain AI models. Following Georgia’s initial framework in January, the state shared additional guidance in June outlining ethical principles educators should consider before adopting the technology.

    This year, Maine, Missouri, Nevada and New Mexico also released guidelines for AI in schools.

    In the absence of regulations at the federal level, states are filling a critical gap, said Maddy Dwyer, a policy analyst for the Equity in Civic Technology team at the Center for Democracy & Technology, a nonprofit working to advance civil rights in the digital age.

    While most state AI guidance for schools focuses on the potential benefits, risks and need for human oversight, Dwyer wrote in a recent blog post that many of the frameworks omit critical AI topics, such as community engagement and deepfakes, or manipulated photos and videos.

    “I think that states being able to fill the gap that is currently there is a critical piece to making sure that the use of AI is serving kids and their needs, and enhancing their educational experiences rather than detracting from them,” she said.

    Stateline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger for questions: [email protected].


  • AI and Art Collide in This Engineering Course That Puts Human Creativity First – The 74

    AI and Art Collide in This Engineering Course That Puts Human Creativity First – The 74

    I see many students viewing artificial intelligence as humanlike simply because it can write essays, do complex math or answer questions. AI can mimic human behavior but lacks meaningful engagement with the world.

    This disconnect inspired my course “Art and Generative AI,” which was shaped by the ideas of 20th-century German philosopher Martin Heidegger. His work highlights how we are deeply connected and present in the world. We find meaning through action, care and relationships. Human creativity and mastery come from this intuitive connection with the world. Modern AI, by contrast, simulates intelligence by processing symbols and patterns without understanding or care.

    In this course, we reject the illusion that machines fully master everything and put student expression first. In doing so, we value uncertainty, mistakes and imperfection as essential to the creative process.

    This vision expands beyond the classroom. In the 2025-26 academic year, the course will include a new community-based learning collaboration with Atlanta’s art communities. Local artists will co-teach with me to integrate artistic practice and AI.

    The course builds on my 2018 class, Art and Geometry, which I co-taught with local artists. That class explored Picasso’s cubism, which depicted reality as fractured from multiple perspectives; it also looked at Einstein’s relativity, the idea that time and space are not absolute and distinct but part of the same fabric.

    What does the course explore?

    We begin by exploring the first mathematical model of a neuron, the perceptron. Then, we study the Hopfield network, which mimics how our brain can remember a song from hearing just a few notes, filling in the rest. Next, we look at Hinton’s Boltzmann Machine, a generative model that can also imagine and create new, similar songs. Finally, we study today’s deep neural networks and transformers, AI models that mimic how the brain learns to recognize images, speech or text. Transformers are especially well suited for understanding sentences and conversations, and they power technologies such as ChatGPT.
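    To make the perceptron concrete, here is a minimal Python sketch, not taken from the course materials: the data, learning rate and function names are invented for illustration. It learns the logical AND function by summing weighted inputs, applying a step function and nudging the weights whenever a prediction is wrong.

```python
# Minimal perceptron sketch (illustrative only; not course code).
# Learns the logical AND function from four labeled examples.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    weights = [0.0, 0.0]   # one weight per input
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            prediction = step(weights[0] * x1 + weights[1] * x2 + bias)
            error = target - prediction
            # Nudge the weights toward the correct answer when the prediction is wrong.
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # AND truth table
w, b = train_perceptron(samples, labels)
print(w, b)  # small positive weights and a negative bias, enough to separate AND
```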

    In addition to AI, we integrate artistic practice into the coursework. This approach broadens students’ perspectives on science and engineering through the lens of an artist. The first offering of the course in spring 2025 was co-taught with Mark Leibert, an artist and professor of the practice at Georgia Tech. His expertise is in art, AI and digital technologies. He taught students fundamentals of various artistic media, including charcoal drawing and oil painting. Students used these principles to create art using AI ethically and creatively. They critically examined the source of training data and ensured that their work respects authorship and originality.

    Students also learn to record brain activity using electroencephalography – EEG – headsets. Through AI models, they then learn to transform neural signals into music, images and storytelling. This work inspired performances where dancers improvised in response to AI-generated music.
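    In the course, trained AI models perform the transformation from neural signals to music. As a rough analogy only, the toy sketch below skips the AI entirely and maps the dominant frequency of a synthetic, EEG-like signal onto a pentatonic note; the signal, sampling rate and scale are all invented for illustration.

```python
# Toy sonification sketch (not the course's AI pipeline): map the dominant
# frequency of a synthetic, EEG-like signal to a note in a pentatonic scale.
# Real EEG work needs proper hardware, filtering and trained models.
import numpy as np

fs = 256                          # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)     # two seconds of signal
# Synthetic signal: a 10 Hz "alpha-like" rhythm plus noise (a stand-in for EEG).
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Find the dominant frequency with a plain FFT.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
dominant_hz = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

# Map the dominant frequency onto a pentatonic scale (MIDI note numbers).
pentatonic = [60, 62, 64, 67, 69]                   # C, D, E, G, A
note = pentatonic[int(dominant_hz) % len(pentatonic)]
print(f"dominant frequency {dominant_hz:.1f} Hz -> MIDI note {note}")
```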

    The Improv AI performance at Georgia Institute of Technology on April 15, 2025. Dancers improvised to music generated by AI from brain waves and sonified black hole data.

    Why is this course relevant now?

    AI entered our lives so rapidly that many people don’t fully grasp how it works, why it works, when it fails or what its mission is.

    In creating this course, my aim was to empower students by filling that gap. Whether they are new to AI or not, the goal is to make its inner algorithms clear, approachable and honest. We focus on what these tools actually do and how they can go wrong.

    We place students and their creativity first. We reject the illusion of a perfect machine; instead, we deliberately provoke the AI models to become confused and to hallucinate, that is, to generate inaccurate or nonsensical responses. To do so, we use a small dataset, reduce the model size or limit training. It’s in these flawed states of AI that students step in as conscious co-creators. The students are the missing algorithm that takes back control of the creative process. Their creations do not obey the AI but reimagine it by the human hand. The artwork is rescued from automation.
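    The course works with real generative models; as a loose stand-in, the toy sketch below trains a word-level Markov chain on a deliberately tiny corpus. With so little data, its output quickly becomes repetitive or nonsensical, the kind of flawed state described above. The corpus, function names and seed are invented for illustration.

```python
# Toy illustration (not the course's models): a word-level Markov chain
# trained on a deliberately tiny corpus, so its output drifts into nonsense.
import random
from collections import defaultdict

corpus = (
    "the dancer follows the music "
    "the music follows the brain "
    "the brain imagines the dancer"
).split()

# Build a next-word table mapping each word to the words that follow it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start="the", length=12, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:          # dead end: the tiny corpus has no continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

# Plausible-sounding word salad rather than meaningful text.
print(generate())
```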

    What’s a critical lesson from the course?

    Students learn to recognize AI’s limitations and harness its failures to reclaim creative authorship. The artwork isn’t generated by AI; it’s reimagined by students.

    Students learn that chatbot queries have an environmental cost because large AI models use a lot of power. They avoid unnecessary iterations when designing prompts or using AI, which helps reduce carbon emissions.

    The Improv AI performance on April 15, 2025, featured dancer Bekah Crosby responding to AI-generated music from brain waves.

    The course prepares students to think like artists. Through abstraction and imagination, they gain the confidence to tackle the engineering challenges of the 21st century, including protecting the environment, building resilient cities and improving health.

    Students also realize that while AI has vast engineering and scientific applications, ethical implementation is crucial. Understanding the type and quality of training data that AI uses is essential. Without it, AI systems risk producing biased or flawed predictions.

    Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
