Category: Data and research

  • Some social emotional lessons improve how kids do at school, Yale study finds

    Social emotional learning — lessons in soft skills like listening to people you disagree with or calming yourself down before a test — has become a flashpoint in the culture wars. 

    The conservative political group Moms for Liberty opposes SEL, as it is often abbreviated, telling parents that its “goal is to psychologically manipulate students to accept the progressive ideology that supports gender fluidity, sexual preference exploration, and systemic oppression.” Critics say that parents should discuss social and emotional matters at home and that schools should stick to academics. Meanwhile, some advocates on the left say standard SEL classes don’t go far enough and should include such topics as social justice and anti-racism training. 

    While the political battle rages on, academic researchers are marshaling evidence for what high-quality SEL programs actually deliver for students. The latest study, by researchers at Yale University, summarizes 12 years of evidence, from 2008 to 2020. It finds that 30 different SEL programs, which underwent 40 rigorous evaluations involving almost 34,000 students, tended to produce “moderate” academic benefits.

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    The meta-analysis, published online Oct. 8 in the peer-reviewed journal Review of Educational Research, calculated that the grades and test scores of students in SEL classes improved by about 4 percentile points, on average, compared with students who didn’t receive soft-skill instruction. That’s the equivalent of moving from the 50th percentile (in the middle) to the 54th percentile (slightly above average). Reading gains were larger (more than 6 percentile points) than math gains (less than 4 percentile points). Longer-duration SEL programs, extending more than four months, produced double the academic gains — more than 8 percentile points. 

    “Social emotional learning interventions are not designed, most of the time, to explicitly improve academic achievement,” said Christina Cipriano, one of the study’s four authors and an associate professor at Yale Medical School’s Child Study Center. “And yet we demonstrated, through our meta-analytic report, that explicit social emotional learning improved academic achievement and it improved both GPA and test scores.”

    Cipriano also directs the Education Collaboratory at Yale, whose mission is to “advance the science of learning and social and emotional development.”

    The academic boost from SEL in this 2025 paper is much smaller than the 11 percentile points documented in an earlier 2011 meta-analysis that summarized research through 2007, when SEL had not yet gained widespread popularity in schools. That has since changed. More than 80 percent of principals of K-12 schools said their schools used an SEL curriculum during the 2023-24 school year, according to a survey by the Collaborative for Academic, Social, and Emotional Learning (CASEL) and the RAND Corporation. 

    Related: A research update on social-emotional learning in schools

    The Yale researchers studied only a small subset of the SEL market: programs that subjected themselves to a rigorous evaluation and included academic outcomes. Three-quarters of the 40 studies were randomized controlled trials, similar to pharmaceutical trials, in which schools or teachers were randomly assigned to teach an SEL curriculum. The remaining studies, in which schools or teachers volunteered to participate, still had control groups, so that researchers could compare SEL students’ academic gains with those of students who did not receive SEL instruction. 

    The SEL programs in the Yale study taught a wide range of soft skills, from mindfulness and anger management to resolving conflicts and setting goals. It is unclear which soft skills are driving the academic gains. That’s an area for future research.

    “Developmentally, when we think about what we know about how kids learn, emotional regulation is really the driver,” said Cipriano. “No matter how good that curriculum or that math program or reading curriculum is, if a child is feeling unsafe or anxious or stressed out or frustrated or embarrassed, they’re not available to receive the instruction, however great that teacher might be.”

    Cipriano said that effective programs give students tools to cope with stressful situations. She offered the example of a pop quiz, from the perspective of a student. “You can recognize, I’m feeling nervous, my blood is rushing to my hands or my face, and I can use my strategies of counting to 10, thinking about what I know, and use positive self talk to be able to regulate, to be able to take my test,” she said.

    Related: A cheaper, quicker approach to social-emotional learning?

    The strongest evidence for SEL is in elementary school, where the majority of evaluations have been conducted (two-thirds of the 40 studies). For young students, SEL lessons tend to be short but frequent, for example, 10 minutes a day. There’s less evidence for middle and high school SEL programs because they haven’t been studied as much. Typically, preteens and teens have less frequent but longer sessions, a half hour or even 90 minutes, weekly or monthly. 

    Cipriano said that schools don’t need to spend “hours and hours” on social and emotional instruction in order to see academic benefits. A current trend is to embed social and emotional learning within academic instruction, as part of math class, for example. But none of the underlying studies in this paper evaluated whether that is a more effective way to deliver SEL. All of the programs in this study were stand-alone SEL lessons. 

    Advice to schools

    Schools are inundated by sales pitches from SEL vendors. Estimates of the market’s size vary widely, but a half dozen market research firms put it above $2 billion annually. Not all SEL programs are necessarily effective, and not all can be expected to produce the academic gains that the Yale team calculated. 

    Cipriano advises schools not to be taken in by slick marketing. Many of the effective programs have no marketing at all, and some are free. Unfortunately, some of these programs have been discontinued or transformed through ownership changes. But she says school leaders can ask which specific skills an SEL program claims to foster, whether those skills will help the district achieve its goals, such as improving school climate, and whether the program has been externally evaluated. 

    “Districts invest in things all the time that are flashy and pretty, across content areas, not just SEL,” said Cipriano. “It may never have had an external evaluation, but has a really great social media presence and really great marketing.” 

    Cipriano has also built a new website, improvingstudentoutcomes.org, to track the latest research on SEL effectiveness and to help schools identify proven programs.

    Cipriano says parents should be asking questions too. “Parents should be partners in learning,” she said. “I have four kids, and I want to know what they’re learning about in school.”

    This meta-analysis probably won’t stop the SEL critics who say that these programs force educators to be therapists. Groups like Moms for Liberty, which holds its national summit this week, say teachers should stick to academics. This paper challenges that dichotomy, suggesting that emotions, social interaction and academics are all interlinked. 

    Before criticizing all SEL programs, educators and parents need to consider the evidence.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about SEL benefits was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.

  • Higher Education must help shape how students learn, lead and build the skills employers want most

    For the first time in more than a decade, confidence in the nation’s colleges and universities is rising. Forty-two percent of Americans now say they have “a great deal” or “quite a lot” of confidence in higher education, up from 36 percent last year.  

    It’s a welcome shift, but it’s certainly not time for institutions to take a victory lap. 

    For years, persistent concerns about rising tuition, student debt and an uncertain job market have led many to question whether college was still worth the cost. Headlines have routinely spotlighted graduates who are underemployed, overwhelmed or unsure how to translate their degrees into careers.  

    With the rapid rise of AI reshaping entry-level hiring, those doubts are only going to intensify. Politicians, pundits and anxious parents are already asking: Why aren’t students better prepared for the real world?  

    But the conversation is broken, and the framing is far too simplistic. The real question isn’t whether college prepares students for careers. It’s how. And the “how” is more complex, personal and misunderstood than most people realize.  

    Related: Interested in innovations in higher education? Subscribe to our free biweekly higher education newsletter. 

    What’s missing from this conversation is a clearer understanding of where career preparation actually happens. It’s not confined to the classroom or the career center. It unfolds in the everyday, often overlooked experiences that shape how students learn, lead and build confidence.  

    While earning a degree is important, it’s not enough. Students need a better map for navigating college. They need to know from day one that half the value of their experience will come from what they do outside the classroom.  

    To rebuild America’s trust, colleges must point beyond course catalogs and job placement rates. They need to understand how students actually spend their time in college. And they need to understand what those experiences teach them. 

    Ask someone thriving in their career which part of college most shaped their success, and their answer might surprise you. (I had this experience recently at a dinner with a dozen impressive philanthropic, tech and advocacy leaders.) You might expect them to name a major, a key class or an internship. But they’re more likely to mention running the student newspaper, leading a sorority, conducting undergraduate research, serving in student government or joining the debate team.  

    Such activities aren’t extracurriculars. They are career-curriculars. They’re the proving grounds where students build real-world skills, grow professional networks and gain confidence to navigate complexity. But most people don’t discuss these experiences until they’re asked about them.  

    Over time, institutions have created a false divide. The classroom is seen as the domain of learning, and career services is seen as the domain of workforce preparation. But this overlooks an important part of the undergraduate experience: everything in between.  

    The vast middle of campus life — clubs, competitions, mentorship, leadership roles, part-time jobs and collaborative projects — is where learning becomes doing. It’s where students take risks, test ideas and develop the communication, teamwork and problem-solving skills that employers need.  

    This oversight has made career services a stand-in for something much bigger. Career services should serve as an essential safety net for students who didn’t or couldn’t fully engage in campus life, but not as the launchpad we often imagine it to be. 

    Related: OPINION: College is worth it for most students, but its benefits are not equitable 

    We also need to confront a harder truth: Many students enter college assuming success after college is a given. Students are often told that going to college leads to success, but they are rarely told what that journey actually requires. They believe knowledge will be poured into them and that jobs will magically appear once the diploma is in hand. And for good reason: We’ve told them as much. 

    But college isn’t a vending machine. You can’t insert tuition and expect a job to roll out. Instead, it’s a platform, a laboratory and a proving ground. It requires students to extract value through effort, initiative and exploration, especially outside the classroom.  

    The credential matters, but it’s not the whole story. A degree can open doors, but it won’t define a career. It’s the skills students build, the relationships they form and the challenges they take on along the way to graduation that shape their future. 

    As more college leaders rightfully focus on the college-to-career transition, colleges must broadcast that while career services plays a helpful role, students themselves are the primary drivers of their future. But to be clear, colleges bear a grave responsibility here. It’s on us to reinforce the idea that learning occurs everywhere on campus, that the most powerful career preparation comes from doing, not just studying. It’s also on us to address college affordability, so that students have the time to participate in campus life, and to ensure that on-campus jobs are meaningful learning experiences.  

    Higher education can’t afford to let public confidence dip again. The value of college isn’t missing. We’re just not looking in the right place. 

    Bridget Burns is the founding CEO of the University Innovation Alliance (UIA), a nationally recognized consortium of 19 public research universities driving student success innovation for nearly 600,000 students. 

    Contact the opinion editor at [email protected]. 

    This story about college experiences was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter. 


  • Learning to debate is an important facet of education, but too often public school students are left out 

    Ever since I first stepped onto the debate stage, I have been passionate about speech and debate. For the last three of my high school years, I have competed and placed nationally at major tournaments in Dallas, Los Angeles, Chicago, Atlanta and Las Vegas, among many others. Debate demands an incredible amount of research, preparation and practice, but those aren’t the biggest challenges for me.  

    I attend a public high school in California that lacks a formal debate program or coach, which has forced me to choose between quitting an activity I love and competing independently without any school support.  

    I chose the latter. And that means I prepare alone in the dark, navigate complex registration processes and, most importantly, pay hefty fees. 

    As many of us know, debate is an effective way to strengthen students’ comprehension, critical thinking and presentation skills. Debate allows students to explore myriad topics, from biotechnology to nuclear proliferation, and find their unique passions and interests. 

    Yet for many students, a lack of school support is a major entry barrier. It has turned debate into another private-school-dominated space, where private-school students receive access to higher quality research and on-the-spot coaching on argument structure and prose, like a football coach adjusting strategy on the sidelines. Additionally, most prestigious tournaments in the U.S. prohibit non-school-affiliated debaters like me from competing altogether.  

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.  

    These circumstances effectively shut lower-income debaters out of success in the activity. That is why I believe all schools should incorporate speech and debate classes into their core curriculums. Existing history and English teachers could act as debate coaches, as they do in many private schools. School districts could even combine programs across high schools to save resources while expanding access, as Mountain View High School and Los Altos High School in California have done.  

    Over the past two decades, the debate community has engaged in efforts to democratize access to speech and debate through the creation of new formats (for example, public forum), local debate associations and urban debate leagues, among others.  

    However, many of these initiatives haven’t been successful. These newer formats, initially intended to lessen the research burden on debaters, have shifted toward emphasizing strict evidence standards and complex debate jargon. This shift has made debate less, not more, accessible, and led to more students from private schools — who were quickly able to out-prepare those from public schools — entering and dominating the competition.  

    Local debate associations and competitive leagues for neighboring schools have provided more students with opportunities to participate. Still, debate via these organizations is limited, as they don’t provide direct coaching to member schools or rigorous opportunities for students, and they prohibit certain students and programs from competing.  

    Similarly, urban debate leagues (for example, the Los Angeles Metropolitan Debate League) have been incredibly successful in expanding debate access to lower-income and minority students; however, these programs are concentrated in major metropolitan cities, face opposition from some school districts and rely on donor funding, which can be uncertain.  

    In my debate rounds, I have analyzed pressing social problems such as global warming and economic inequality through a policymaking lens; in some rounds I defended increased wealth taxes, and in others I argued against bans on fossil fuels. Without debate, I wouldn’t be so conscious of the issues in my community. Now, as I enter college, I’m looking forward to continuing debate and leveraging my skills to fight for change.  

    Related: High school students find common ground on the debate stage 

    Speaking of college: In the competition for admission to the most selective schools, extracurricular involvement can be a deciding factor, and debate is an excellent way to stand out, at least for students with proper support. 

    However, when students from rural and low-income communities lack access to the same opportunities as students from more metropolitan and higher-income communities, we risk exacerbating the educational achievement gap to our collective detriment.  

    In the meantime, debate tournaments should reduce entry barriers for nontraditional debaters and for students from public schools without coaches and extra support.  

    Without these initiatives, too many rural and low-income students will be excluded from an amazing activity, one that is especially important in today’s polarizing and divisive climate.  

    Aayush Gandhi is a student at Dublin High School. He is an avid writer and nationally ranked Lincoln-Douglas debater.  

    Contact the opinion editor at [email protected].  

    This story about debate programs was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.  


  • Education Department takes a preliminary step toward revamping its research and statistics arm

    In his first two months in office, President Donald Trump ordered the closing of the Education Department and fired half of its staff. The department’s research and statistics division, called the Institute of Education Sciences (IES), was particularly hard hit. About 90 percent of its staff lost their jobs and more than 100 federal contracts to conduct its primary activities were canceled.

    But now there are signs that the Trump administration is partially reversing course and wants the federal government to retain at least some role in generating education statistics and evidence for what works in classrooms. On Sept. 25, the department posted a notice in the Federal Register asking the public to submit feedback by Oct. 15 on reforming IES to make research more relevant to student learning. The department also asked for suggestions on how to collect data more efficiently.

    The timeline for revamping IES remains unclear, as is whether the administration will invest money into modernizing the agency. For example, it would take time and money to pilot new statistical techniques; in the meantime, statisticians would have to continue using current protocols.

    Still, the signs of rebuilding are adding up. 

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    At the end of May, the department announced that it had temporarily hired a researcher from the Thomas B. Fordham Institute, a conservative think tank, to recommend ways to reform education research and development. The researcher, Amber Northern, has been “listening” to suggestions from think tanks and research organizations, according to department spokeswoman Madi Biedermann, and now wants more public feedback.  

    Biedermann said that the Trump administration “absolutely” intends to retain a role in education research, even as it seeks to close the department. Closure will require congressional approval, which hasn’t happened yet. In the meantime, Biedermann said the department is looking across the government to find where its research and statistics activities “best fit.”

    Other IES activities also appear to be resuming. In June, the department disclosed in a legal filing that it plans to reinstate 20 of the 101 terminated contracts. Among the activities slated to restart are 10 Regional Education Laboratories that partner with school districts and states to generate and apply evidence. It remains unclear how all 20 contracts can be restarted without federal employees to hold competitive bidding processes and oversee them. 

    Earlier in September, the department posted eight new jobs to help administer the National Assessment of Educational Progress (NAEP), also called the Nation’s Report Card. These positions would be part of IES’s statistics division, the National Center for Education Statistics. Most of the work in developing and administering tests is handled by outside vendors, but federal employees are needed to award and oversee these contracts. After mass firings in March, employees at the board that oversees NAEP have been on loan to the Education Department to make sure the 2026 NAEP test is on schedule.

    Only a small staff remains at IES. Some education statistics have trickled out since Trump took office, including the agency’s first release of higher education data on Sept. 23. But the data releases have been late and incomplete.

    It is believed that no new grants have been issued for education studies since March, according to researchers who are familiar with the federal grant-making process but asked not to be identified for fear of retaliation. A big obstacle is that a contract to conduct peer review of research proposals was canceled, so new ideas cannot be properly vetted. The staff that remains is trying to make annual disbursements for older multiyear studies that haven’t been canceled. 

    Related: Chaos and confusion as the statistics arm of the Education Department is reduced to a skeletal staff of 3

    With all these changes, it’s becoming increasingly difficult to figure out the status of federally funded education research. One potential source of clarity is a new project launched by two researchers from George Washington University and Johns Hopkins University. Rob Olsen and Betsy Wolf, who was an IES researcher until March, are tracking cancellations and keeping a record of research results for policymakers. 

    If the project is successful, it will shed much-needed light on the chaos.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about reforming IES was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


  • The push to expand school choice should not diminish civic education

    From Texas to Florida to Arizona, school voucher policies are reshaping the landscape of American education. The Trump administration champions federal support for voucher expansion, and many state-level leaders are advancing school choice programs. Billions of public dollars are now flowing to private schools, church networks and microeducation platforms.  

    The push to expand school choice is not just reallocating public funds to private institutions. It is reorganizing the very purpose of schooling. And in that shift, something essential is being lost — the public mission of education as a foundation of democracy. 

    Civic education is becoming fragmented, underfunded and institutionally weak.  

    In this moment of sweeping change, as public dollars shift from common institutions to private and alternative schools, the shared civic entities that once supported democratic learning are being diminished or lost entirely — traditional structures like public schools, libraries and community colleges are no longer guaranteed common spaces. 

    The result is a disjointed system in which students may gain academic content or career preparation but receive little support in learning how to lead with integrity, think across differences or sustain democratic institutions. The very idea of public life is at risk, especially in places where shared experience has been replaced by polarization. We need civic education more than ever. 

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.  

    If we want students who can lead a multiracial democracy, we need schools of every type to take civic formation seriously. That includes religious schools, charter schools and homeschooling networks. The responsibility cannot fall on public schools alone. Civic formation is not an ideological project. It is a democratic one, involving the long-term work of building the skills, habits and values that prepare people to work across differences and take responsibility for shared democratic life. 

    What we need now is a civic education strategy that matches the scale of the changes reshaping American schooling. This will mean fostering coordinated investment, institutional partnerships and recognition that the stakes are not just academic but also democratic. 

    Americans overwhelmingly support civic instruction, yet schools struggle to deliver it. According to a 2020 survey of Texas teachers by the Center for Women in Politics and Public Policy and iCivics, just 49 percent statewide believed that enough time was being devoted to teaching civics knowledge, and just 23 percent said the same about participatory-democracy skills. This gap is not unique to Texas, but there is little agreement on how civics should be taught, and even less structural support for the schools trying to do it. 

    Without serious investment, civic formation will remain an afterthought — a patchwork effort disconnected from the design of most educational systems. 

    This is not an argument against vouchers in principle. Families should have options. But in the move to decentralize education, we risk hollowing out its civic core. A democratic society cannot survive on academic content alone. It requires citizens — not just in the legal sense, but in the civic one. 

    A democratic society needs people who can deliberate, organize, collaborate and build a shared future with others who do not think or live like they do. 

    And that’s why we are building a framework in Texas that others can adopt and adapt to their own civic mission. 

    The pioneering Democracy Schools model, to which I contribute, supports civic formation across a range of public and private schools, colleges, community organizations and professional networks.  

    Civic infrastructure is the term we use to describe our approach: the design of relationships, institutions and systems that hold democracy together. Just as engineers build physical infrastructure, educators and civic leaders must build civic infrastructure by working with communities, not for or on them. 

    We start from a democratic tradition rooted in the Black freedom struggle. Freedom, in this view, is not just protection from domination. It is the capacity to act, build and see oneself reflected in the world. This view of citizenship demands more than voice. It calls for the ability to shape institutions, policies and public narratives from the ground up. 

    Related: STUDENT VOICE: My generation knows less about civics than my parents’ generation did, yet we need it more than ever 

    The model speaks to a national crisis: the erosion of shared civic space in education. Civic formation must be practiced, and it must be supported by institutions that understand their role in building public life. Historically Black colleges and universities like Huston-Tillotson University offer a powerful example. They are not elite pipelines disconnected from everyday life. They are rooted in community, oriented toward public leadership and shaped by a history of democratic struggle. They show what it looks like to educate for civic capacity — not just for upward mobility. They remind us that education is not only about what students know, but about who they become and what kind of world they are prepared to help shape. 

    Our national future depends on how well we prepare young people to take responsibility for shared institutions and pluralistic public life. This cannot be accomplished through content standards alone. It requires civic ecosystems designed to cultivate public authorship. 

    We have an enormous stake in preparing the next generation for the demands of democratic life. What kind of society are we preparing young people to lead? The answer will not come from any single institution. It will come from partnerships across sectors, aligned in purpose even if diverse in approach. 

    We are eager to collaborate with any organization — public, private or faith-based — committed to building the civic infrastructure that sustains our democracy. Wherever education takes place, civic formation must remain a central concern. 

    Robert Ceresa is the founding director of the Politics Lab of the James L. Farmer House, Huston-Tillotson University. 

    Contact the opinion editor at [email protected].  

    This story about civic education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.


  • A researcher’s view on using AI to become a better writer

    A researcher’s view on using AI to become a better writer

    Writing can be hard, equal parts heavy lifting and drudgery. No wonder so many students are turning to the time-saving allure of ChatGPT, which can crank out entire papers in seconds. It rescues them from procrastination jams and dreaded all-nighters, magically freeing up more time for other pursuits, like, say … doomscrolling.

    Of course, no one learns to be a better writer when someone else (or some AI bot) is doing the work for them. The question is whether chatbots can morph into decent writing teachers or coaches that students actually want to consult to improve their writing, and not just use for shortcuts.

    Maybe.

    Jennifer Meyer, an assistant professor at the University of Vienna in Austria, has been studying how AI bots can be used to improve student writing for several years. In an interview, she explained why she is cautious about the ability of AI to make us better writers and is still testing how to use the new technology effectively.

    All in the timing 

    Meyer says that just because ChatGPT is available 24/7 doesn’t mean students should consult it at the start of the writing process. Instead, Meyer believes that students would generally learn more if they wrote a first draft on their own. 

    That’s when AI could be most helpful, she thinks. With some prompting, a chatbot could provide immediate writing feedback targeted to each student’s needs. One student might need to practice writing shorter sentences. Another might be struggling with story structure and outlining. AI could theoretically meet an entire classroom’s individual needs faster than a human teacher. 

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    In Meyer’s experiments, she inserted AI only after the first draft was done, as part of the revision process. In a study published in 2024, she randomly assigned 200 German high school students to receive AI feedback after writing a draft of an essay in English. Their revised essays were stronger than those of 250 students who were also told to revise but didn’t get help from AI. 

    In surveys, those with AI feedback also said they felt more motivated to rewrite than those who didn’t get feedback. That motivation is critical. Often students aren’t in the mood to rewrite, and without revisions, students can’t become better writers.

    Meyer doesn’t consider her experiment proof that AI is a great writing teacher. She didn’t compare it with how student writing improved after human feedback. Her experiment compared only AI feedback with no feedback. 

    Most importantly, one dose of AI writing feedback wasn’t enough to elevate students’ writing skills. On a second, fresh essay topic, the students who had previously received AI feedback didn’t write any better than the students who hadn’t been helped by AI.

    Related: AI writing feedback ‘better than I thought,’ top researcher says

    It’s unclear how many rounds of AI feedback it would take to boost a student’s writing skills more permanently, not just help revise the essay at hand. 

    And Meyer doesn’t know whether a student would want to keep discussing writing with an AI bot over and over again. Maybe students were willing to engage with it in this experiment because it was a novelty, but could soon tire of it. That’s next on Meyer’s research agenda.

    A viral MIT study

    A much smaller MIT study published earlier this year echoes Meyer’s theory. “Your Brain on ChatGPT” went viral because it seemed to say that using ChatGPT to help write an essay made students’ brains less engaged. Researchers found that students who wrote an essay without any online tools had stronger brain connectivity and activity than students who used AI or consulted Google to search for source materials. (Using Google while writing wasn’t nearly as bad for the brain as AI.) 

    Although those results made headlines, there was more to the experiment. The students who initially wrote an essay on their own were later given ChatGPT to help improve their essays. That switch to ChatGPT boosted brain activity, in contrast to what the neuroscientists found during the initial writing process. 

    Related: University students offload critical thinking, other hard work to AI

    These studies add to the evidence that delaying AI a bit, after some initial thinking and drafting, could be a sweet spot in learning. That’s something researchers need to test more. 

    Still, Meyer remains concerned about giving AI tools to very weak writers and to young children who haven’t developed basic writing skills. “This could be a real problem,” said Meyer. “It could be detrimental to use these tools too early.”

    Cheating your way to learning?

    Meyer doesn’t think it’s always a bad idea for students to ask ChatGPT to do the writing for them. 

    Just as young artists learn to paint by copying masterpieces in museums, students might learn to write better by copying good writing. (The late great New Yorker editor John Bennet taught Jill to write this way. He called it “copy work” and he encouraged his journalism students to do it every week by copying longhand the words of legendary writers, not AI.)

    Meyer suggests that students ask ChatGPT to write a sample essay that meets their teacher’s assignment and grading criteria. The next step is key. If students pretend it’s their own piece and submit it, that’s cheating. They’ve also offloaded cognitive work to technology and haven’t learned anything.

    Related: AI essay grading is already as ‘good as an overburdened’ teacher, but researchers say it needs more work

    But the AI essay can be an effective teaching tool, in theory, if students study the arguments, organizational structure, sentence construction and vocabulary before writing a new draft in their own words. Ideally, the next assignment should be better if students have learned through that analysis and internalized the style and techniques of the model essay, Meyer said. 

    “My hypothesis would be as long as there’s cognitive effort with it, as long as there’s a lot of time on task and like critical thinking about the output, then it should be fine,” said Meyer.

    Reconsidering praise

    Everyone likes a compliment. But too much praise can drown learning just as too much water can keep flowers from blooming.  

    ChatGPT has a tendency to pour the praise on thick and often begins with banal flattery, like “Great job!” even when a student’s writing needs a lot of work. In Meyer’s test of whether AI feedback can improve students’ writing, she intentionally told ChatGPT not to start with praise and instead go straight to constructive criticism.

    Her parsimonious approach to praise was inspired by a 2023 writing study about what motivates students to revise. The study found that when teachers started off with general praise, students were left with the false impression that their work was already good enough, so they didn’t put in the extra effort to rewrite.

    Related: Asian American students lose more points in an AI essay grading study — but researchers don’t know why

    In Meyer’s experiment, the praise-free feedback was effective in getting students to revise and improve their essays. But she didn’t set up a direct competition between the two approaches — praise-free vs. praise-full — so we don’t know for sure which is more effective when students are interacting with AI.

    Being stingy with praise rubs real teachers the wrong way. After Meyer removed praise from the feedback, teachers told her they wanted to restore it. “They wondered about why the feedback was so negative,” Meyer said. “That’s not how they would do it.”

    Meyer and other researchers may one day solve the puzzle of how to turn AI chatbots into great writing coaches. But whether students will have the willpower or desire to forgo an instantly written essay is another matter. As long as ChatGPT continues to allow students to take the easy way out, it’s human nature to do so. 

    Shirley Liu is a graduate student in education at Northwestern University. Liu reported and wrote this story along with The Hechinger Report’s Jill Barshay.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about using AI to become a better writer was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


  • If we are going to build AI literacy into every level of learning, we must be able to measure it

    If we are going to build AI literacy into every level of learning, we must be able to measure it

    Everywhere you look, someone is telling students and workers to “learn AI.” 

    It’s become the go-to advice for staying employable, relevant and prepared for the future. But here’s the problem: While definitions of artificial intelligence literacy are starting to emerge, we still lack a consistent, measurable framework to know whether someone is truly ready to use AI effectively and responsibly. 

    And that is becoming a serious issue for education and workforce systems already being reshaped by AI. Schools and colleges are redesigning their entire curriculums. Companies are rewriting job descriptions. States are launching AI-focused initiatives.  

    Yet we’re missing a foundational step: agreeing not only on what we mean by AI literacy, but on how we assess it in practice. 

    Two major recent developments underscore why this step matters, and why it is important that we find a way to take it before urging students to use AI. First, the U.S. Department of Education released its proposed priorities for advancing AI in education, guidance that will ultimately shape how federal grants will support K-12 and higher education. For the first time, we now have a proposed federal definition of AI literacy: the technical knowledge, durable skills and future-ready attitudes required to thrive in a world influenced by AI. Such literacy will enable learners to engage and create with, manage and design AI, while critically evaluating its benefits, risks and implications. 

    Second, we now have the White House’s American AI Action Plan, a broader national strategy aimed at strengthening the country’s leadership in artificial intelligence. Education and workforce development are central to the plan. 

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education. 

    What both efforts share is a recognition that AI is not just a technological shift, it’s a human one. In many ways, the most important AI literacy skills are not about AI itself, but about the human capacities needed to use AI wisely. 

    Sadly, the consequences of shallow AI education are already visible in workplaces. Some 55 percent of managers believe their employees are AI-proficient, while only 43 percent of employees share that confidence, according to the 2025 ETS Human Progress Report.  

    A similar perception gap likely exists between school administrators and teachers. The disconnect creates risks for organizations and reveals how assumptions about AI literacy can diverge sharply from reality. 

    But if we’re going to build AI literacy into every level of learning, we have to ask the harder question: How do we both determine when someone is truly AI literate and assess it in ways that are fair, useful and scalable? 

    AI literacy may be new, but we don’t have to start from scratch to measure it. We’ve tackled challenges like this before, moving beyond check-the-box tests in digital literacy to capture deeper, real-world skills. Building on those lessons will help define and measure this next evolution of 21st-century skills. 

    Right now, we often treat AI literacy as a binary: You either “have it” or you don’t. But real AI literacy and readiness are more nuanced. They include understanding how AI works, being able to use it effectively in real-world settings and knowing when to trust it. They include writing effective prompts, spotting bias, asking hard questions and applying judgment. 

    This isn’t just about teaching coding or issuing a certificate. It’s about making sure that students, educators and workers can collaborate in and navigate a world in which AI is increasingly involved in how we learn, hire, communicate and make decisions.  

    Without a way to measure AI literacy, we can’t identify who needs support. We can’t track progress. And we risk letting a new kind of unfairness take root, in which some communities build real capacity with AI and others are left with shallow exposure and no feedback. 

    Related: To employers, AI skills aren’t just for tech majors anymore 

    What can education leaders do right now to address this issue? I have a few ideas.  

    First, we need a working definition of AI literacy that goes beyond tool usage. The Department of Education’s proposed definition is a good start, combining technical fluency, applied reasoning and ethical awareness.  

    Second, assessments of AI literacy should be integrated into curriculum design. Schools and colleges incorporating AI into coursework need clear definitions of proficiency. TeachAI’s AI Literacy Framework for Primary and Secondary Education is a great resource. 

    Third, AI proficiency must be defined and measured consistently, or we risk a mismatched state of literacy. Without consistent measurements and standards, one district may see AI literacy as just using ChatGPT, while another defines it far more broadly, leaving students unevenly ready for the next generation of jobs. 

    To prepare for an AI-driven future, defining and measuring AI literacy must be a priority. Every student will be graduating into a world in which AI literacy is essential. Human resources leaders confirmed in the 2025 ETS Human Progress Report that the No. 1 skill employers are demanding today is AI literacy. Without measurement, we risk building the future on assumptions, not readiness.  

    And that’s too shaky a foundation for the stakes ahead. 

    Amit Sevak is CEO of ETS, the largest private educational assessment organization in the world. 

    Contact the opinion editor at [email protected]. 

    This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter. 


  • A gender gap in STEM widened during the pandemic. Schools are trying to make up lost ground

    A gender gap in STEM widened during the pandemic. Schools are trying to make up lost ground

    IRVING, Texas — Crowded around a workshop table, four girls at de Zavala Middle School puzzled over a Lego machine they had built. As they flashed a purple card in front of a light sensor, nothing happened. 

    The teacher at the Dallas-area school had emphasized that in the building process, there is no such thing as a mistake. Only iterations. So the girls dug back into the box of blocks and pulled out an orange card. They held it over the sensor and the machine kicked into motion. 

    “Oh! Oh, it reacts differently to different colors,” said sixth grader Sofia Cruz.

    In de Zavala’s first year as a choice school focused on science, technology, engineering and math, the school recruited a sixth grade class that’s half girls. School leaders are hoping the girls will stick with STEM fields. In de Zavala’s higher grades — whose students joined before it was a STEM school — some elective STEM classes have just one girl enrolled. 

    Efforts to close the gap between boys and girls in STEM classes are picking up after losing steam nationwide during the chaos of the Covid pandemic. Schools have extensive work ahead to make up for the ground girls lost, in both interest and performance.

    In the years leading up to the pandemic, the gender gap nearly closed. But within a few years, girls lost all the ground they had gained in math test scores over the previous decade, according to an Associated Press analysis. While boys’ scores also suffered during Covid, they have recovered faster than girls’ scores, widening the gender gap.

    As learning went online, special programs to engage girls lapsed — and schools were slow to restart them. Zoom school also emphasized rote learning, a technique based on repetition that some experts believe may favor boys, instead of teaching students to solve problems in different ways, which may benefit girls. 

    Old practices and biases likely reemerged during the pandemic, said Michelle Stie, a vice president at the National Math and Science Initiative.

    “Let’s just call it what it is,” Stie said. “When society is disrupted, you fall back into bad patterns.”

    Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.

    In most school districts in the 2008-09 school year, boys had higher average math scores on standardized tests than girls, according to AP’s analysis, which looked at scores across 15 years in over 5,000 school districts. It was based on average test scores for third through eighth graders in 33 states, compiled by the Educational Opportunity Project at Stanford University. 

    A decade later, girls had not only caught up, they were ahead: Slightly more than half of districts had higher math averages for girls.

    Within a few years of the pandemic, the parity disappeared. In 2023-24, boys on average outscored girls in math in nearly 9 out of 10 districts.

    A separate study by NWEA, an education research company, found gaps between boys and girls in science and math on national assessments went from being practically non-existent in 2019 to favoring boys around 2022.

    Studies have indicated girls reported higher levels of anxiety and depression during the pandemic, plus more caretaking burdens than boys, but the dip in academic performance did not appear outside STEM. Girls outperformed boys in reading in nearly every district nationwide before the pandemic and continued to do so afterward.

    “It wasn’t something like Covid happened and girls just fell apart,” said Megan Kuhfeld, one of the authors of the NWEA study. 

    Related: These districts are bucking the national math slump 

    In the years leading up to the pandemic, teaching practices shifted to deemphasize speed, competition and rote memorization. Through new curriculum standards, schools moved toward research-backed methods that emphasized how to think flexibly to solve problems and how to tackle numeric problems conceptually.

    Educators also promoted participation in STEM subjects and programs that boosted girls’ confidence, including extracurriculars that emphasized hands-on learning and connected abstract concepts to real-life applications. 

    When STEM courses had large male enrollment, Superintendent Kenny Rodrequez noticed girls losing interest as boys dominated classroom discussions at his schools in Grandview C-4 District outside Kansas City. Girls were significantly more engaged after the district moved some of its introductory hands-on STEM curriculum to the lower grade levels and balanced classes by gender, he said.

    When schools closed for the pandemic, the district had to focus on making remote learning work. When in-person classes resumed, some of the teachers had left, and new ones had to be trained in the curriculum, Rodrequez said. 

    “Whenever there’s crisis, we go back to what we knew,” Rodrequez said. 

    Related: One state tried algebra for all eighth graders. It hasn’t gone well

    Despite shifts in societal perceptions, a bias against girls persists in science and math subjects, according to teachers, administrators and advocates. It becomes a message girls can internalize about their own abilities, they say, even at a very young age. 

    In his third grade classroom in Washington, D.C., teacher Raphael Bonhomme starts the year with an exercise where students break down what makes up their identity. Rarely do the girls describe themselves as good at math. Already, some say they are “not a math person.” 

    “I’m like, you’re 8 years old,” he said. “What are you talking about, ‘I’m not a math person?’” 

    Girls also may have been more sensitive to changes in instructional methods spurred by the pandemic, said Janine Remillard, a math education professor at the University of Pennsylvania. Research has found girls tend to prefer learning things that are connected to real-life examples, while boys generally do better in a competitive environment. 

    “What teachers told me during Covid is the first thing to go were all of these sense-making processes,” she said. 

    Related: OPINION: Everyone can be a math person but first we have to make math instruction more inclusive 

    At de Zavala Middle School in Irving, the STEM program is part of a push that aims to build curiosity, resilience and problem-solving across subjects.

    Coming out of the pandemic, Irving schools had to make a renewed investment in training for teachers, said Erin O’Connor, a STEM and innovation specialist there.

    The district last year also piloted a new science curriculum from Lego Education. The lesson involving the machine at de Zavala, for example, had students learn about kinetic energy. Fifth graders learned about genetics by building dinosaurs and their offspring with Lego blocks, identifying shared traits. 

    “It is just rebuilding the culture of, we want to build critical thinkers and problem solvers,” O’Connor said.

    Teacher Tenisha Willis recently led second graders at Irving’s Townley Elementary School through building a machine that would push blocks into a container. She knelt next to three girls who were struggling.

    They tried to add a plank to the wheeled body of the machine, but the blocks didn’t move enough. One girl grew frustrated, but Willis was patient. She asked what else they could try, whether they could flip some parts around. The girls ran the machine again. This time, it worked.

    “Sometimes we can’t give up,” Willis said. “Sometimes we already have a solution. We just have to adjust it a little bit.” 

    Lurye reported from Philadelphia. Todd Feathers contributed reporting from New York. 

    The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.


  • Nation’s Report Card at risk, researchers say

    Nation’s Report Card at risk, researchers say

    This story was reported by and originally published by APM Reports in connection with its podcast Sold a Story: How Teaching Kids to Read Went So Wrong.

    When voters elected Donald Trump in November, most people who worked at the U.S. Department of Education weren’t scared for their jobs. They had been through a Trump presidency before, and they hadn’t seen big changes in their department then. They saw their work as essential, mandated by law, nonpartisan and, as a result, insulated from politics.

    Then, in early February, the Department of Government Efficiency showed up. Led at the time by billionaire CEO Elon Musk, and known by the cheeky acronym DOGE, it gutted the Department of Education’s Institute of Education Sciences, posting on X that the effort would ferret out “waste, fraud and abuse.”

    A post from the Department of Government Efficiency.

    When it was done, DOGE had cut approximately $900 million in research contracts and more than 90 percent of the institute’s workforce had been laid off. (The current value of the contracts was closer to $820 million, data compiled by APM Reports shows, and the actual savings to the government was substantially less, because in some cases large amounts of money had been spent already.)

    Among staff cast aside were those who worked on the National Assessment of Educational Progress — also known as the Nation’s Report Card — which is one of the few federal education initiatives the Trump administration says it sees as valuable and wants to preserve.

    The assessment is a series of tests administered nearly every year to a national sample of more than 10,000 students in grades 4, 8 and 12. The tests regularly measure what students across the country know in reading, math and other subjects. They allow the government to track how well America’s students are learning overall. Researchers can also combine the national data with the results of tests administered by states to draw comparisons between schools and districts in different states.

    The assessment is “something we absolutely need to keep,” Education Secretary Linda McMahon said at an education and technology summit in San Diego earlier this year. “If we don’t, states can be a little manipulative with their own results and their own testing. I think it’s a way that we keep everybody honest.”

    But researchers and former Department of Education employees say they worry that the test will become less and less reliable over time, because the deep cuts will cause its quality to slip — and some already see signs of trouble.

    “The main indication is that there just aren’t the staff,” said Sean Reardon, a Stanford University professor who uses the testing data to research gaps in learning between students of different income levels.

    All but one of the experts who make sure the questions in the assessment are fair and accurate — called psychometricians — have been laid off from the National Center for Education Statistics. These specialists play a key role in updating the test and making sure it accurately measures what students know.

    “These are extremely sophisticated assessments that required a team of researchers to make them as good as they are,” said Mark Seidenberg, a researcher known for his significant contributions to the science of reading. Seidenberg added that “a half-baked” assessment would undermine public confidence in the results, which he described as “essentially another way of killing” the assessment.

    The Department of Education defended its management of the assessment in an email: “Every member of the team is working toward the same goal of maintaining NAEP’s gold-standard status,” it read in part.

    The National Assessment Governing Board, which sets policies for the national test, said in a statement that it had temporarily assigned “five staff members who have appropriate technical expertise (in psychometrics, assessment operations, and statistics) and federal contract management experience” to work at the National Center for Education Statistics. No one from DOGE responded to a request for comment.

    Harvard education professor Andrew Ho, a former member of the governing board, said the remaining staff are capable, but he’s concerned that there aren’t enough of them to prevent errors.

    “In order to put a good product up, you need a certain number of person-hours, and a certain amount of continuity and experience doing exactly this kind of job, and that’s what we lost,” Ho said.

    The Trump administration has already delayed the release of some testing data following the cutbacks. The Department of Education had previously planned to announce the results of the tests for 8th grade science, 12th grade math and 12th grade reading this summer; now that won’t happen until September. The board voted earlier this year to eliminate more than a dozen tests over the next seven years, including fourth grade science in 2028 and U.S. history for 12th graders in 2030. The governing board has also asked Congress to postpone the 2028 tests to 2029, citing a desire to avoid releasing test results in an election year. 

    “Today’s actions reflect what assessments the Governing Board believes are most valuable to stakeholders and can be best assessed by NAEP at this time, given the imperative for cost efficiencies,” board chair and former North Carolina Gov. Bev Perdue said earlier this year in a press release.

    The National Assessment Governing Board canceled more than a dozen tests when it revised the schedule for the National Assessment of Educational Progress in April. This annotated version of the previous schedule, adopted in 2023, shows which tests were canceled. Topics shown in all caps were scheduled for a potential overhaul; those annotated with a red star are no longer scheduled for such a revision.

    Recent estimates peg the cost to keep the national assessment running at about $190 million per year, a fraction of the department’s 2025 budget of approximately $195 billion.

    Adam Gamoran, president of the William T. Grant Foundation, said multiple contracts with private firms — overseen by Department of Education staff with “substantial expertise” — are the backbone of the national test.

    “You need a staff,” said Gamoran, who was nominated last year to lead the Institute of Education Sciences. He was never confirmed by the Senate. “The fact that NCES now only has three employees indicates that they can’t possibly implement NAEP at a high level of quality, because they lack the in-house expertise to oversee that work. So that is deeply troubling.”

    The cutbacks were widespread — and far beyond what most former employees had expected under the new administration.

    “I don’t think any of us imagined this in our worst nightmares,” said a former Education Department employee, who spoke on condition of anonymity for fear of retaliation by the Trump administration. “We weren’t concerned about the utter destruction of this national resource of data.”

    “At what point does it break?” the former employee asked.

    Related: Suddenly sacked

    Every state has its own test for reading, math and other subjects. But state tests vary in difficulty and content, which makes it tricky to compare results in Minnesota with those in Mississippi or Montana.

    “They’re totally different tests with different scales,” Reardon said. “So NAEP is the Rosetta stone that lets them all be connected.”

    Reardon and his team at Stanford used statistical techniques to combine the federal assessment results with state test scores and other data sets to create the Educational Opportunity Project. The project, first released in 2016 and updated periodically in the years that followed, shows which schools and districts are getting the best results — especially for kids from poor families. Since the project’s release, Reardon said, the data has been downloaded 50,000 times and is used by researchers, teachers, parents, school boards and state education leaders to inform their decisions.

    For instance, the U.S. military used the data to measure school quality when weighing base closures, and superintendents used it to find demographically similar but higher-performing districts to learn from, Reardon said.

    If the quality of the data slips, those comparisons will be more difficult to make.

    “My worry is we just have less-good information on which to base educational decisions at the district, state and school level,” Reardon said. “We would be in the position of trying to improve the education system with no information. Sort of like, ‘Well, let’s hope this works. We won’t know, but it sounds like a good idea.’”

    Seidenberg, the reading researcher, said the national assessment “provided extraordinarily important, reliable information about how we’re doing in terms of teaching kids to read and how literacy is faring in the culture at large.”

    Producing a test without keeping the quality up, Seidenberg said, “would be almost as bad as not collecting the data at all.”

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.


  • Tutoring was supposed to save American kids after the pandemic. The results? ‘Sobering’

    Tutoring was supposed to save American kids after the pandemic. The results? ‘Sobering’

    Rigorous research rarely shows that any teaching approach produces large and consistent benefits for students. But tutoring seemed to be a rare exception. Before the pandemic, almost 100 studies pointed to impressive math or reading gains for students who were paired with a tutor at least three times a week and used a proven curriculum or set of lesson plans. 

    Some students gained an extra year’s worth of learning — far greater than the benefit of smaller classes, summer school or a fantastic teacher. These were rigorous randomized controlled trials, akin to the way that drugs or vaccines are tested, comparing test scores of tutored students against those who weren’t. The expense, sometimes surpassing $4,000 a year per student, seemed worth it for what researchers called high-dosage tutoring.

    On the strength of that evidence, the Biden administration urged schools to invest their pandemic recovery funds in intensive tutoring to help students catch up academically. Forty-six percent of public schools heeded that call, according to a 2024 federal survey, though it’s unclear exactly how much of the $190 billion in pandemic recovery funds has been spent on high-dosage tutoring and how many students received it. 

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    Even with ample money, schools immediately reported problems in ramping up high-quality tutoring for so many students. In 2024, researchers documented either tiny or no academic benefits from large-scale tutoring efforts in Nashville, Tennessee, and Washington, D.C.

    New evidence from the 2023-24 school year reinforces those results. Researchers are rigorously studying large-scale tutoring efforts around the nation and testing whether effective tutoring can be done more cheaply. A dozen researchers studied more than 20,000 students in Miami; Chicago; Atlanta; Winston-Salem and Greensboro, North Carolina; Greenville, South Carolina; schools throughout New Mexico; and a California charter school network. This was also a randomized controlled study, in which 9,000 students were randomly assigned to get tutoring and compared with 11,000 students who didn’t get that extra help.

    Their preliminary results were “sobering,” according to a June report by the University of Chicago Education Lab and MDRC, a research organization.

    The researchers found that tutoring during the 2023-24 school year produced only one or two months’ worth of extra learning in reading or math — a tiny fraction of what the pre-pandemic research had produced. Each minute of tutoring that students received appeared to be as effective as in the pre-pandemic research, but students weren’t getting enough minutes of tutoring altogether. “Overall we still see that the dosage students are getting falls far short of what would be needed to fully realize the promise of high-dosage tutoring,” the report said.

    Monica Bhatt, a researcher at the University of Chicago Education Lab and one of the report’s authors, said schools struggled to set up large tutoring programs. “The problem is the logistics of getting it delivered,” said Bhatt. Effective high-dosage tutoring involves big changes to bell schedules and classroom space, along with the challenge of hiring and training tutors. Educators need to make it a priority for it to happen, Bhatt said.

    Related: Students aren’t benefiting much from tutoring, one new study shows

    Some of the earlier, pre-pandemic tutoring studies involved large numbers of students, too, but those tutoring programs were carefully designed and implemented, often with researchers involved. In most cases, they were ideal setups. There was much greater variability in the quality of post-pandemic programs.

    “For those of us that run experiments, one of the deep sources of frustration is that what you end up with is not what you tested and wanted to see,” said Philip Oreopoulos, an economist at the University of Toronto, whose 2020 review of tutoring evidence influenced policymakers. Oreopoulos was also an author of the June report.

    “After you spend lots of people’s money and lots of time and effort, things don’t always go the way you hope. There’s a lot of fires to put out at the beginning or throughout because teachers or tutors aren’t doing what you want, or the hiring isn’t going well,” Oreopoulos said.

    Another reason for the lackluster results could be that schools offered a lot of extra help to everyone after the pandemic, even to students who didn’t receive tutoring. In the pre-pandemic research, students in the “business as usual” control group often received no extra help at all, making the difference between tutoring and no tutoring far more stark. After the pandemic, students — tutored and non-tutored alike — had extra math and reading periods, sometimes called “labs” for review and practice work. More than three-quarters of the 20,000 students in this June analysis had access to computer-assisted instruction in math or reading, possibly muting the effects of tutoring.

    Related: Tutoring may not significantly improve attendance

    The report did find that cheaper tutoring programs appeared to be just as effective (or ineffective) as the more expensive ones, an indication that the cheaper models are worth further testing. The cheaper models averaged $1,200 per student and had tutors working with eight students at a time, similar to small group instruction, often combining online practice work with human attention. The more expensive models averaged $2,000 per student and had tutors working with three to four students at once. By contrast, many of the pre-pandemic tutoring programs involved smaller 1-to-1 or 2-to-1 student-to-tutor ratios.

    Despite the disappointing results, researchers said that educators shouldn’t give up. “High-dosage tutoring is still a district or state’s best bet to improve student learning, given that the learning impact per minute of tutoring is largely robust,” the report concludes. The task now is to figure out how to improve implementation and increase the hours that students are receiving. “Our recommendation for the field is to focus on increasing dosage — and, thereby learning gains,” Bhatt said.

    That doesn’t mean schools need to invest more in tutoring and saturate classrooms with effective tutors. That isn’t realistic now that federal pandemic recovery funds have ended.

    Instead of tutoring for the masses, Bhatt said researchers are turning their attention to targeting a limited amount of tutoring to the right students. “We are focused on understanding which tutoring models work for which kinds of students.” 

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about tutoring effectiveness was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.
