Category: Proof Points

  • Do male teachers make a difference? Not as much as some think

    by Jill Barshay, The Hechinger Report
    November 17, 2025

    The teaching profession is one of the most female-dominated in the United States. Among elementary school teachers, 89 percent are women, and in kindergarten, that number is almost 97 percent.

    Many sociologists, writers and parents have questioned whether this imbalance hinders young boys at the start of their education. Are female teachers less understanding of boys’ need to horse around? Or would male role models inspire boys to learn their letters and times tables? Some advocates point to research that lays out why boys ought to do better with male teachers.

    But a new national analysis finds no evidence that boys perform or behave better with male teachers in elementary school. This challenges a widespread belief that boys thrive more when taught by men, and it raises questions about efforts, such as one in New York City, to spend extra to recruit them.

    “I was surprised,” said Paul Morgan, a professor at the University at Albany and a co-author of the study. “I’ve raised two boys, and my assumption would be that having male teachers is beneficial because boys tend to be more rambunctious, more active, a little less easy to direct in academic tasks.”

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    “We’re not saying gender matching doesn’t work,” Morgan added. “We’re saying we’re not observing it in K through fifth grade.”

    Middle and high school students might see more benefits. Earlier research is mixed and inconclusive. A 2007 analysis by Stanford professor Thomas Dee found academic benefits for eighth-grade boys and girls when taught by teachers of the same gender. And studies in which researchers observe and interview a small number of students often show that students feel more supported by same-gender teachers. Yet many quantitative studies, like this newest one, have failed to detect measurable benefits for boys. At least 10 since 2014 have found zero or minimal effects. Benefits for girls are more consistent.

    This latest study, “Fixed Effect Estimates of Teacher-Student Gender Matching During Elementary School,” is a working paper not yet published in a peer-reviewed journal.* Morgan and co-author Eric Hu, a research scientist at Albany, shared a draft with me.

    Morgan and Hu analyzed a U.S. Education Department dataset that followed a nationally representative group of 8,000 students from kindergarten in 2010 through fifth grade in 2017. Half were boys and half were girls. 

    More than two-thirds — 68 percent — of the 4,000 boys never had a male teacher in those years, while 32 percent had at least one. (The study focused only on main classroom teachers, not extras like gym or music.)

    Among the 1,300 boys who had both male and female teachers, the researchers compared each boy’s performance and behavior across those years. For instance, if Jacob had female teachers in kindergarten, first, second and fifth grades, but male teachers in third and fourth, the researchers compared his average scores and behavior under his male teachers with those under his female teachers.
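The within-student comparison described above can be sketched in a few lines of code. This is an illustrative toy example with invented scores for a hypothetical boy, not the study’s data or code, and the actual analysis relied on fixed-effect regression models rather than simple averages:

```python
# Invented data for one hypothetical student (not the study's numbers).
# Each tuple: (grade, had_male_teacher, test_score)
years = [
    ("K", False, 48), ("1", False, 52), ("2", False, 55),
    ("3", True, 54), ("4", True, 56), ("5", False, 60),
]

male_scores = [s for _, male, s in years if male]
female_scores = [s for _, male, s in years if not male]

# Within-student difference: average score under male teachers
# minus average score under female teachers.
gap = sum(male_scores) / len(male_scores) - sum(female_scores) / len(female_scores)
print(gap)  # 55.0 - 53.75 = 1.25
```

Averaging these within-student gaps across every boy who had both kinds of teachers is, roughly, how a “no difference” finding looks in the data: the gaps center on zero.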

    Related: Plenty of Black college students want to be teachers, but something keeps derailing them

    The researchers found no differences in reading, math or science achievement — or in behavioral and social measures. Teachers rated students on traits like impulsiveness, cooperation, anxiety, empathy and self-control. The children also took annual executive function tests. The results did not vary by the teacher’s gender.

    Most studies on male teachers focus on older students. The authors noted one other elementary-level study, in Florida, that also found no academic benefit for boys. This new research confirms that finding and adds that there seem to be no behavioral or social benefits either.

    For students at these young ages, 11 and under, the researchers also didn’t find academic benefits for girls with female teachers. But there were two non-academic ones: Girls taught by women showed stronger interpersonal skills (getting along, helping others, caring about feelings) and a greater eagerness to learn (represented by skills such as keeping organized and following rules).

    When the researchers combined race and gender, the results grew more complex. Black girls taught by women scored higher on an executive function test but lower in science. Asian boys taught by men scored higher on executive function but had lower ratings on interpersonal skills. Black boys showed no measurable differences when taught by male teachers. (Previous research has sometimes found benefits for Black students taught by Black teachers and sometimes hasn’t.)**

    Related: Bright black students taught by black teachers are more likely to get into gifted-and-talented classrooms

    Even if data show no academic or behavioral benefits for students, there may still be compelling reasons to diversify the teaching workforce, just as in other professions. But we shouldn’t expect these efforts to move the needle on student outcomes.

    “If you had scarce resources and were trying to place your bets,” Morgan said, “then based on this study, maybe elementary school isn’t where you should focus your recruitment efforts” to hire more men.

    To paraphrase Boyz II Men, it’s so hard to say goodbye — to the idea that young boys need male teachers.

    *Clarification: The article has not yet been published in a peer-reviewed journal but has undergone some peer review.

    **Correction: An earlier version incorrectly characterized how researchers analyzed what happened to students of different races. The researchers focused only on the gender of the teachers, but drilled down to see how students of different races responded to teachers of different genders. 

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about male teachers was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    This <a target="_blank" href="https://hechingerreport.org/proof-points-male-teachers-elementary-school/">article</a> first appeared on <a target="_blank" href="https://hechingerreport.org">The Hechinger Report</a> and is republished here under a <a target="_blank" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.

  • Advocates warn of risks to higher ed data if Education Department is shuttered

    by Jill Barshay, The Hechinger Report
    November 10, 2025

    Even with the government shut down, lots of people are thinking about how to reimagine federal education research. Public comments on how to reform the Institute of Education Sciences (IES), the Education Department’s research and statistics arm, were due on Oct. 15. A total of 434 suggestions were submitted, but no one can read them because the department isn’t allowed to post them publicly until the government reopens. (We know the number because the comment entry page has an automatic counter.)

    A complex numbers game 

    There’s broad agreement across the political spectrum that federal education statistics are essential. Even many critics of the Department of Education want its data collection efforts to survive — just somewhere else. Some have suggested moving the National Center for Education Statistics (NCES) to another agency, such as the Commerce Department, where the U.S. Census Bureau is housed.

    But Diane Cheng, vice president of policy at the Institute for Higher Education Policy, a nonprofit organization that advocates for increasing college access and improving graduation rates, warns that moving NCES risks degrading the quality and usefulness of higher education data. Any move would have to be done carefully, with planning for future interagency coordination, she said.

    “Many of the federal data collections combine data from different sources within ED,” Cheng said, referring to the Education Department. “It has worked well to have everyone within the same agency.”

    She points to the College Scorecard, the website that lets families compare colleges by cost, student loan debt, graduation rates, and post-college earnings. It merges several data sources, including the Integrated Postsecondary Education Data System (IPEDS), run by NCES, and the National Student Loan Data System, housed in the Office of Federal Student Aid. Several other higher ed data collections on student aid and students’ pathways through college also merge data collected at the statistical unit with student aid figures. Splitting those across different agencies could make such collaboration far more difficult.

    “If those data are split across multiple federal agencies,” Cheng said, “there would likely be more bureaucratic hurdles required to combine the data.”

    Information sharing across federal agencies is notoriously cumbersome, the very problem that led to the creation of the Department of Homeland Security after 9/11.

    Hiring and $4.5 million in fresh research grants

    Even as the Trump administration publicly insists it intends to shutter the Department of Education, it is quietly rebuilding small parts of it behind the scenes.

    In September, the department posted eight new jobs to replace fired staff who oversaw the National Assessment of Educational Progress (NAEP), the biennial test of American students’ achievement. In November, it advertised four more openings for statisticians inside the Federal Student Aid Office. Still, nothing is expected to be quick or smooth. The government shutdown stalled hiring for the NAEP jobs, and now a new Trump administration directive to form hiring committees by Nov. 17 to approve and fill open positions may further delay these hires.

    At the same time, the demolition continues. Less than two weeks after the Oct. 1 government shutdown, 466 additional Education Department employees were terminated — on top of the roughly 2,000 lost since March 2025 through firings and voluntary departures. (The department employed about 4,000 at the start of the Trump administration.) A federal judge temporarily blocked these latest layoffs on Oct. 15.

    Related: Education Department takes a preliminary step toward revamping its research and statistics arm

    There are also other small new signs of life. On Sept. 30 — just before the shutdown — the department quietly awarded nine new research and development grants totaling $4.5 million. The grants, listed on the department’s website, are part of a new initiative called the “From Seedlings to Scale Grants Program” (S2S), launched by the Biden administration in August 2024 to test whether the Defense Department’s DARPA-style innovation model could work in education. DARPA, the Defense Advanced Research Projects Agency, invests in new technologies for national security. Its most celebrated project became the basis for the internet.

    Each new project, mostly focused on AI-driven personalized learning, received $500,000 to produce early evidence of effectiveness. Recipients include universities, research organizations and ed tech firms. Projects that show promise could be eligible for future funding to scale up with more students.

    According to a person familiar with the program who spoke on background, the nine projects had been selected before President Donald Trump took office, but the formal awards were delayed amid the department’s upheaval. The Institute of Education Sciences — which lost roughly 90 percent of its staff — was one of the hardest hit divisions.

    Granted, $4.5 million is a rounding error compared with IES’s official annual budget of $800 million. Still, these are believed to be the first new federal education research grants of the Trump era and a faint signal that Washington may not be abandoning education innovation altogether.

    This story about risks to federal education data was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    This <a target="_blank" href="https://hechingerreport.org/proof-points-risks-higher-ed-data/">article</a> first appeared on <a target="_blank" href="https://hechingerreport.org">The Hechinger Report</a> and is republished here under a <a target="_blank" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.

  • What research says about Mamdani and Cuomo’s education proposals

    by Jill Barshay, The Hechinger Report
    November 3, 2025

    New York City, where I live, will elect a new mayor Tuesday, Nov. 4. The two front-runners — state lawmaker Zohran Mamdani, the Democratic nominee, and former Gov. Andrew Cuomo, running as an independent — have largely ignored the city’s biggest single budget item: education.

    One exception has been gifted education, which has generated a sharp debate between the two candidates. The controversy is over a tiny fraction of the student population. Only 18,000 students are in the city’s gifted and talented program out of more than 900,000 public school students. (Another 20,000 students attend the city’s elite exam-entrance high schools.) 

    But New Yorkers are understandably passionate about getting their kids into these “gated” classrooms, which have some of the best teachers in the city. Meanwhile, the racial composition of these separate (some say segregated) classes — disproportionately white and Asian — is shameful. Even many advocates of gifted education recognize that reform is needed. 

    Mamdani wants to end gifted programs for kindergarteners and wait until third grade to identify advanced students. Cuomo wants to expand gifted education and open up more seats for more children. 

    The primary justification for gifted programs is that some children learn so quickly that they need separate classrooms to progress at an accelerated pace. 

    But studies have found that students in gifted classrooms are not learning faster than their general education peers. And analyses of curricula show that many gifted classes don’t actually teach more advanced material; they simply group mostly white and Asian students together without raising academic rigor.

    In my reporting, I have found that researchers question whether we can accurately spot giftedness in 4- or 5-year-olds. My colleague Sarah Carr recently wrote about the many methods that have been used to try to identify young children with high potential, and how the science underpinning them is shaky. In addition, true giftedness is often domain-specific — a child might be advanced in math but not in reading, or vice versa — yet New York City’s system labels or excludes children globally rather than by subject. 

    Because of New York City’s size — it’s the nation’s largest public school system, bigger than the entire school systems of 30 states — what happens here matters.

    Policy implications

    • Delaying identification until later grades, when cognitive profiles are clearer, could improve accuracy in picking students. 
    • Reforming the curriculum to make sure that gifted classes are truly advanced would make it easier to justify having them. 
    • Educators could consider ways for children to accelerate in a single subject — perhaps by moving up a grade in math or English classes. 
    • How to desegregate these classrooms, and make their racial/ethnic composition less lopsided, remains elusive.

    I’ve covered these questions before in my earlier columns on gifted education.

    Size isn’t everything

    Another important issue in this election is class size. Under a 2022 state law, New York City must reduce class sizes to no more than 20 students in grades K-3 by 2028. (The cap will be 23 students per class in grades 4-8 and 25 students per class in high school.) To meet that mandate, the city will need to hire an estimated 18,000 new teachers.

    During the campaign, Mamdani said he would subsidize teacher training, offering tuition aid in exchange for a three-year commitment to teach in the city’s public schools. The idea isn’t unreasonable, but it’s modest — only $12 million a year, expected to produce about 1,000 additional teachers annually. That’s a small fraction of what’s needed.

    The bigger problem may be the law itself: Schools lack both physical space and enough qualified teachers. What parents want — small classes led by excellent, experienced educators — isn’t something the city can scale quickly. Hiring thousands of novices may not improve learning much, and it will make the job of school principals, who must make all these hires, even harder.

    For more on the research behind class-size reductions, see my earlier columns.

    This story about education issues in the New York City mayoral election was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    This <a target="_blank" href="https://hechingerreport.org/proof-points-nyc-mayor-election-education/">article</a> first appeared on <a target="_blank" href="https://hechingerreport.org">The Hechinger Report</a> and is republished here under a <a target="_blank" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.

  • Why one reading expert says ‘just-right’ books are all wrong

    by Jill Barshay, The Hechinger Report
    October 27, 2025

    Timothy Shanahan, a professor emeritus at the University of Illinois at Chicago, has spent his career evaluating education research and helping teachers figure out what works best in the classroom. A leader of the National Reading Panel, whose 2000 report helped shape what’s now known as the “science of reading,” Shanahan has long influenced literacy instruction in the United States. He also served on the National Institute for Literacy’s advisory board in both the George W. Bush and Barack Obama administrations.

    Shanahan is a scholar whom I regularly consult when I come across a reading study, and so I was eager to interview him about his new book, “Leveled Reading, Leveled Lives” (Harvard Education Press, September 2025). In it, Shanahan takes aim at one of the most common teaching practices in American classrooms: matching students with “just-right” books.

    He argues that the approach — where students read different texts depending on their assessed reading level — is holding many children back. Teachers spend too much time testing students and assigning leveled books, he says, instead of helping all students learn how to understand challenging texts.

    “American children are being prevented from doing better in reading by a longstanding commitment to a pedagogical theory that insists students are best taught with books they can already read,” Shanahan writes in his book. “Reading is so often taught in small groups — not so teachers can guide efforts to negotiate difficult books, but to ensure the books are easy enough that not much guidance is needed.”

    Comprehension, he says, doesn’t grow that way.

    The trouble with leveled reading

    Grouping students by ability and assigning easier or harder books — a practice known as leveled reading — remains deeply embedded in U.S. schools. A 2018 Thomas B. Fordham Institute survey found that 62 percent of upper elementary teachers and more than half of middle school teachers teach at students’ reading level rather than at grade level.  

    That may sound sensible, but Shanahan says it’s not helping anyone and is even leading teachers to dispense with reading altogether. “In social studies and science, and these days, even in English classes,” he said in an interview, “teachers either don’t assign any readings or they read the texts to the students.” Struggling readers aren’t being given the chance — or the tools — to tackle complex material on their own.

    Instead, Shanahan believes all students should read grade-level texts together, with teachers providing more support for those who need it.

    “What I’m recommending is instructional differentiation,” he said in our interview. “Everyone will have the same instructional goal — we’re all going to learn to read the fourth-grade text. I might teach a whole-class lesson and then let some kids move on to independent work while others get more help. Maybe the ones who didn’t get it, read the text again with my support. By the end, more students will have reached the learning goal — and tomorrow the whole class can take on another text.”

    27 different ways

    Shanahan’s approach doesn’t mean throwing kids into the deep end without help. His book outlines a toolbox of strategies for tackling difficult texts, such as looking up unfamiliar vocabulary, rereading confusing passages, or breaking down long sentences. “You can tip over into successful reading 27 different ways,” he said, and he hopes future researchers discover many more. 

    He is skeptical of drilling students on skills like identifying the main idea or making inferences. “We’ve treated test questions as the skill,” he said. “That doesn’t work.”

    There is widespread frustration over the deterioration of American reading achievement, especially among middle schoolers. (Thirty-nine percent of eighth graders cannot reach the lowest of three achievement levels, called “basic,” on the National Assessment of Educational Progress.) But there is little agreement among reading advocates on how to fix the problem. Some argue that what children primarily need is more background knowledge to grasp unfamiliar ideas in a new reading passage, but Shanahan argues that background knowledge alone won’t be sufficient, or as powerful as explicit comprehension instruction. Other reading experts agree. Nonie Lesaux, dean of the Harvard Graduate School of Education, whose own scholarship focuses on literacy, endorsed Shanahan’s argument in an October 2025 online discussion of the new book.

    Shanahan is most persuasive in pointing out that there isn’t strong experimental evidence to show that reading achievement goes up more when students read a text at their individual level. By contrast, a 2024 analysis found that the most effective schools are those that keep instruction at grade level. Still, Shanahan acknowledges that more research is needed to pinpoint which comprehension strategies work best for which students and in which circumstances.

    Misunderstanding Vygotsky

    Teachers often cite the Russian psychologist Lev Vygotsky’s “zone of proximal development” to justify giving students books that are neither too easy nor too hard. But Shanahan says that’s a misunderstanding of Vygotsky’s work.

    Vygotsky believed teachers should guide students to learn challenging things they cannot yet do on their own, he said.

    He offers an analogy: a mother teaching her child to tie their shoes. At first, she demonstrates while narrating the steps aloud. Then the child does one step, and she finishes the rest. Over time, the mother gradually releases control and the child ties a bow on their own. “Leveled reading,” Shanahan said, “is like saying, ‘Why don’t we just get Velcro?’ This is about real teaching. ‘Boys and girls, you don’t know how to ride this bike yet, but I’m going to make sure you do by the time we’re done.’ ”

    Related: What happens to reading comprehension when students focus on the main idea

    Shanahan’s critique of reading instruction applies mainly from second grade onward, after children learn how to read and are focusing on understanding what they read. In kindergarten and first grade, when children are still learning phonics and how to decode the words on the page, the research evidence against small group instruction with different level texts isn’t as strong, he said. 

    Learning to read first — decoding — is important. Shanahan says there are rare exceptions to teaching all children at grade level.

    “If a fifth grader still can’t read,” Shanahan said, “I wouldn’t make that child read a fifth-grade text.” That child might need separate instruction from a reading specialist.

    Advanced readers, meanwhile, can be challenged in other ways, Shanahan suggests, through independent reading time, skipping ahead to higher-grade reading classes, or by exploring complex ideas within grade-level texts.

    The role of AI — and parents

    Artificial intelligence is increasingly being used to rewrite texts for different difficulty levels. Shanahan is skeptical of that approach. Simpler texts, whether written by humans or generated by AI, don’t teach students to improve their reading ability, he argues.

    Still, he’s intrigued by the idea of using AI to help students “climb the stairs” by instantly modifying a single text to a range of reading levels, say, to third-, fifth- and seventh-grade levels, and having students read them in quick succession. Whether that boosts comprehension is still unknown and needs to be studied.

    AI might be most helpful to teachers, Shanahan suspects, to help point to a sentence or a passage that tends to confuse students or trip them up. The teacher can then address those common difficulties in class. 

    Shanahan worries about what happens outside of school: Kids aren’t reading much at all.

    He urges parents to let children read whatever they enjoy — regardless of whether it’s above or below their level — but to set consistent expectations. “Nagging may not be effective,” he said. “But you can be specific: ‘After dinner Thursday, read the first chapter. When you’re done, we’ll talk about it, and then you can play a computer game or go on your phone.’ ”

    Too often, he says, parents back down when kids resist. “They are the kids. We are the adults,” Shanahan said. “We’re responsible. Let’s step up and do what’s right for them.”

    This story about reading levels was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    This <a target="_blank" href="https://hechingerreport.org/proof-points-shanahan-leveled-reading/">article</a> first appeared on <a target="_blank" href="https://hechingerreport.org">The Hechinger Report</a> and is republished here under a <a target="_blank" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.

  • Cellphone bans can help kids learn — but Black students are suspended more as schools make the shift

    Thirty states now limit or ban cellphone use in classrooms, and teachers are noticing children paying attention to their lessons again. But it’s not clear whether this policy — unpopular with students and a headache for teachers to enforce — makes an academic difference. 

    If student achievement goes up after a cellphone ban, it’s tough to know if the ban was the reason. Some other change in math or reading instruction might have caused the improvement. Or maybe the state assessment became easier to pass. Imagine if politicians required all students to wear striped shirts and test scores rose. Few would really think that stripes made kids smarter.

    Two researchers from the University of Rochester and RAND, a nonprofit research organization, figured out a clever way to tackle this question by taking advantage of cellphone activity data in one large school district in Florida; in 2023, Florida became the first state to institute school cellphone restrictions. The researchers compared schools that had high cellphone activity before the ban with those that had low cellphone usage, to see whether the ban made a bigger difference for the high-usage schools.

    Indeed, it did. 

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    Student test scores rose a bit more in high-usage schools two years after the ban, compared with schools that had lower cellphone usage to start. Students were also attending school more regularly.

    The policy also came with a troubling side effect. The cellphone bans led to a significant increase in student suspensions in the first year, especially among Black students. But disciplinary actions declined during the second year. 

    “Cellphone bans are not a silver bullet,” said David Figlio, an economist at the University of Rochester and one of the study’s co-authors. “But they seem to be helping kids. They’re attending school more, and they’re performing a bit better on tests.”

    Figlio said he was “worried” about the short-term 16 percent increase in suspensions for Black students. What’s unclear from this data analysis is whether Black students were more likely to violate the new cellphone rules, or whether teachers were more likely to single out Black students for punishment. It’s also unclear from these administrative behavior records if students were first given warnings or lighter punishments before they were suspended. 

    The data suggest that students adjusted to the new rules. A year later, student suspensions, including those of Black students, fell back to what they had been before the cellphone ban.

    “What we observe is a rocky start,” Figlio added. “There was a lot of discipline.”

    The study, “The Impact of Cellphone Bans in Schools on Student Outcomes: Evidence from Florida,” is a draft working paper and has not been peer-reviewed. It was slated to be circulated by the National Bureau of Economic Research on Oct. 20, and the authors shared a draft with me in advance. Figlio and his co-author, Umut Özek of RAND, believe it is the first study to show a causal connection between cellphone bans and learning, rather than just a correlation.

    The academic gains from the cellphone ban were small, less than a percentile point, on average. That’s the equivalent of moving from the 50th percentile on math and reading tests (in the middle) to the 51st percentile (still close to the middle), and this small gain did not emerge until the second year for most students. The academic benefits were strongest for middle schoolers, white students, Hispanic students and male students. The academic gains for Black students and female students were not statistically significant.  

    Related: Suspended for…what? 

    I was surprised to learn that there is data on student cellphone use in school. The authors of this study used information from Advan Research Corp., which collects and analyzes data from mobile phones around the world for business purposes, such as figuring out how many people visit a particular retail store. The researchers were able to obtain this data for schools in one Florida school district and estimate how many students were on their cellphones before and after the ban went into effect between the hours of 9 a.m. and 1 p.m.

    The data showed that more than 60 percent of middle schoolers, on average, were on their phones at least once during the school day before the 2023 ban in this particular Florida district, which was not named but was described as one of the 10 largest districts in the country. (Five of the nation’s 10 largest school districts are in Florida.) After the ban, that share fell by half, to 30 percent of middle schoolers in the first year, and to 25 percent in the second year.

    Elementary school students were less likely to be on cellphones to start with, and their in-school usage fell from about 25 percent of students before the ban to 15 percent after it. More than 45 percent of high schoolers were on their phones before the ban; that fell to about 10 percent afterward.

    Average daily smartphone visits in schools, by year and grade level

    Average daily smartphone visits during regular school days (relative to teacher workdays without students) between 9 a.m. and 1 p.m., per 100 enrolled students, in the two months before and after the 2023 ban took effect in one large urban Florida school district. Source: Figlio and Özek, October 2025 draft paper, figure 2C, p. 23.

    Florida did not enact a complete cellphone ban in 2023, but imposed severe restrictions. Those restrictions were tightened in 2025 and that additional tightening was not studied in this paper.

    Anti-cellphone policies have become increasingly popular since the pandemic, largely based on our collective adult gut hunches that kids are not learning well when they are consumed by TikTok and Snapchat.

    This is perhaps a rare case in public policy, Figlio said, where the “data back up the hunches.” 

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about cellphone bans was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.

  • Some social emotional lessons improve how kids do at school, Yale study finds

    Social emotional learning — lessons in soft skills like listening to people you disagree with or calming yourself down before a test — has become a flashpoint in the culture wars. 

    The conservative political group Moms for Liberty opposes SEL, as it is often abbreviated, telling parents that its “goal is to psychologically manipulate students to accept the progressive ideology that supports gender fluidity, sexual preference exploration, and systemic oppression.” Critics say that parents should discuss social and emotional matters at home and that schools should stick to academics. Meanwhile, some advocates on the left say standard SEL classes don’t go far enough and should include such topics as social justice and anti-racism training. 

    While the political battle rages on, academic researchers are marshaling evidence for what high-quality SEL programs actually deliver for students. The latest study, by researchers at Yale University, summarizes 12 years of evidence, from 2008 to 2020. It finds that 30 different SEL programs, tested in 40 rigorous evaluations involving almost 34,000 students, tended to produce “moderate” academic benefits.

    The meta-analysis, published online Oct. 8 in the peer-reviewed journal Review of Educational Research, calculated that the grades and test scores of students in SEL classes improved by about 4 percentile points, on average, compared with students who didn’t receive soft-skill instruction. That’s the equivalent of moving from the 50th percentile (in the middle) to the 54th percentile (slightly above average). Reading gains were larger (more than 6 percentile points) than math gains (less than 4 percentile points). Longer-duration SEL programs, extending more than four months, produced double the academic gains: more than 8 percentile points.

    “Social emotional learning interventions are not designed, most of the time, to explicitly improve academic achievement,” said Christina Cipriano, one of the study’s four authors and an associate professor at Yale Medical School’s Child Study Center. “And yet we demonstrated, through our meta-analytic report, that explicit social emotional learning improved academic achievement and it improved both GPA and test scores.”

    Cipriano also directs the Education Collaboratory at Yale, whose mission is to “advance the science of learning and social and emotional development.”

    The academic boost from SEL in this 2025 paper is much smaller than the 11 percentile points documented in an earlier 2011 meta-analysis that summarized research through 2007, when SEL had not yet gained widespread popularity in schools. That has since changed. More than 80 percent of principals of K-12 schools said their schools used an SEL curriculum during the 2023-24 school year, according to a survey by the Collaborative for Academic, Social, and Emotional Learning (CASEL) and the RAND Corporation. 

    Related: A research update on social-emotional learning in schools

    The Yale researchers studied only a small subset of the SEL market: programs that subjected themselves to a rigorous evaluation and included academic outcomes. Three-quarters of the 40 studies were randomized controlled trials, similar to pharmaceutical trials, in which schools or teachers were randomly assigned to teach an SEL curriculum. The remaining studies, in which schools or teachers volunteered to participate, still had control groups, so that researchers could compare the academic gains of students who received SEL instruction with those of students who did not.

    The SEL programs in the Yale study taught a wide range of soft skills, from mindfulness and anger management to resolving conflicts and setting goals. It is unclear which soft skills are driving the academic gains. That’s an area for future research.

    “Developmentally, when we think about what we know about how kids learn, emotional regulation is really the driver,” said Cipriano. “No matter how good that curriculum or that math program or reading curriculum is, if a child is feeling unsafe or anxious or stressed out or frustrated or embarrassed, they’re not available to receive the instruction, however great that teacher might be.”

    Cipriano said that effective programs give students tools to cope with stressful situations. She offered the example of a pop quiz, from the perspective of a student. “You can recognize, I’m feeling nervous, my blood is rushing to my hands or my face, and I can use my strategies of counting to 10, thinking about what I know, and use positive self talk to be able to regulate, to be able to take my test,” she said.

    Related: A cheaper, quicker approach to social-emotional learning?

    The strongest evidence for SEL is in elementary school, where the majority of evaluations have been conducted (two-thirds of the 40 studies). For young students, SEL lessons tend to be short but frequent, for example, 10 minutes a day. There’s less evidence for middle and high school SEL programs because they haven’t been studied as much. Typically, preteens and teens have less frequent but longer sessions, a half hour or even 90 minutes, weekly or monthly. 

    Cipriano said that schools don’t need to spend “hours and hours” on social and emotional instruction in order to see academic benefits. A current trend is to incorporate or embed social and emotional learning within academic instruction, as part of math class, for example. But none of the underlying studies in this paper evaluated whether this was a more effective way to deliver SEL. All of the programs in this study were separate stand-alone SEL lessons. 

    Advice to schools

    Schools are inundated by sales pitches from SEL vendors. Estimates of the market size vary widely, but a half dozen market research firms put it above $2 billion annually. Not all SEL programs are necessarily effective, and not all can be expected to produce the academic gains that the Yale team calculated.

    Cipriano advises schools not to be taken in by slick marketing. Many of the effective programs have no marketing at all and some are free. Unfortunately, some of these programs have been discontinued or have transformed through ownership changes. But she says school leaders can ask questions about which specific skills the SEL program claims to foster, whether those skills will help the district achieve its goals, such as improving school climate, and whether the program has been externally evaluated. 

    “Districts invest in things all the time that are flashy and pretty, across content areas, not just SEL,” said Cipriano. “It may never have had an external evaluation, but has a really great social media presence and really great marketing.” 

    Cipriano has also built a new website, improvingstudentoutcomes.org, to track the latest research on SEL effectiveness and to help schools identify proven programs.

    Cipriano says parents should be asking questions too. “Parents should be partners in learning,” said Cipriano. “I have four kids, and I want to know what they’re learning about in school.”

    This meta-analysis probably won’t stop the SEL critics who say that these programs force educators to be therapists. Moms for Liberty, which holds its national summit this week, and like-minded groups say teachers should stick to academics. This paper rejects that dichotomy, suggesting that emotions, social interaction and academics are all interlinked.

    Before criticizing all SEL programs, educators and parents need to consider the evidence.

    This story about SEL benefits was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

  • Education Department takes a preliminary step toward revamping its research and statistics arm

    In his first two months in office, President Donald Trump ordered the closing of the Education Department and fired half of its staff. The department’s research and statistics division, called the Institute of Education Sciences (IES), was particularly hard hit. About 90 percent of its staff lost their jobs and more than 100 federal contracts to conduct its primary activities were canceled.

    But now there are signs that the Trump administration is partially reversing course and wants the federal government to retain a role in generating education statistics and evidence for what works in classrooms — at least to some extent. On Sept. 25, the department posted a notice in the Federal Register asking the public to submit feedback by Oct. 15 on reforming IES to make research more relevant to student learning. The department also asked for suggestions on how to collect data more efficiently.

    The timeline for revamping IES remains unclear, as is whether the administration will invest money into modernizing the agency. For example, it would take time and money to pilot new statistical techniques; in the meantime, statisticians would have to continue using current protocols.

    Still, the signs of rebuilding are adding up. 

    At the end of May, the department announced that it had temporarily hired a researcher from the Thomas B. Fordham Institute, a conservative think tank, to recommend ways to reform education research and development. The researcher, Amber Northern, has been “listening” to suggestions from think tanks and research organizations, according to department spokeswoman Madi Biedermann, and now wants more public feedback.  

    Biedermann said that the Trump administration “absolutely” intends to retain a role in education research, even as it seeks to close the department. Closure will require congressional approval, which hasn’t happened yet. In the meantime, Biedermann said the department is looking across the government to find where its research and statistics activities “best fit.”

    Other IES activities also appear to be resuming. In June, the department disclosed in a legal filing that it had reinstated, or planned to reinstate, 20 of the 101 terminated contracts. Among the activities slated to restart are 10 Regional Education Laboratories that partner with school districts and states to generate and apply evidence. It remains unclear how all 20 contracts can be restarted without federal employees to hold competitive bidding processes and oversee them.

    Earlier in September, the department posted eight new jobs to help administer the National Assessment of Educational Progress (NAEP), also called the Nation’s Report Card. These positions would be part of IES’s statistics division, the National Center for Education Statistics. Most of the work in developing and administering tests is handled by outside vendors, but federal employees are needed to award and oversee these contracts. After mass firings in March, employees at the board that oversees NAEP have been on loan to the Education Department to make sure the 2026 NAEP test is on schedule.

    Only a small staff remains at IES. Some education statistics have trickled out since Trump took office, including its first release of higher education data on Sept. 23. But the data releases have been late and incomplete.

    It is believed that no new grants have been issued for education studies since March, according to researchers who are familiar with the federal grant making process but asked not to be identified for fear of retaliation. A big obstacle is that a contract to conduct peer review of research proposals was canceled so new ideas cannot be properly vetted. The staff that remains is trying to make annual disbursements for older multi-year studies that haven’t been canceled. 

    Related: Chaos and confusion as the statistics arm of the Education Department is reduced to a skeletal staff of 3

    With all these changes, it’s becoming increasingly difficult to figure out the status of federally funded education research. One potential source of clarity is a new project launched by two researchers from George Washington University and Johns Hopkins University. Rob Olsen and Betsy Wolf, who was an IES researcher until March, are tracking cancellations and keeping a record of research results for policymakers. 

    If the project succeeds, it will be a much-needed light through the chaos.

    This story about reforming IES was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

  • A researcher’s view on using AI to become a better writer

    Writing can be hard, equal parts heavy lifting and drudgery. No wonder so many students are turning to the time-saving allure of ChatGPT, which can crank out entire papers in seconds. It rescues them from procrastination jams and dreaded all-nighters, magically freeing up more time for other pursuits, like, say … doomscrolling.

    Of course, no one learns to be a better writer when someone else (or some AI bot) is doing the work for them. The question is whether chatbots can morph into decent writing teachers or coaches that students actually want to consult to improve their writing, and not just use for shortcuts.

    Maybe.

    Jennifer Meyer, an assistant professor at the University of Vienna in Austria, has been studying how AI bots can be used to improve student writing for several years. In an interview, she explained why she is cautious about the ability of AI to make us better writers and is still testing how to use the new technology effectively.

    All in the timing 

    Meyer says that just because ChatGPT is available 24/7 doesn’t mean students should consult it at the start of the writing process. Instead, Meyer believes that students would generally learn more if they wrote a first draft on their own. 

    That’s when AI could be most helpful, she thinks. With some prompting, a chatbot could provide immediate writing feedback targeted to each student’s needs. One student might need to practice writing shorter sentences. Another might be struggling with story structure and outlining. AI could theoretically meet an entire classroom’s individual needs faster than a human teacher.

    In Meyer’s experiments, she inserted AI only after the first draft was done as part of the revision process. In a study published in 2024, she randomly assigned 200 German high school students to receive AI feedback after writing a draft of an essay in English. Their revised essays were stronger than those of 250 students who were also told to revise, but didn’t get help from AI. 

    In surveys, those with AI feedback also said they felt more motivated to rewrite than those who didn’t get feedback. That motivation is critical. Often students aren’t in the mood to rewrite, and without revisions, students can’t become better writers.

    Meyer doesn’t consider her experiment proof that AI is a great writing teacher. She didn’t compare it with how student writing improved after human feedback. Her experiment compared only AI feedback with no feedback. 

    Most importantly, one dose of AI writing feedback wasn’t enough to elevate students’ writing skills. On a second, fresh essay topic, the students who had previously received AI feedback didn’t write any better than the students who hadn’t been helped by AI.

    Related: AI writing feedback ‘better than I thought,’ top researcher says

    It’s unclear how many rounds of AI feedback it would take to boost a student’s writing skills more permanently, not just help revise the essay at hand. 

    And Meyer doesn’t know whether a student would want to keep discussing writing with an AI bot over and over again. Maybe students were willing to engage with it in this experiment because it was a novelty, but could soon tire of it. That’s next on Meyer’s research agenda.

    A viral MIT study

    A much smaller MIT study published earlier this year echoes Meyer’s theory. “Your Brain on ChatGPT” went viral because it seemed to say that using ChatGPT to help write an essay made students’ brains less engaged. Researchers found that students who wrote an essay without any online tools had stronger brain connectivity and activity than students who used AI or consulted Google to search for source materials. (Using Google while writing wasn’t nearly as bad for the brain as AI.) 

    Although those results made headlines, there was more to the experiment. The students who initially wrote an essay on their own were later given ChatGPT to help improve their essays. That switch to ChatGPT boosted brain activity, in contrast to what the neuroscientists found during the initial writing process. 

    Related: University students offload critical thinking, other hard work to AI

    These studies add to the evidence that delaying AI a bit, after some initial thinking and drafting, could be a sweet spot in learning. That’s something researchers need to test more. 

    Still, Meyer remains concerned about giving AI tools to very weak writers and to young children who haven’t developed basic writing skills. “This could be a real problem,” said Meyer. “It could be detrimental to use these tools too early.”

    Cheating your way to learning?

    Meyer doesn’t think it’s always a bad idea for students to ask ChatGPT to do the writing for them. 

    Just as young artists learn to paint by copying masterpieces in museums, students might learn to write better by copying good writing. (The late great New Yorker editor John Bennet taught Jill to write this way. He called it “copy work” and he encouraged his journalism students to do it every week by copying longhand the words of legendary writers, not AI.)

    Meyer suggests that students ask ChatGPT to write a sample essay that meets their teacher’s assignment and grading criteria. The next step is key. If students pretend it’s their own piece and submit it, that’s cheating. They’ve also offloaded cognitive work to technology and haven’t learned anything.

    Related: AI essay grading is already as ‘good as an overburdened’ teacher, but researchers say it needs more work

    But the AI essay can be an effective teaching tool, in theory, if students study the arguments, organizational structure, sentence construction and vocabulary before writing a new draft in their own words. Ideally, the next assignment should be better if students have learned through that analysis and internalized the style and techniques of the model essay, Meyer said. 

    “My hypothesis would be as long as there’s cognitive effort with it, as long as there’s a lot of time on task and like critical thinking about the output, then it should be fine,” said Meyer.

    Reconsidering praise

    Everyone likes a compliment. But too much praise can drown learning just as too much water can keep flowers from blooming.  

    ChatGPT has a tendency to pour the praise on thick and often begins with banal flattery, like “Great job!” even when a student’s writing needs a lot of work. In Meyer’s test of whether AI feedback can improve students’ writing, she intentionally told ChatGPT not to start with praise and instead go straight to constructive criticism.

    Her parsimonious approach to praise was inspired by a 2023 writing study about what motivates students to revise. The study found that when teachers started off with general praise, students were left with the false impression that their work was already good enough so they didn’t put in the extra effort to rewrite.

    Related: Asian American students lose more points in an AI essay grading study — but researchers don’t know why

    In Meyer’s experiment, the praise-free feedback was effective in getting students to revise and improve their essays. But she didn’t set up a direct competition between the two approaches — praise-free vs. praise-full — so we don’t know for sure which is more effective when students are interacting with AI.

    Being stingy with praise rubs real teachers the wrong way. After Meyer removed praise from the feedback, teachers told her they wanted to restore it. “They wondered about why the feedback was so negative,” Meyer said. “That’s not how they would do it.”

    Meyer and other researchers may one day solve the puzzle of how to turn AI chatbots into great writing coaches. But whether students will have the willpower or desire to forgo an instantly written essay is another matter. As long as ChatGPT continues to allow students to take the easy way out, it’s human nature to do so. 

    Shirley Liu is a graduate student in education at Northwestern University. Liu reported and wrote this story along with The Hechinger Report’s Jill Barshay.

    This story about using AI to become a better writer was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.


  • Tutoring was supposed to save American kids after the pandemic. The results? ‘Sobering’

    Rigorous research rarely shows that any teaching approach produces large and consistent benefits for students. But tutoring seemed to be a rare exception. Before the pandemic, almost 100 studies pointed to impressive math or reading gains for students who were paired with a tutor at least three times a week and used a proven curriculum or set of lesson plans. 

    Some students gained an extra year’s worth of learning — far greater than the benefit of smaller classes, summer school or a fantastic teacher. These were rigorous randomized controlled trials, akin to the way that drugs or vaccines are tested, comparing test scores of tutored students against those who weren’t. The expense, sometimes surpassing $4,000 a year per student, seemed worth it for what researchers called high-dosage tutoring.

    On the strength of that evidence, the Biden administration urged schools to invest their pandemic recovery funds in intensive tutoring to help students catch up academically. Forty-six percent of public schools heeded that call, according to a 2024 federal survey, though it’s unclear exactly how much of the $190 billion in pandemic recovery funds was spent on high-dosage tutoring and how many students received it.

    Even with ample money, schools immediately reported problems in ramping up high-quality tutoring for so many students. In 2024, researchers documented either tiny or no academic benefits from large-scale tutoring efforts in Nashville, Tennessee, and Washington, D.C.

    New evidence from the 2023-24 school year reinforces those results. Researchers are rigorously studying large-scale tutoring efforts around the nation and testing whether effective tutoring can be done more cheaply. A dozen researchers studied more than 20,000 students in Miami; Chicago; Atlanta; Winston-Salem and Greensboro, North Carolina; Greenville, South Carolina; schools throughout New Mexico; and a California charter school network. This, too, was a randomized controlled study, in which 9,000 students were randomly assigned to get tutoring and compared with 11,000 students who didn’t get that extra help.

    Their preliminary results were “sobering,” according to a June report by the University of Chicago Education Lab and MDRC, a research organization.

    The researchers found that tutoring during the 2023-24 school year produced only one or two months’ worth of extra learning in reading or math — a tiny fraction of what the pre-pandemic research had produced. Each minute of tutoring that students received appeared to be as effective as in the pre-pandemic research, but students weren’t getting enough minutes of tutoring altogether. “Overall we still see that the dosage students are getting falls far short of what would be needed to fully realize the promise of high-dosage tutoring,” the report said.

    Monica Bhatt, a researcher at the University of Chicago Education Lab and one of the report’s authors, said schools struggled to set up large tutoring programs. “The problem is the logistics of getting it delivered,” said Bhatt. Effective high-dosage tutoring involves big changes to bell schedules and classroom space, along with the challenge of hiring and training tutors. Educators need to make it a priority for it to happen, Bhatt said.

    Related: Students aren’t benefiting much from tutoring, one new study shows

    Some of the earlier, pre-pandemic tutoring studies involved large numbers of students, too, but those tutoring programs were carefully designed and implemented, often with researchers involved. In most cases, they were ideal setups. There was much greater variability in the quality of post-pandemic programs.

    “For those of us that run experiments, one of the deep sources of frustration is that what you end up with is not what you tested and wanted to see,” said Philip Oreopoulos, an economist at the University of Toronto, whose 2020 review of tutoring evidence influenced policymakers. Oreopoulos was also an author of the June report.

    “After you spend lots of people’s money and lots of time and effort, things don’t always go the way you hope. There’s a lot of fires to put out at the beginning or throughout because teachers or tutors aren’t doing what you want, or the hiring isn’t going well,” Oreopoulos said.

    Another reason for the lackluster results could be that schools offered a lot of extra help to everyone after the pandemic, even to students who didn’t receive tutoring. In the pre-pandemic research, students in the “business as usual” control group often received no extra help at all, making the difference between tutoring and no tutoring far more stark. After the pandemic, students — tutored and non-tutored alike — had extra math and reading periods, sometimes called “labs” for review and practice work. More than three-quarters of the 20,000 students in this June analysis had access to computer-assisted instruction in math or reading, possibly muting the effects of tutoring.

    Related: Tutoring may not significantly improve attendance

    The report did find that cheaper tutoring programs appeared to be just as effective (or ineffective) as the more expensive ones, an indication that the cheaper models are worth further testing. The cheaper models averaged $1,200 per student and had tutors working with eight students at a time, similar to small group instruction, often combining online practice work with human attention. The more expensive models averaged $2,000 per student and had tutors working with three to four students at once. By contrast, many of the pre-pandemic tutoring programs involved smaller 1-to-1 or 2-to-1 student-to-tutor ratios.

    Despite the disappointing results, researchers said that educators shouldn’t give up. “High-dosage tutoring is still a district or state’s best bet to improve student learning, given that the learning impact per minute of tutoring is largely robust,” the report concludes. The task now is to figure out how to improve implementation and increase the hours that students are receiving. “Our recommendation for the field is to focus on increasing dosage — and, thereby learning gains,” Bhatt said.

    That doesn’t mean that schools need to invest more in tutoring and saturate schools with effective tutors. That’s not realistic with the end of federal pandemic recovery funds.  

    Instead of tutoring for the masses, Bhatt said researchers are turning their attention to targeting a limited amount of tutoring to the right students. “We are focused on understanding which tutoring models work for which kinds of students.” 

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about tutoring effectiveness was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.


  • Experts knock new Trump plan to collect college admissions data

    Experts knock new Trump plan to collect college admissions data

    President Donald Trump wants to collect more admissions data from colleges and universities to make sure they’re complying with a 2023 Supreme Court decision that ended race-conscious affirmative action. And he wants that data now. 

    But data experts and higher education scholars warn that any new admissions data is likely to be inaccurate, impossible to interpret and ultimately misused by policymakers. That’s because Trump’s own policies have left the statistics agency inside the Education Department with a skeleton staff and not enough money, expertise or time to create this new dataset. 

    The department already collects data on enrollment from every institution of higher education that participates in the federal student loan program. The results are reported through the Integrated Postsecondary Education Data System (IPEDS). But in an Aug. 7 memorandum, Trump directed the Education Department, which he sought to close in March, to expand that task and provide “transparency” into how some 1,700 colleges that do not admit everyone are making their admissions decisions. And he gave Education Secretary Linda McMahon just 120 days to get it done. 

    Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.

    Expanding data collection on applicants is not a new idea. The Biden administration had already ordered colleges to start reporting race and ethnicity data to the department this fall in order to track changes in diversity in postsecondary education. But in a separate memorandum to the head of the National Center for Education Statistics (NCES), McMahon asked for even more information, including high school grades and college entrance exam scores, all broken down by race and gender.  

    Bryan Cook, director of higher education policy at the Urban Institute, a think tank in Washington, D.C., called the 120-day timeline “preposterous” because of the enormous technical challenges. For example, IPEDS has never collected high school GPAs. Some schools use a weighted 5.0 scale, giving extra points for advanced classes, and others use an unweighted 4.0 scale, which makes comparisons messy. Other issues are equally thorny. Many schools no longer require applicants to report standardized test scores, and some no longer ask applicants about race, so the data that Trump wants doesn’t exist for those colleges.

    “You’ve got this effort to add these elements without a mechanism with which to vet the new variables, as well as a system for ensuring their proper implementation,” said Cook. “You would almost think that whoever implemented this didn’t know what they were doing.” 

    Cook has helped advise the Education Department on the IPEDS data collection for 20 years and served on technical review panels, which are normally convened first to recommend changes to the data collection. Those panels were disbanded earlier this year, and there isn’t one set up to vet Trump’s new admissions data proposal.

    Cook and other data experts can’t figure out how a decimated education statistics agency could take on this task. All six NCES employees who were involved in IPEDS data collection were fired in March, and there are only three employees left out of 100 at NCES, which is run by an acting commissioner who also has several other jobs. 

    An Education Department official, who did not want to be named, disputed the claim that no one left inside the Education Department has IPEDS experience. The official said that staff inside the office of the chief data officer, which is separate from the statistics agency, have a “deep familiarity with IPEDS data, its collection and use.” Former Education Department employees told me that some of these employees have experience in analyzing the data, but not in collecting it.

    In the past, there were as many as a dozen employees who worked closely with RTI International, a scientific research institute, which handles most of the IPEDS data collection work. 

    Technical review eliminated

    Of particular concern is that RTI’s $10 million annual contract to conduct the data collection had been slashed approximately in half by the Department of Government Efficiency, also known as DOGE, according to two former employees, who asked to remain anonymous out of fear of retaliation. Those severe budget cuts eliminated the technical review panels that vet proposed changes to IPEDS, and ended training for colleges and universities to submit data properly, which helped with data quality. RTI did not respond to my request to confirm the cuts or answer questions about the challenges it will face in expanding its work on a reduced budget and staffing.

    The Education Department did not deny that the IPEDS budget had been cut in half. “The RTI contract is focused on the most mission-critical IPEDS activities,” the Education Department official said. “The contract continues to include at least one task under which a technical review panel can be convened.”  

    Additional elements of the IPEDS data collection have also been reduced, including a contract to check data quality.

    Last week, the scope of the new task became more apparent. On Aug. 13, the administration released more details about the new admissions data it wants, describing how the Education Department is attempting to add a whole new survey to IPEDS, called the Admissions and Consumer Transparency Supplement (ACTS), which will disaggregate all admissions data and most student outcome and financial aid data by race and gender. Colleges will have to report on both undergraduate and graduate school admissions. The public has 60 days to comment, and the administration wants colleges to start reporting this data this fall.

    Complex collection

    Christine Keller, executive director of the Association for Institutional Research, a trade group of higher education officials who collect and analyze data, called the new survey “one of the most complex IPEDS collections ever attempted.” 

    Traditionally, it has taken years to make much smaller changes to IPEDS, and universities are given a year to start collecting the new data before they are required to submit it. (Roughly 6,000 colleges, universities and vocational schools are required to submit data to IPEDS as a condition for their students to take out federal student loans or receive federal Pell Grants. Failure to comply results in fines and the threat of losing access to federal student aid.)

    Normally, the Education Department would reveal screenshots of data fields, showing what colleges would need to enter into the IPEDS computer system. But the department has not done that, and several of the data descriptions are ambiguous. For example, colleges will have to report test scores and GPA by quintile, broken down by race and ethnicity and gender. One interpretation is that a college would have to say how many Black male applicants, for example, scored above the 80th percentile on the SAT or the ACT. Another interpretation is that colleges would need to report the average SAT or ACT score of the top 20 percent of Black male applicants. 
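The two readings are not a subtle difference; they produce entirely different numbers. A minimal sketch in Python, with all scores and cutoffs invented purely for illustration, shows what each interpretation would have a college report for one subgroup:

```python
# Two readings of the "test scores by quintile" requirement, sketched with
# invented numbers for a single hypothetical subgroup of ten applicants.
subgroup_scores = [1050, 1120, 1180, 1260, 1310, 1370, 1420, 1480, 1550, 1600]

# Reading 1: a count of how many subgroup applicants scored above the
# 80th percentile of the overall distribution (cutoff invented here).
NATIONAL_80TH_PERCENTILE = 1280
above_80th = sum(1 for s in subgroup_scores if s > NATIONAL_80TH_PERCENTILE)

# Reading 2: the average score of the subgroup's own top 20 percent.
top_fifth = sorted(subgroup_scores)[-max(1, len(subgroup_scores) // 5):]
avg_top_20 = sum(top_fifth) / len(top_fifth)

print(above_80th)   # 6 applicants above the cutoff
print(avg_top_20)   # 1575.0, the mean of the top two scores
```

One reading yields a head count; the other yields an average score. A college guessing wrong would submit data that cannot be compared with institutions that guessed the other way.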

    The Association for Institutional Research used to train college administrators on how to collect and submit data correctly and sort through confusing details — until DOGE eliminated that training. “The absence of comprehensive, federally funded training will only increase institutional burden and risk to data quality,” Keller said. Keller’s organization is now dipping into its own budget to offer a small amount of free IPEDS training to universities.

    The Education Department is also requiring colleges to report five years of historical admissions data, broken down into numerous subcategories. Institutions have never been asked to keep data on applicants who didn’t enroll. 

    “It’s incredible they’re asking for five years of prior data,” said Jordan Matsudaira, an economist at American University who worked on education policy in the Biden and Obama administrations. “That will be square in the pandemic years when no one was reporting test scores.”

    ‘Misleading results’

    Matsudaira explained that IPEDS had considered asking colleges for more academic data by race and ethnicity in the past and the Education Department ultimately rejected the proposal. One concern is that slicing and dicing the data into smaller and smaller buckets would mean that there would be too few students and the data would have to be suppressed to protect student privacy. For example, if there were two Native American men in the top 20 percent of SAT scores at one college, many people might be able to guess who they were. And a large amount of suppressed data would make the whole collection less useful.

    Also, small numbers can lead to wacky results. For example, a small college could have only two Hispanic male applicants with very high SAT scores. If both were accepted, that’s a 100 percent admittance rate. If only 200 white women out of 400 with the same test scores were accepted, that would be only a 50 percent admittance rate. On the surface, that can look like both racial and gender discrimination. But it could have been a fluke. Perhaps both of those Hispanic men were athletes and musicians. The following year, the school might reject two different Hispanic male applicants with high test scores but without such impressive extracurriculars. The admissions rate for Hispanic males with high test scores would drop to zero. “You end up with misleading results,” said Matsudaira. 
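The arithmetic behind that fluke is straightforward. A minimal sketch, using the invented counts from the example above, shows how a tiny denominator makes the reported rate swing between extremes:

```python
def admit_rate(admitted: int, applied: int) -> float:
    """Admittance rate as a percentage of applicants."""
    return 100 * admitted / applied

# Invented counts matching the hypothetical above: two Hispanic male
# applicants versus 400 white female applicants with similar scores.
year1_hispanic_men = admit_rate(2, 2)      # both admitted -> 100.0
white_women = admit_rate(200, 400)         # half admitted -> 50.0
year2_hispanic_men = admit_rate(0, 2)      # both rejected -> 0.0

print(year1_hispanic_men, white_women, year2_hispanic_men)
# 100.0 50.0 0.0
```

A two-person group can only ever report 0, 50 or 100 percent, so its rate lurches from year to year, while the 400-person group's rate is stable; comparing the two invites exactly the misreading Matsudaira warns about.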

    Reporting average test scores by race is another big worry. “It feels like a trap to me,” said Matsudaira. “That is mechanically going to give the administration the pretense of claiming that there’s lower standards of admission for Black students relative to white students when you know that’s not at all a correct inference.”

    The statistical issue is that there are more Asian and white students at the very high end of the SAT score distribution, and all those perfect 1600s will pull the average up for those racial groups. (Just as one very tall person will skew the average height of a group.) Even if a college applies the same high test score threshold to all racial groups and admits no one below a 1400, the average SAT score for Black students will still be lower than that of white students. (See graphic below.) The only way to avoid this is to admit purely by test score and take only the students with the highest scores. At some highly selective universities, there are enough applicants with a 1600 SAT to fill the entire class. But no institution fills its student body by test scores alone. That could mean overlooking applicants with the potential to be concert pianists, star soccer players or great writers.
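This mechanism can be checked with a quick simulation. The sketch below uses invented distribution parameters (group means of 1100 and 1200, a standard deviation of 200, and a 1400 cutoff) purely for illustration: both groups face the identical colorblind threshold, yet the admitted averages still differ because one group has more applicants far above the cutoff.

```python
import random

random.seed(0)

CUTOFF = 1400  # the same admissions threshold applied to every applicant

def admitted_average(mean: float, n: int = 100_000) -> float:
    """Draw SAT-like scores from a normal distribution (clamped to the
    400-1600 SAT range), admit everyone at or above CUTOFF, and return
    the average score of the admitted applicants."""
    admitted = []
    for _ in range(n):
        score = min(1600, max(400, random.gauss(mean, 200)))
        if score >= CUTOFF:
            admitted.append(score)
    return sum(admitted) / len(admitted)

# Invented group means reflecting different overall score distributions.
avg_group_a = admitted_average(mean=1100)  # group with the lower overall mean
avg_group_b = admitted_average(mean=1200)  # group with the higher overall mean

# Both groups cleared the identical 1400 cutoff, yet group B's admitted
# average is higher, because more of its applicants sit far above 1400.
print(round(avg_group_a), round(avg_group_b))
```

No discrimination is built into the simulation; the gap in admitted averages is pure distributional geometry, which is why a raw average comparison cannot distinguish a colorblind cutoff from a double standard.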

    The Average Score Trap

    This graphic by Kirabo Jackson, an economist at Northwestern University, depicts the problem of measuring racial discrimination through average test scores. Even for a university that admits all students above a certain cut score, the average score of one racial group (red) will be higher than the average score of the other group (blue). Source: graphic posted on Bluesky Social by Josh Goodman

    Admissions data is a highly charged political issue. The Biden administration originally spearheaded the collection of college admissions data by race and ethnicity. Democrats wanted to collect this data to show how the nation’s colleges and universities were becoming less diverse with the end of affirmative action. This data is slated to start this fall, following a full technical and procedural review. 

    Now the Trump administration is demanding what was already in the works, and adding a host of new data requirements — without following normal processes. And instead of tracking the declining diversity in higher education, Trump wants to use admissions data to threaten colleges and universities. If the new directive produces bad data that is easy to misinterpret, he may get his wish.


    This story about college admissions data was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.
