Tag: Testing

  • English Teachers Work to Instill the Joy of Reading. Testing Gets in the Way – The 74




    A new national study shows that Americans’ rates of reading for pleasure have declined radically over the first quarter of this century and that recreational reading can be linked to school achievement, career compensation and growth, civic engagement, and health. Learning how to enjoy reading – not mere literacy proficiency – isn’t just for hobbyists; it’s a necessary life skill.

    But the conditions under which English teachers work are detrimental to the cause – and while book bans are in the news, the top-down pressure to measure up on test scores is a more pervasive, more longstanding culprit. Last year, we asked high school English teachers to describe their literature curriculum in a national questionnaire we plan to publish soon. From responses representing 48 states, we heard a lot of the following: “soul-deadening”; “only that which students will see on the test” and “too [determined] by test scores.”

    These sentiments certainly aren’t new. In a similar questionnaire distributed in 1911, teachers described English class as “deadening,” focused on “memory instead of thinking,” and demanding “cramming for examination.” 

    Teaching to the test is as old as English itself – as a secondary school subject, that is. Teachers have questioned the premise for just as long because too many have experienced a radical disconnect between how they are asked or required to teach and the pleasure that reading brings them.

    High school English was first established as a test-driven subject around the turn of the 20th century. Even at a time when relatively few Americans attended college, English class was oriented around building students’ mastery of now-obscure literary works that they would encounter on the College Entrance Exam.

    The development of the Scholastic Aptitude Test in 1926 and the growth of standardized testing since No Child Left Behind have only solidified what was always true: As much as we think of reading as a social, cultural, even “spiritual” experience, English class has been shaped by credential culture.

    Throughout, many teachers felt that preparing students for college was too limited a goal; their mission was to prepare students for life. They believed that studying literature was an invaluable source of social and emotional development, preparing adolescents for adulthood and for citizenship. It provided them with “vicarious experience”: Through reading, young people saw other points of view, worked through challenging problems, and grappled with complex issues. 

    Indeed, a national study conducted in 1933 asked teachers to rank their “aims” in literature instruction. They listed “vicarious experience” first, “preparation for college” last.

    The results might not look that different today. Ask an English teacher what brought her to the profession, and a love of reading is likely to top the list. What is different today is the unmatched pressure to prepare students for a constant cycle of state and national examinations and for college credentialing.

    Increasingly, English teachers are compelled to use online curriculum packages that mimic the examinations themselves, composed largely of excerpts from literary and “informational” texts instead of the whole books that were more the norm in previous generations. “Vicarious experience” has less purchase in contemporary academic standards than ever. 

    Credentialing, however, does not equal preparing. Very few higher education skills map neatly onto standardized exams, especially in the humanities. As English professors, we can tell you that an enjoyment of reading – not just a toleration of it – is a key academic capacity. It produces better writers, more creative thinkers, and students less likely to need AI to express their ideas effectively.

    Yet we haven’t given K-12 teachers the structure or freedom to treat reading enjoyment as a skill. The data from our national survey suggests that English teachers and their students find the system deflating. 

     “Our district adopted a disjointed, excerpt-heavy curriculum two years ago,” a Washington teacher shared, “and it is doing real damage to students’ interest in reading.” 

    From Tennessee, a teacher added: “I understand there are state guidelines and protocols, but it seems as if we are teaching the children from a script. They are willing to be more engaged and can have a better understanding when we can teach them things that are relatable to them.”

    And from Oregon, another tells us that because “state testing is strictly excerpts,” the district initially discouraged “teaching whole novels.”  It changed course only after students’ exam scores improved. 

    Withholding books from students is especially inhumane when we consider that the best tool for improved academic performance is engagement – students learn more when they become engrossed in stories. Yet by the time they graduate from high school, many students master test-taking skills but lose the window for learning to enjoy reading.

    Teachers tell us that the problem is not attitudinal but structural. An education technocracy that consists of test-making agencies, curriculum providers and policymakers is squeezing out enjoyment, teacher autonomy and student agency.

    To reverse this trend, we must consider what reading experiences we are providing our students. Instead of the self-defeating cycle of test-preparation and testing, we should take courage, loosen the grip on standardization, and let teachers recreate the sort of experiences with literature that once made us, and them, into readers.



  • If we are going to build AI literacy into every level of learning, we must be able to measure it


    Everywhere you look, someone is telling students and workers to “learn AI.” 

    It’s become the go-to advice for staying employable, relevant and prepared for the future. But here’s the problem: While definitions of artificial intelligence literacy are starting to emerge, we still lack a consistent, measurable framework to know whether someone is truly ready to use AI effectively and responsibly. 

    And that is becoming a serious issue for education and workforce systems already being reshaped by AI. Schools and colleges are redesigning their entire curriculums. Companies are rewriting job descriptions. States are launching AI-focused initiatives.  

    Yet we’re missing a foundational step: agreeing not only on what we mean by AI literacy, but on how we assess it in practice. 

    Two major recent developments underscore why this step matters, and why it is important that we find a way to take it before urging students to use AI. First, the U.S. Department of Education released its proposed priorities for advancing AI in education, guidance that will ultimately shape how federal grants will support K-12 and higher education. For the first time, we now have a proposed federal definition of AI literacy: the technical knowledge, durable skills and future-ready attitudes required to thrive in a world influenced by AI. Such literacy will enable learners to engage and create with, manage and design AI, while critically evaluating its benefits, risks and implications. 

    Second, we now have the White House’s American AI Action Plan, a broader national strategy aimed at strengthening the country’s leadership in artificial intelligence. Education and workforce development are central to the plan. 


    What both efforts share is a recognition that AI is not just a technological shift, it’s a human one. In many ways, the most important AI literacy skills are not about AI itself, but about the human capacities needed to use AI wisely. 

    Sadly, the consequences of shallow AI education are already visible in workplaces. Some 55 percent of managers believe their employees are AI-proficient, while only 43 percent of employees share that confidence, according to the 2025 ETS Human Progress Report.  

    The same perception gap likely exists between school administrators and teachers. The disconnect creates risks for organizations and reveals how assumptions about AI literacy can diverge sharply from reality. 

    But if we’re going to build AI literacy into every level of learning, we have to ask the harder question: How do we both determine when someone is truly AI literate and assess it in ways that are fair, useful and scalable? 

    AI literacy may be new, but we don’t have to start from scratch to measure it. We’ve tackled challenges like this before, moving beyond check-the-box tests in digital literacy to capture deeper, real-world skills. Building on those lessons will help define and measure this next evolution of 21st-century skills. 

    Right now, we often treat AI literacy as a binary: You either “have it” or you don’t. But real AI literacy and readiness are more nuanced. They include understanding how AI works, being able to use it effectively in real-world settings and knowing when to trust it. They include writing effective prompts, spotting bias, asking hard questions and applying judgment. 

    This isn’t just about teaching coding or issuing a certificate. It’s about making sure that students, educators and workers can collaborate in and navigate a world in which AI is increasingly involved in how we learn, hire, communicate and make decisions.  

    Without a way to measure AI literacy, we can’t identify who needs support. We can’t track progress. And we risk letting a new kind of unfairness take root, in which some communities build real capacity with AI and others are left with shallow exposure and no feedback. 

    Related: To employers, AI skills aren’t just for tech majors anymore 

    What can education leaders do right now to address this issue? I have a few ideas.  

    First, we need a working definition of AI literacy that goes beyond tool usage. The Department of Education’s proposed definition is a good start, combining technical fluency, applied reasoning and ethical awareness.  

    Second, assessments of AI literacy should be integrated into curriculum design. Schools and colleges incorporating AI into coursework need clear definitions of proficiency. TeachAI’s AI Literacy Framework for Primary and Secondary Education is a great resource. 

    Third, AI proficiency must be defined and measured consistently, or we risk a patchwork of mismatched literacy standards. Without consistent measurements and standards, one district may see AI literacy as just using ChatGPT, while another defines it far more broadly, leaving students unevenly ready for the next generation of jobs. 

    To prepare for an AI-driven future, defining and measuring AI literacy must be a priority. Every student will graduate into a world in which AI literacy is essential. Human resources leaders confirmed in the 2025 ETS Human Progress Report that the No. 1 skill employers are demanding today is AI literacy. Without measurement, we risk building the future on assumptions, not readiness.  

    And that’s too shaky a foundation for the stakes ahead. 

    Amit Sevak is CEO of ETS, the largest private educational assessment organization in the world. 

    Contact the opinion editor at [email protected]. 

    This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. 


  • A gender gap in STEM widened during the pandemic. Schools are trying to make up lost ground


    IRVING, Texas — Crowded around a workshop table, four girls at de Zavala Middle School puzzled over a Lego machine they had built. As they flashed a purple card in front of a light sensor, nothing happened. 

    The teacher at the Dallas-area school had emphasized that in the building process, there are no such things as mistakes. Only iterations. So the girls dug back into the box of blocks and pulled out an orange card. They held it over the sensor and the machine kicked into motion. 

    “Oh! Oh, it reacts differently to different colors,” said sixth grader Sofia Cruz.

    In de Zavala’s first year as a choice school focused on science, technology, engineering and math, the school recruited a sixth grade class that’s half girls. School leaders are hoping the girls will stick with STEM fields. In de Zavala’s higher grades — whose students joined before it was a STEM school — some elective STEM classes have just one girl enrolled. 

    Efforts to close the gap between boys and girls in STEM classes are picking up after losing steam nationwide during the chaos of the Covid pandemic. Schools have extensive work ahead to make up for the ground girls lost, in both interest and performance.

    In the years leading up to the pandemic, the gender gap nearly closed. But within a few years, girls lost all the ground they had gained in math test scores over the previous decade, according to an Associated Press analysis. While boys’ scores also suffered during Covid, they have recovered faster than girls’, widening the gender gap.

    As learning went online, special programs to engage girls lapsed — and schools were slow to restart them. Zoom school also emphasized rote learning, a technique based on repetition that some experts believe may favor boys, instead of teaching students to solve problems in different ways, which may benefit girls. 

    Old practices and biases likely reemerged during the pandemic, said Michelle Stie, a vice president at the National Math and Science Initiative.

    “Let’s just call it what it is,” Stie said. “When society is disrupted, you fall back into bad patterns.”


    In most school districts in the 2008-09 school year, boys had higher average math scores on standardized tests than girls, according to AP’s analysis, which looked at scores across 15 years in over 5,000 school districts. It was based on average test scores for third through eighth graders in 33 states, compiled by the Educational Opportunity Project at Stanford University. 

    A decade later, girls had not only caught up, they were ahead: Slightly more than half of districts had higher math averages for girls.

    Within a few years of the pandemic, the parity disappeared. In 2023-24, boys on average outscored girls in math in nearly 9 out of 10 districts.

    A separate study by NWEA, an education research company, found gaps between boys and girls in science and math on national assessments went from being practically non-existent in 2019 to favoring boys around 2022.

    Studies have indicated girls reported higher levels of anxiety and depression during the pandemic, plus more caretaking burdens than boys, but the dip in academic performance did not appear outside STEM. Girls outperformed boys in reading in nearly every district nationwide before the pandemic and continued to do so afterward.

    “It wasn’t something like Covid happened and girls just fell apart,” said Megan Kuhfeld, one of the authors of the NWEA study. 

    Related: These districts are bucking the national math slump 

    In the years leading up to the pandemic, teaching practices shifted to deemphasize speed, competition and rote memorization. Through new curriculum standards, schools moved toward research-backed methods that emphasized how to think flexibly to solve problems and how to tackle numeric problems conceptually.

    Educators also promoted participation in STEM subjects and programs that boosted girls’ confidence, including extracurriculars that emphasized hands-on learning and connected abstract concepts to real-life applications. 

    When STEM courses had large male enrollment, Superintendent Kenny Rodrequez noticed girls losing interest as boys dominated classroom discussions at his schools in Grandview C-4 District outside Kansas City. Girls were significantly more engaged after the district moved some of its introductory hands-on STEM curriculum to the lower grade levels and balanced classes by gender, he said.

    When schools closed for the pandemic, the district had to focus on making remote learning work. When in-person classes resumed, some of the teachers had left, and new ones had to be trained in the curriculum, Rodrequez said. 

    “Whenever there’s crisis, we go back to what we knew,” Rodrequez said. 

    Related: One state tried algebra for all eighth graders. It hasn’t gone well

    Despite shifts in societal perceptions, a bias against girls persists in science and math subjects, according to teachers, administrators and advocates. It becomes a message girls can internalize about their own abilities, they say, even at a very young age. 

    In his third grade classroom in Washington, D.C., teacher Raphael Bonhomme starts the year with an exercise where students break down what makes up their identity. Rarely do the girls describe themselves as good at math. Already, some say they are “not a math person.” 

    “I’m like, you’re 8 years old,” he said. “What are you talking about, ‘I’m not a math person?’” 

    Girls also may have been more sensitive to changes in instructional methods spurred by the pandemic, said Janine Remillard, a math education professor at the University of Pennsylvania. Research has found girls tend to prefer learning things that are connected to real-life examples, while boys generally do better in a competitive environment. 

    “What teachers told me during Covid is the first thing to go were all of these sense-making processes,” she said. 

    Related: OPINION: Everyone can be a math person but first we have to make math instruction more inclusive 

    At de Zavala Middle School in Irving, the STEM program is part of a push that aims to build curiosity, resilience and problem-solving across subjects.

    Coming out of the pandemic, Irving schools had to make a renewed investment in training for teachers, said Erin O’Connor, a STEM and innovation specialist there.

    The district last year also piloted a new science curriculum from Lego Education. The lesson involving the machine at de Zavala, for example, had students learn about kinetic energy. Fifth graders learned about genetics by building dinosaurs and their offspring with Lego blocks, identifying shared traits. 

    “It is just rebuilding the culture of, we want to build critical thinkers and problem solvers,” O’Connor said.

    Teacher Tenisha Willis recently led second graders at Irving’s Townley Elementary School through building a machine that would push blocks into a container. She knelt next to three girls who were struggling.

    They tried to add a plank to the wheeled body of the machine, but the blocks didn’t move enough. One girl grew frustrated, but Willis was patient. She asked what else they could try, whether they could flip some parts around. The girls ran the machine again. This time, it worked.

    “Sometimes we can’t give up,” Willis said. “Sometimes we already have a solution. We just have to adjust it a little bit.” 

    Lurye reported from Philadelphia. Todd Feathers contributed reporting from New York. 

    The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.


  • Nation’s Report Card at risk, researchers say


    This story was reported and originally published by APM Reports in connection with its podcast Sold a Story: How Teaching Kids to Read Went So Wrong.

    When voters elected Donald Trump in November, most people who worked at the U.S. Department of Education weren’t scared for their jobs. They had been through a Trump presidency before, and they hadn’t seen big changes in their department then. They saw their work as essential, mandated by law, nonpartisan and, as a result, insulated from politics.

    Then, in early February, the Department of Government Efficiency showed up. Led at the time by billionaire CEO Elon Musk, and known by the cheeky acronym DOGE, it gutted the Department of Education’s Institute of Education Sciences, posting on X that the effort would ferret out “waste, fraud and abuse.”

    A post from the Department of Government Efficiency.

    When it was done, DOGE had cut approximately $900 million in research contracts and more than 90 percent of the institute’s workforce had been laid off. (The current value of the contracts was closer to $820 million, data compiled by APM Reports shows, and the actual savings to the government was substantially less, because in some cases large amounts of money had been spent already.)

    Among staff cast aside were those who worked on the National Assessment of Educational Progress — also known as the Nation’s Report Card — which is one of the few federal education initiatives the Trump administration says it sees as valuable and wants to preserve.

    The assessment is a series of tests administered nearly every year to a national sample of more than 10,000 students in grades 4, 8 and 12. The tests regularly measure what students across the country know in reading, math and other subjects. They allow the government to track how well America’s students are learning overall. Researchers can also combine the national data with the results of tests administered by states to draw comparisons between schools and districts in different states.

    The assessment is “something we absolutely need to keep,” Education Secretary Linda McMahon said at an education and technology summit in San Diego earlier this year. “If we don’t, states can be a little manipulative with their own results and their own testing. I think it’s a way that we keep everybody honest.”

    But researchers and former Department of Education employees say they worry that the test will become less and less reliable over time, because the deep cuts will cause its quality to slip — and some already see signs of trouble.

    “The main indication is that there just aren’t the staff,” said Sean Reardon, a Stanford University professor who uses the testing data to research gaps in learning between students of different income levels.

    All but one of the experts who make sure the questions in the assessment are fair and accurate — called psychometricians — have been laid off from the National Center for Education Statistics. These specialists play a key role in updating the test and making sure it accurately measures what students know.

    “These are extremely sophisticated test assessments that required a team of researchers to make them as good as they are,” said Mark Seidenberg, a researcher known for his significant contributions to the science of reading. Seidenberg added that “a half-baked” assessment would undermine public confidence in the results, which he described as “essentially another way of killing” the assessment.

    The Department of Education defended its management of the assessment in an email: “Every member of the team is working toward the same goal of maintaining NAEP’s gold-standard status,” it read in part.

    The National Assessment Governing Board, which sets policies for the national test, said in a statement that it had temporarily assigned “five staff members who have appropriate technical expertise (in psychometrics, assessment operations, and statistics) and federal contract management experience” to work at the National Center for Education Statistics. No one from DOGE responded to a request for comment.

    Harvard education professor Andrew Ho, a former member of the governing board, said the remaining staff are capable, but he’s concerned that there aren’t enough of them to prevent errors.

    “In order to put a good product up, you need a certain number of person-hours, and a certain amount of continuity and experience doing exactly this kind of job, and that’s what we lost,” Ho said.

    The Trump administration has already delayed the release of some testing data following the cutbacks. The Department of Education had previously planned to announce the results of the tests for 8th grade science, 12th grade math and 12th grade reading this summer; now that won’t happen until September. The board voted earlier this year to eliminate more than a dozen tests over the next seven years, including fourth grade science in 2028 and U.S. history for 12th graders in 2030. The governing board has also asked Congress to postpone the 2028 tests to 2029, citing a desire to avoid releasing test results in an election year. 

    “Today’s actions reflect what assessments the Governing Board believes are most valuable to stakeholders and can be best assessed by NAEP at this time, given the imperative for cost efficiencies,” board chair and former North Carolina Gov. Bev Perdue said earlier this year in a press release.

    The National Assessment Governing Board canceled more than a dozen tests when it revised the schedule for the National Assessment of Educational Progress in April. This annotated version of the previous schedule, adopted in 2023, shows which tests were canceled. Topics shown in all caps were scheduled for a potential overhaul; those annotated with a red star are no longer scheduled for such a revision.

    Recent estimates peg the cost of keeping the national assessment running at about $190 million per year, roughly one-tenth of 1 percent of the department’s 2025 budget of approximately $195 billion.

    Adam Gamoran, president of the William T. Grant Foundation, said multiple contracts with private firms — overseen by Department of Education staff with “substantial expertise” — are the backbone of the national test.

    “You need a staff,” said Gamoran, who was nominated last year to lead the Institute of Education Sciences. He was never confirmed by the Senate. “The fact that NCES now only has three employees indicates that they can’t possibly implement NAEP at a high level of quality, because they lack the in-house expertise to oversee that work. So that is deeply troubling.”

    The cutbacks were widespread — and far outside of what most former employees had expected under the new administration.

    “I don’t think any of us imagined this in our worst nightmares,” said a former Education Department employee, who spoke on condition of anonymity for fear of retaliation by the Trump administration. “We weren’t concerned about the utter destruction of this national resource of data.”

    “At what point does it break?” the former employee asked.

    Related: Suddenly sacked

    Every state has its own test for reading, math and other subjects. But state tests vary in difficulty and content, which makes it tricky to compare results in Minnesota to Mississippi or Montana.

    “They’re totally different tests with different scales,” Reardon said. “So NAEP is the Rosetta stone that lets them all be connected.”

    Reardon and his team at Stanford used statistical techniques to combine the federal assessment results with state test scores and other data sets to create the Educational Opportunity Project. The project, first released in 2016 and updated periodically in the years that followed, shows which schools and districts are getting the best results — especially for kids from poor families. Since the project’s release, Reardon said, the data has been downloaded 50,000 times and is used by researchers, teachers, parents, school boards and state education leaders to inform their decisions.
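
    To make the “Rosetta stone” idea concrete, here is a deliberately simplified sketch of score linking in Python. It assumes a cohort whose results are observed on both a state test and NAEP, and maps state scores onto the NAEP scale by matching means and standard deviations; the Educational Opportunity Project’s actual methods are considerably more sophisticated, and all numbers below are hypothetical.

        def link_to_naep(state_score, state_mean, state_sd, naep_mean, naep_sd):
            # Standardize the score within the state test's own distribution...
            z = (state_score - state_mean) / state_sd
            # ...then re-express it on the NAEP scale.
            return naep_mean + z * naep_sd

        # Hypothetical example: a 550 on a state test with mean 500 and SD 50
        # sits one SD above the state mean, so it maps to 280 + 40 = 320 on a
        # NAEP-like scale with mean 280 and SD 40.
        print(link_to_naep(550, 500, 50, 280, 40))  # 320.0

    Linked this way, scores from two states with totally different tests and scales become comparable, which is what allows district-to-district comparisons across state lines.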

    For instance, the U.S. military used the data to measure school quality when weighing base closures, and superintendents used it to find demographically similar but higher-performing districts to learn from, Reardon said.

    If the quality of the data slips, those comparisons will be more difficult to make.

    “My worry is we just have less-good information on which to base educational decisions at the district, state and school level,” Reardon said. “We would be in the position of trying to improve the education system with no information. Sort of like, ‘Well, let’s hope this works. We won’t know, but it sounds like a good idea.’”

    Seidenberg, the reading researcher, said the national assessment “provided extraordinarily important, reliable information about how we’re doing in terms of teaching kids to read and how literacy is faring in the culture at large.”

    Producing a test without keeping the quality up, Seidenberg said, “would be almost as bad as not collecting the data at all.”


  • American Lung Association urges school radon testing



    The American Lung Association is urging K-12 schools to prioritize indoor air quality and to test for radon, the second-leading cause of lung cancer in the U.S.

    The naturally occurring, odorless, tasteless and colorless radioactive gas can accumulate indoors, entering through cracks in floors, walls and foundations. The only way to determine if a facility has elevated radon levels is through testing, according to the organization. “There is no known safe level of radon exposure,” it says. 

    “Radon … can accumulate inside schools without anyone knowing,” Harold Wimmer, president and CEO of the American Lung Association, said in a statement. “The good news is that testing for radon is simple and affordable — and schools can take action to fix the problem if levels are high.” 

    Young children are especially vulnerable to indoor air pollutants like radon because they spend more time indoors and breathe more air relative to their body size than adults, according to a working paper by the Center on the Developing Child at Harvard University. 

    ALA recommends short-term, charcoal-based radon test kits. In its announcement, it shares two national standards facility managers can follow: 

    • The Radon Mitigation Standards for Schools and Large Buildings (RMS-LB 2018), released jointly by the American National Standards Institute and the American Association of Radon Scientists and Technologists. The standards address specialized techniques and quality assurance processes to mitigate radon in buildings with complicated designs and specialized airflow, which is typical of schools. 
    • The Radon in Schools standards, developed by the U.S. Environmental Protection Agency, recommend that building operators take action if radon levels are at 4.0 picocuries per liter or higher and consider taking action if levels are as low as 2.0 pCi/L. 

    ALA also recommends a school radon testing guide the Minnesota Department of Health developed. 

    HVAC status

    To assess radon levels during normal conditions, testing must take place while the building’s HVAC system is running, the ALA says in a fact sheet. For the most accurate test results, HVAC maintenance and filter changes must be current, it says. 

    If testing finds radon levels under 4.0 pCi/L, schools don’t need to test again for five years, according to the ALA fact sheet. But changes that affect the school HVAC system or changes to the building foundation or the surrounding soil could warrant sooner testing because those events can affect radon levels, the organization says.  
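
    Taken together, the EPA action levels and the ALA retest guidance amount to a simple decision rule. The following Python sketch is purely illustrative; real decisions should follow the standards themselves (RMS-LB 2018 and EPA’s Radon in Schools guidance).

        def radon_next_step(level_pci_l):
            # Thresholds in pCi/L, from the EPA guidance cited above.
            if level_pci_l >= 4.0:
                return "take mitigation action"
            if level_pci_l >= 2.0:
                return "consider taking action"
            # Under 4.0 pCi/L, the ALA fact sheet advises retesting within
            # five years, or sooner after HVAC, foundation or soil changes.
            return "retest within five years, sooner if conditions change"

        print(radon_next_step(4.6))  # take mitigation action
        print(radon_next_step(2.3))  # consider taking action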

    Many states offer training for school facility managers on how to conduct radon testing, or schools can hire licensed professionals to conduct the tests, according to National Radon Proficiency Program information. 

    The EPA requires states that are receiving indoor radon grants to maintain and provide the public with a list of radon testing service providers credentialed through their own state programs or through two national radon proficiency programs.


  • Eliminating Testing Requirements Can Boost Student Diversity


    The percentage of underrepresented minority students increased in some cases after universities stopped requiring applicants to submit standardized test scores, according to a study published Monday in the American Sociological Review.

    The findings come in the aftermath of the COVID-19 pandemic, which prompted many colleges and universities to rethink their testing policies; some went test-optional or test-blind while others doubled down. But starting long before the pandemic, critics have argued that consideration of standardized test scores often advantages white and wealthier applicants. 

    The study examined admissions patterns at 1,528 colleges between 2003 and 2019. During the 16-year time frame, 217 of those colleges (14.2 percent) eliminated standardized testing requirements. But researchers found that simply eliminating testing requirements didn’t guarantee a more diverse student body.  

    The institutions that eliminated the requirements but still gave significant weight to test scores during the application process didn’t increase their enrollment of underrepresented students in the three years after the change. However, colleges that reduced the weight of test scores showed a 2 percent increase in underrepresented student enrollment. 

    Additionally, researchers found that increases in minority student representation were less likely at test-optional colleges that were also dealing with financial or enrollment-related pressures. 

    Greta Hsu, co-author of the paper and a professor at the University of California, Davis, Graduate School of Management, said in a news release that “although test-optional admissions policies are often adopted with the assumption that they will broaden access to underrepresented minority groups,” their effectiveness depends “on existing admissions values and institutional priorities at the university.”


  • If we are serious about improving student outcomes, we can’t treat teacher retention as an afterthought


    In the race to help students recover from pandemic-related learning loss, education leaders have overlooked one of the most powerful tools already at their disposal: experienced teachers.

    For decades, a myth has persisted in education policy circles that after their first few years on the job, teachers stop improving. This belief has undercut efforts to retain seasoned educators, with many policymakers and administrators treating veteran teachers as replaceable cogs rather than irreplaceable assets.

    But that myth doesn’t hold up. The evidence tells a different story: Teachers don’t hit a plateau after year five. While their growth may slow, it doesn’t stop. In the right environments — with collaborative colleagues, supportive administrators and stable classroom assignments — teachers can keep getting better well into their second decade in the classroom.

    This insight couldn’t come at a more critical time. As schools work to accelerate post-pandemic learning recovery, especially for the most vulnerable students, they need all the instructional expertise they can muster.

    That means not just recruiting new teachers but keeping their best educators in the classroom and giving them the support they need to thrive.


    In a new review of 23 longitudinal studies, conducted by the Learning Policy Institute and published by the Thomas B. Fordham Institute, all but one of the studies showed that teachers generally improve significantly during their first five years. The review also found continued, albeit slower, improvement well into years 6 through 15; several of the studies found improvement into later years of teaching, though at a diminished pace.

    These gains translate into measurable benefits for students: higher test scores, fewer disciplinary issues, reduced absenteeism and increased postsecondary attainment. In North Carolina, for example, students with highly experienced English teachers learned more and were substantially less likely to skip school and more likely to enjoy reading. These effects were strongest for students who were most at risk of falling behind.

    While experience helps all teachers improve, we’re currently failing to build that experience where it’s needed most. Schools serving large populations of low-income Black and Hispanic students are far more likely to be staffed primarily by early career teachers.

    And unfortunately, they’re also more likely to see those teachers leave after just a few years. This churn makes it nearly impossible to build a stable, experienced workforce in high-need schools.

    It also robs novice teachers of the veteran mentors who could help them get better faster and robs students of the opportunity to learn from seasoned educators who have refined their craft over time.

    To fix this, we need to address both sides of the equation: helping teachers improve and keeping them in the classrooms that need them most.

    Research points to several conditions that support continued teacher growth. Beginning teachers are more likely to stay and improve if they have had high-quality preparation and mentoring. Teaching is not a solo sport. Educators who work alongside more experienced peers improve faster, especially in the early years.

    Teachers also improve more when they’re able to teach the same grade level or subject year after year. Unfortunately, those in under-resourced schools are more likely to be shuffled around, undermining their ability to build expertise.

    Perhaps most importantly, schools with strong leadership that make time for collaboration and foster a culture of professional trust see greater gains in teacher retention over time.

    Teachers who feel supported by their administrators, who collaborate with a team that shares their mission and who aren’t constantly switching subjects or grade levels are far more likely to stay in the profession.

    Pay matters too, especially in high-need schools where working conditions are toughest. But incentives alone aren’t enough. Short-term bonuses can attract teachers, but they won’t keep them if the work environment drives them away.

    Related: One state radically boosted new teacher pay – and upset a lot of teachers

    If we’re serious about improving student outcomes, especially in the wake of the pandemic, we have to stop treating teacher retention as an afterthought. That means retooling our policies to reflect what the research now clearly shows: experience matters, and it can be cultivated.

    Policymakers should invest in high-quality teacher preparation and mentoring programs, particularly in high-need schools. They should create conditions that promote teacher stability and collaboration, such as protected planning time and consistent teaching assignments.

    Principals must be trained not just as managers, but as instructional leaders capable of building strong school cultures. And state and district leaders must consider meaningful financial incentives and other supports to retain experienced teachers in the classrooms that need them most.

    With the right support, teachers can keep getting better. In this moment of learning recovery, a key to success is keeping teachers in schools and consciously supporting their growing effectiveness.

    Linda Darling-Hammond is founding president and chief knowledge officer at the Learning Policy Institute. Michael J. Petrilli is president of the Thomas B. Fordham Institute, a visiting fellow at the Hoover Institution and an executive editor of Education Next.

    Contact the opinion editor at [email protected].

    This story about teacher retention was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.


  • Why English language testing matters for UK higher education


    The UK is at a pivotal moment when it comes to the English language tests it uses to help decide who can enter the country to study, work, invest and innovate.  

    The government’s new industrial strategy offers a vision for supporting high-value and high-growth sectors. These sectors – from advanced manufacturing and creative industries, to life sciences, clean energy and digital – will fuel the UK’s future growth and productivity. All of them need to attract global talent, and to have a strong talent pipeline, particularly from UK universities. 

    This summer’s immigration white paper set out plans for new English language requirements across a broader range of immigration routes. It comes as the Home Office intends to introduce a new English language test to provide a secure and robust assessment of the skills of those seeking to study and work in the UK.  

    In this context, the UK faces a challenge: can we choose to raise standards and security in English tests while removing barriers for innovators? 

    The answer has to be ‘yes’. Achieving, as the industrial strategy puts it, “the security the country needs… while shaping markets for innovation,” will take vision. That clearly requires government, universities and employers to align on security and growth. There are no shortcuts if we are serious about both.  

    The sectors that will power the industrial strategy – most notably in higher education, research and innovation – are also those most boxed in by competing pressures. These pressures include the imperative to attract world-class talent and the need to show that those they help bring to the country are well-qualified.  

    But these pressures do not have to box us in. We need not compromise on security or growth. We can achieve both.   

    Getting English testing right is a critical part of the solution. That means putting quality and integrity first. We should demand world-class security and safeguards – drawing on the most sophisticated combination of human and artificial intelligence. It also means deploying proven innovations – those that have been shown to work in other countries, like Australia and Canada, that have adjusted their immigration requirements while achieving talent-led growth.   

    Decision-making around English language testing needs to be driven by evidence – especially at a time of flux. And findings from multiple studies tell us that students who take high-quality and in-depth tests demonstrate greater academic resilience and performance. When it comes to high-stakes exams, we should be setting the highest expectations for test-takers so they can thrive in the rapidly changing economy that the country is aspiring to build.  

    The government and high-growth sectors, including higher education, have an opportunity to grow public confidence, prioritise quality and attain sustainable growth if we get this right.  


    International students at UK universities contribute £42 billion a year to the economy. (As an aside, the English language teaching sector – a thriving British export industry – is worth an additional £2 billion a year, supporting 40,000 jobs.) Almost one in five NHS staff come from outside the UK. 

    More than a third of the UK’s fastest-growing startups have at least one immigrant co-founder. Such contributions from overseas talent are indispensable to the country’s future success – and the industrial strategy’s “focus on getting the world’s brightest minds to relocate to the UK” is smart.  

    At Cambridge, we help deliver IELTS, the world’s most trusted English test. Over the decades, we’ve learned that quality, security and innovation reinforce one another. It’s why we draw on our constantly evolving knowledge of linguistics to make sure our tests assess the real-life language skills people use in actual academic and professional environments. 

    Technological innovations and human intelligence must be central to the test-taking experience: from content creation to exam supervision to results delivery. Having one without the other would be reckless.    

    We should deploy the latest data science and AI advances to spot risks, pinpoint potential fraud, and act intelligently to guarantee a system that’s fair for all. IELTS draws on proven AI and data science developments to prevent fraud and improve the information available to institutions like universities, businesses and UKVI.  

    As the government takes its industrial strategy, immigration reforms and English testing changes forward, it’s vital that departments coordinate on the shared opportunities, and tap into the best evidence available.  

    This is complex work. It requires a collaborative spirit, creative thinking and deep expertise. Fortunately, the UK has plenty of that. 

    About the author: Pamela Baxter is managing director, IELTS at Cambridge University Press & Assessment


  • Release of NAEP science scores


    UPDATE: After this story was published, the Education Department issued a press release Monday afternoon, July 7, announcing that Matthew Soldner will serve as acting commissioner of the National Center for Education Statistics, in addition to his role as acting director of the Institute of Education Sciences. The job of statistics chief had been vacant since March and had prevented the release of assessment results.

    The repercussions from the decimation of staff at the Education Department keep coming. Last week, the fallout led to a delay in releasing results from a national science test.

    The National Assessment of Educational Progress (NAEP) is best known for tests that track reading and math achievement but includes other subjects, too. In early 2024, when the main reading and math tests were administered, there was also a science section for eighth graders. 

    The board that oversees NAEP had announced at its May meeting that it planned to release the science results in June. But that month has since come and gone. 

    Why the delay? There is no commissioner of education statistics to sign off on the score report, a requirement before it is released, according to five current and former officials who are familiar with the release of NAEP scores, but asked to remain anonymous because they were not authorized to speak to the press or feared retaliation. 


    Peggy Carr, a Biden administration appointee, was dismissed as the commissioner of the National Center for Education Statistics in February, two years before the end of her six-year term set by Congress. Chris Chapman was named acting commissioner, but he was fired in March, along with half the employees at the Education Department. The role has remained vacant since.

    A spokesman for the National Assessment Governing Board, which oversees NAEP, said the science scores will be released later this summer, but denied that the lack of a commissioner is the obstacle. “The report building is proceeding so the naming of a commissioner is not a bureaucratic hold-up to its progress,” Stephaan Harris said by email.

    The delay matters. Education policymakers have been keen to learn if science achievement had held steady after the pandemic or tumbled along with reading and math. (Those reading and math scores were released in January.)

    The Trump administration has vowed to dismantle the Education Department and did not respond to an emailed question about when a new commissioner would be appointed. 

    Related: Chaos and confusion as the statistics arm of the Education Department is reduced to a skeletal staff of 3

    Researchers hang onto data

    Keeping up with administration policy can be head-spinning these days. Education researchers were notified in March that they would have to relinquish federal data they were using for their studies. (The department shares restricted datasets, which can include personally identifiable information about students, with approved researchers.) 

    But researchers learned on June 30 that the department had changed its mind and decided not to terminate this remote access. 

    Lawyers who are suing the Trump administration on behalf of education researchers heralded this about-face as a “big win.” Researchers can now finish projects in progress. 

    Still, researchers don’t have a way of publishing or presenting papers that use this data. Since the mass firings in mid-March, there is no one remaining inside the Education Department to review their papers for any inadvertent disclosure of student data, a required step before public release. And there is no process at the moment for researchers to request data access for future studies. 

    “While ED’s change-of-heart regarding remote access is welcome,” said Adam Pulver of Public Citizen Litigation Group, “other vital services provided by the Institute of Education Sciences have been senselessly, illogically halted without consideration of the impact on the nation’s educational researchers and the education community more broadly. We will continue to press ahead with our case as to the other arbitrarily canceled programs.”

    Pulver is the lead attorney for one of three suits fighting the Education Department’s termination of research and statistics activities. Judges in the District of Columbia and Maryland have denied researchers a preliminary injunction to restore the research and data cuts. But the Maryland case is now fast-tracked and the court has asked the Trump administration to produce an administrative record of its decision-making process by July 11. (See this previous story for more background on the court cases.)

    Related: Education researchers sue Trump administration, testing executive power

    Some NSF grants restored in California

    Just as the Education Department is quietly restarting some activities that DOGE killed, so is the National Science Foundation (NSF). The federal science agency posted on its website that it had reinstated 114 awards to 45 institutions as of June 30. NSF said it was doing so to comply with a federal court order to reinstate awards to all University of California researchers. It was unclear how many of these research projects concerned education, one of the major areas that NSF funds.

    Researchers and universities outside the University of California system are hoping for the same reversal. In June, the largest professional organization of education researchers, the American Educational Research Association, joined forces with a large coalition of organizations and institutions in filing a legal challenge to the mass termination of grants by the NSF. Education grants were especially hard hit in a series of cuts in April and May. Democracy Forward, a public interest law firm, is spearheading this case.

    Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or [email protected].

    This story about delaying the NAEP science score report was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. 


  • Using Technology to Restore Trust in Testing


    • Francesca Woodward is Group Managing Director for English at Cambridge University Press & Assessment.

    Anyone who has ever taken English language tests to advance in their studies or work knows how important it is to have confidence in their accuracy, fairness and transparency. 

    Trust is fundamental to English proficiency tests. But at a time of digital disruption, with remote testing on the rise and AI tools evolving rapidly, the integrity of English language testing is under pressure.

    Applied proportionally and ethically, technology can boost our trust in the exam process – adapting flexibly to test-takers’ skill levels, for instance, or allowing quicker marking and delivery of results. The indiscriminate use of technology, however, is likely to have unintended and undesirable consequences.

    Technology is not the problem. Overreliance on technology can be. A case in point is the shift to remote language testing that removes substantial human supervision from the process.

    During the pandemic, many educational institutions and test providers were forced to move to online-only delivery. Universities and employers adapted to the exceptional circumstances by recognising results from some of those newer and untried providers.

    The consequences of rushed digital adoption are becoming clear. Students arriving at UK universities after passing newer at-home tests have been found to be poorly equipped, relative to their peers – and more prone to academic misconduct. Students were simply not being set up to succeed.

    Some new at-home tests have since been de-recognised by universities amid reports that they have enabled fraud in the UK. Elsewhere, students have been paying proxies to sit online exams remotely. Online, videos explaining how to cheat on some of the newer tests have become ubiquitous.

    So how can universities mitigate these risks, while ensuring that genuine test-takers thrive academically?

    When it comes to teaching and learning a language – as well as assessing a learner’s proficiency – human expertise cannot be replaced. This is clear to experts – including researchers at Cambridge, which has been delivering innovation in language learning and testing for more than a century. 

    Cambridge is one of the forces behind IELTS, the world’s most trusted English test. We also deliver Cambridge English Qualifications, Linguaskill and other major assessments. Our experience tells us that people must play a critical role at every step of teaching, assessment and qualification.

    While some may be excited by the prospect of an “AI-first” model of testing, we should pursue the best of both worlds – human oversight prioritised and empowered by AI. This means, for instance, human-proctored tests delivered in test centres that use tried and proven tech tools.

    In language testing – particularly high-stakes language testing, such as for university or immigration purposes – one size does not fit all. While an online test taken at home may be suitable and even secure for some situations for some learners, others prefer or need to be assessed in test centres, where help is on hand and the technology can be consistently relied upon. For test-takers and universities, choice and flexibility are crucial.

    Cambridge has been using and experimenting with AI for decades. We know that in some circumstances AI can be transformative in improving users’ experience. For the highest-stakes assessments, innovation alone is no alternative to real human teaching, learning and understanding. And the higher the stakes, the more important human oversight becomes.

    The sector must reaffirm its commitment to quality, rigour and fairness in English language testing. This means resisting shortcuts and challenging providers that are all too ready to compromise on standards. It means investing in human expertise. It means using technology to enhance, not undermine, trust.

    This is not the time to “move fast and break things”. Every test provider, every university and every policymaker must play their part.
