  • DEI Orthodoxy Doesn’t Belong in NACE Competencies (opinion)

    If you’re not a supporter of the progressive DEI agenda, you’re not career ready. That’s one of the messages that the National Association of Colleges and Employers, America’s leading professional association for career placement, is sending to students.

    First established in 1956, NACE boasts a current membership of more than 17,000 dues-paying career services and recruitment professionals. Career counselors and others in higher education often cite NACE’s eight career readiness competencies to help students prepare for the job market and workplace.

    I was planning to use the NACE competencies this semester in a class on how liberal arts education equips students for the professional world and was dismayed to find that partisan criteria had crept into this valuable resource. The list includes—alongside things like teamwork, effective communication and technological proficiency—a competency called Equity & Inclusion. According to NACE, this means that a prospective professional will “engage in anti-oppressive practices that actively challenge the systems, structures, and policies of racism and inequity.”

    If you’re fully career ready, the group says, you will not merely “keep an open mind to diverse ideas and new ways of thinking.” You will also “advocate for inclusion, equitable practices, justice, and empowerment for historically marginalized communities” and will “address systems of privilege that limit opportunities” for members of those communities. In other words, you will subscribe to the view that American society is characterized by systemic racism and will work to break down America’s allegedly racist structure.

    NACE defines “equity” in this light: “Whereas equality means providing the same to all, equity means recognizing that we do not all start from the same place and must acknowledge and make adjustments to imbalances.”

    While these beliefs and attitudes might make someone a good fit at one of a diminishing number of “woke” corporations, they have little to do with career readiness in the ordinary sense of the term. Rather, the language NACE employs in its official materials implies a commitment to an ideological agenda that the organization has mixed into its definition of professional competence. NACE could be teaching students how to navigate the political diversity that characterizes most workplaces. Instead, through its influence in the college career counseling world, it is teaching them that acceptance of progressive orthodoxy on disputed questions of racial justice is a prerequisite for professional employment.

    NACE also does a disservice to students by signaling that workplace political engagement is universally valued by employers. In fact, many companies discourage it, and with good reason. In most work environments, political advocacy is more likely to cause tension and division than it is to foster cooperation and trust.

    As a college teacher and administrator, I’m especially troubled by the fact that NACE is conveying to students that their education should lead them to adopt a certain viewpoint on some of the most contentious political issues. The relationship between equity and equality, for example, is something that should be studied, discussed and debated in college, not taught as authoritative moral and political dogma.

    More generally, the way NACE talks about diversity, equity and inclusion ignores—or perhaps disdains—the political disagreement that is a normal and natural part of life in a democratic society, including the workplace. The organization undermines its professed commitment to open-mindedness when it implies that all open-minded people must be capital-P Progressives on issues such as systemic racism and equitable hiring practices. Like many institutions in recent years, NACE appears to have given in to pressure from activist members and embraced the “antiracist” worldview, sidelining the principles of openness and neutrality that are, or ought to be, hallmarks of professionalism.

    Notably, NACE indicates on its website that its equity and inclusion standard is under review. The organization cites recent “federal Executive Orders and subsequent guidance, as well as court decisions and regulatory changes, [that] may create legal risks that either preclude or discourage campuses and employers from using it.” This is encouraging. Better still would be for NACE to free itself from the ideological commitments that make its materials legally and politically risky in the first place. Let’s hope this venerable organization will get out of the business of DEI advocacy and focus on its core purposes of connecting students with employers and preparing students for professional life.

    Andrew J. Bove is the associate director for academic advising in the College of Liberal Arts and Sciences at Villanova University.

  • The higher education “market” still doesn’t work

When I was prepping for Policy Radar in October, I gave some brief thought to how students are positioned and imagined in the Post-16 Education and Skills White Paper.

    And if you’re not a fan of the student-as-consumer framing that has dominated policy for over a decade, I have bad news.

“Good value for students” will be delivered through “quality”-related conditional fee uplifts and better information for course choice.

    Ministers promise to “improve the quality of information for individuals” so they can pick courses that lead to “positive outcomes” – classic consumer-style transparency, outcome signalling and value propositions.

    And UCAS is leaned on as the main choice architecture for applicants, promising work to improve the quality, prominence and timing of information that applicants see.

I won’t repeat here why I don’t find the student-as-consumer framing anything like as damaging as some do. It was the subject of the first thing I ever wrote for this site, and the arguments are well-rehearsed.

    What I am interested in here is the extent to which the protections that are supposed to exist for students as consumers are working. And to do that, I thought I’d take a little trip down memory lane.

    Consumers at the heart of the system

    Back in 2013, when reforms were being implemented in England to triple tuition fees to £9,000, there had been a very conscious effort in the White Paper that underpinned those changes to frame students as consumers.

    HEFCE was positioned as a “consumer champion for students” tasked with “promoting competition”, we learned that “putting financial power into the hands of learners makes student choice meaningful” and a partnership with Which? was to improve the presentation of course information to help students get “value for money”.

The “forces of competition” were to replace the “burdens of bureaucracy” in driving up quality, the system was to be designed to be “more responsive to student choice” as a market demand signal, the National Student Survey was positioned as a tool for consumer comparison, and the liberation of number controls that had previously “limit[ed] student choice” was to enable students to “vote with their feet”.

    Students were at the heart of the system – as long as you imagined them as consumers.

The Office of Fair Trading (OFT) wasn’t so sure. The Competition and Markets Authority’s predecessor body had been lobbied by NUS over terms in student contracts that allowed academic sanctions for non-academic debt – and once that was resolved, it took a wider look at the “market” (for undergraduate students in England) to see whether it was working.

    It was keen to assess whether the risks inherent in applying market mechanisms to public services – information asymmetries, lock-in effects, regulatory gaps, and race-to-the-bottom dynamics – were being adequately managed.

    So it launched a call for information, and just before it got dissolved into the CMA, published a report of its findings with recommendations both for the successor body and government.

    Now, given the white paper has done little to change the framing, the question for me when re-reading it was whether any of the problems it identified are still around, or worse.

The inquiry was structured around four explicit questions – whether students were able to make well-informed choices that drive competition, whether students were treated fairly once they got to university, whether there was any evidence of anti-competitive behaviour between higher education institutions, and whether the regulatory environment was designed to protect students while facilitating entry, innovation, and managed exit by providers.

    On that third one, it found no evidence of anti-competitive behaviour, and in the White Paper, the CMA is now said to be working with the Department for Education (DfE) to clarify how collaboration between providers can happen within the existing legal framework. It’s the others I’ve looked at in detail below.

    Enabling students to make informed choices

    The OFT’s first investigation area was whether students could make the well-informed choices that the marketisation model relied upon.

    The theoretical benefits of competition – providers competing on quality, students voting with their feet, market forces driving standards – were only going to work if consumers could assess what they were buying. Given education is a “post-experience good” that can’t be judged until after consumption, this was always going to be the trickiest part of making a market work.

As such, it identified information asymmetry as one of three meta-themes underlying market dysfunction. Students were making life-changing, debt-incurring decisions with incomplete, misleading, inaccessible or outdated information – potentially in breach of the Consumer Protection from Unfair Trading Regulations, and leaving the entire choice-and-competition model built on sand.

On teaching quality indicators, students couldn’t find basic information about educational experience. Graham Gibbs’ research had identified key predictors – staff-to-student ratios, funding per student, who teaches, class sizes, contact hours – yet none were readily available. Someone reviewing physics courses couldn’t tell whether they’d get 11 or 25 hours weekly.

By 2014, the National Student Survey (NSS) was prominent but only indirectly measured teaching quality. Without observable process variables, institutions faced weak incentives to invest in teaching and students couldn’t exert competitive pressure. For the OFT, the choice mechanism was essentially decorative.

On employment outcomes, career prospects were the major decision factor, yet DLHE tracked employment only six months post-graduation when many were in temporary roles. The 40-month longitudinal DLHE had sample sizes too small for course-level statistics – students couldn’t compare actual career trajectories. It was also worried about value-added – employment data didn’t control for intake characteristics. Universities taking privileged students looked advantageous regardless of what they actually contributed – for the OFT, that risked perverse incentives where institutions were rewarded for cream-skimming privileged students rather than adding educational value.

    It was also worried about prestige signals like entry requirements and research rankings crowding out quality signals. Presenting outcomes without contextualising intake breached a basic market principle – for the OFT, consumers should assess product quality independent of customer characteristics. And on hidden costs, an NUS survey had found 69 per cent of undergraduates incurred additional charges beyond tuition – equipment hire, studio fees, bench fees – many of which were unknown when applying, raising legal concerns and practical affordability questions.

    The OFT recommended that HEFCE’s ongoing information review address coverage gaps around the learning environment including contact hours, class sizes and teaching approaches; that HEFCE and the sector focus on improving quality and comparability of long-term employment and salary data; that employment data account for institutions taking students with different backgrounds and abilities, acknowledging significant methodological challenges around controlling for prior attainment, socioeconomic background and subject mix; and that material information about additional costs be disclosed to avoid misleading omissions.

    A decade later, things are much worse. DiscoverUni replaced Unistats but core Gibbs indicators remain absent. Contact hours became a political football – piloted as a TEF metric in 2017, abandoned as unworkable, then demanded by ministers in 2022 with sector resistance fearing “Mickey Mouse degrees” tabloid headlines. Staff ratios, class sizes and teaching qualifications still aren’t standardised. The TEF provides gold/silver/bronze ratings but doesn’t drill down to process variables or subject areas predicting actual experience.

    On employment outcomes, things are marginally better but inadequate. Graduate Outcomes tracks employment at 15 months rather than six, but there’s still no standardised long-term earnings trajectory data at course level. On value-added, the situation is virtually unchanged. OfS uses benchmarks in regulation but these aren’t prominently displayed for prospective students. IFS research periodically demonstrates dramatic differences between raw and adjusted outcomes, but this isn’t integrated into official student-facing information.

    The Russell Group benefits enormously from selecting privileged students whose career prospects would be strong regardless of institutional quality. Students can’t distinguish educational quality from privilege – arguably worse given increased marketing of graduate salary data without the context that would make it meaningful. And on hidden costs, the picture is mixed and hard to assess. There is no standardised disclosure format, no regulatory requirement for prominence at application, and a real mess over wider participation costs. The fundamental issue persists.

Most importantly, well-informed choices pretty much rely on the idea that information is predictive – whether you’re talking about higher education’s experience outputs or its outcomes, what a student is told is supposed to signal what they’ll get. But rapid contraction of courses (and modules within courses), coupled with significant changes in the labour market, all mean that prediction is becoming increasingly futile. That’s a market that, on the OFT’s terms, doesn’t work.

    The student experience at university

    Back in 2013, the OFT identified lock-in effects as the second of three meta-themes undermining the market model.

    Once enrolled, students were effectively trapped by high switching costs, weak credit transfer, financial complications and social costs. For the regulator, that fundamentally broke the competitive mechanism that the entire reform package relied upon. If students couldn’t credibly exit poor provision, institutions faced weak pressure to maintain quality after enrolment. The threat of exit – essential to making markets work – was largely hollow. That enabled institutions to change terms, raise fees and alter courses with relative impunity.

    It found only 1.9 per cent of students switched institutions nationally. While around 90 per cent of institutions awarded credits in theory, there was no guaranteed right to transfer them with assessment happening case by case. Information about credit transfer was technical and non-user friendly. Students faced multiple barriers including difficulty assessing credit equivalence, poor information, financial complications and high social costs of relocating. And students leaving mid-year had to wait until next academic year to access funding again, particularly trapping disadvantaged students in unsuitable courses.

    On fees and courses changing mid-stream, the OFT received reports of fees increasing mid-way through courses, particularly for international students – 58 per cent of institutions didn’t offer fixed tuition for international students on courses over one year. That contravened principles requiring students to know total costs upfront and potentially constituted aggressive commercial practices by exploiting students’ constrained positions.

    Course changes posed similar problems – locations changing, modules reduced, lectures moved to weekends, content changing, modules unavailable. Terms permitting key features to change without valid reason were potentially unfair.

    On misleading information, the OFT heard concerns about false or misleading information about graduate prospects, accreditation, qualification type, course content and facilities, breaching Consumer Protection from Unfair Trading Regulations. Institutions also failed to inform students of potential fee increases, course changes and mandatory additional charges – material omissions affecting informed decisions.

On complaints and redress, resolution times were improving – the share of complaints taking over a year fell from 20 per cent in 2009 to 5 per cent – but 12 per cent still took six-plus months. Students often graduated before complaints were resolved. A power imbalance between students and institutions required accessible, clear pathways – yet students reported difficulty finding complaint forms, fear of complaining and being put off by bureaucratic processes. Many were unaware of the OIA or how to access it. There was no public data on complaints handled internally by institutions, meaning systemic problems remained hidden and students couldn’t make informed choices between institutions.

    The OFT didn’t make formal recommendations on credit transfer, noting that difficulties arose partly from inevitable variations in how institutions structure degrees, but highlighted that institutions appeared to lack processes for assessing credit equivalence. It implied that fees and course terms needed greater transparency and stability, that misleading information must be eliminated, that academic sanctions should only apply to academic debt, that complaint processes needed to be faster and more accessible with transparency about complaint volumes, that OIA coverage should be comprehensive, and that the structural barriers to price competition needed addressing.

    A decade later, the picture is bleak. Credit transfer has worsened substantially – despite being crucial to the Lifelong Learning Entitlement, it remains one of those old chestnuts where the collective impulse is to explain why it cannot happen. Multiple government attempts have been unsuccessful, and recent OIA complaints show students still don’t realise until too late that transferring will significantly impact loan funding or bursaries.

    On fees and courses changing, the problem persists and legal standards have tightened considerably with both Ofcom and the CMA now viewing inflation-linked mid-contract price increases as causing consumer harm. The 2024 increase to £9,535 exposed widespread non-compliance with many institutions lacking legally sound terms.

Unilateral course changes without proper consent remain endemic. The CMA secured undertakings from UEA in 2017, and recent OfS and Trading Standards interventions have identified unreasonably wide discretion in terms. This summer, when I looked, fewer than a third of providers had deleted industrial action from their force majeure clauses.

    On misleading information, the DMCC Act has tightened requirements but enforcement is patchy and two-tier with new providers facing enhanced scrutiny while registered providers don’t face the same requirements. Students still cannot bring direct legal claims for misleading omissions.

    On complaints, in 2021 the OIA closed 2,654 complaints but failed to meet its KPI of closing 75 per cent within six months, and the OIA’s influence seems to be waning – with providers implementing good practice recommendations on time dropping from 88 per cent in 2018 to just 60 per cent recently – significantly worse than 2014. Provider websites still include demotivating language about the OIA having no regulatory powers, and there’s still no public data on internal complaints.

    Almost every problem identified has persisted or worsened. Credit transfer remains a policy aspiration without practical implementation. Mid-course changes have intensified under financial pressure. Complaints resolution has deteriorated. Price competition remains absent. Students remain locked into courses with weak protections against opportunistic behaviour by institutions under financial strain.

    The regulatory environment

    The OFT identified regulatory-market misalignment as the third meta-theme. A framework designed for a government-funded sector was governing a student-funded market. As funding shifted, areas without direct public funding fell outside regulatory oversight, creating gaps in student protection and quality assurance. The regulatory architecture hadn’t caught up with the marketisation it was supposed to facilitate.

It found a system that relied on ad hoc administrative arrangements layered on decades-old frameworks, lacking democratic legitimacy and a clear statutory basis. Multiple overlapping responsibilities created extreme complexity – the Regulatory Partnership Group produced an Operating Framework just to map the arrangements.

    The OFT’s recommendations were implicit – comprehensive reform with primary legislation, simplified structures, reduced uncertainty, accommodation of innovation, competitive neutrality, independent quality assurance, clear exit regimes and quality safeguards.

    Later in the decade, HERA 2017 provided primary legislation establishing the Office for Students (OfS) with statutory frameworks, attempting to address the funding model misalignment. But complexity has arguably worsened dramatically – and beyond OfS, providers and their students are supposed to navigate DfE, UKVI, HESA, QAA, OIA, EHRC, employment law, charity law, Foreign Influence Registration, Prevent and more.

Crucially, from a student perspective, enrolling is now riskier. Student Protection Plans exist, but in a sudden insolvency the funds required are unlikely to be protected. OfS has limited teach-out quality monitoring. Plans are outdated and unrealistic – significantly worse than in 2014. With financial pressures, there’s evidence of quality degradation – staff leaving, class sizes swelling, any warm body delivering modules – yet OfS has no meaningful monitoring.

Survival strategies involve cutting contact hours, study support, module choices and learning resources. Quality floor enforcement remains weak. The OFT’s predicted race to the bottom may be materialising.

    What the OFT didn’t see coming

    The 2014 report identified market failures within domestic undergraduate provision but couldn’t anticipate how internationalisation would create entirely new categories of consumer harm. The report barely addressed international students – who by 2024 would represent over 30 per cent of the student body at many institutions.

International student recruitment spawned multiple interlocking problems. International postgraduate taught students face hefty non-refundable deposits. When students discover that agents pushed unsuitable courses, or when accommodation falls through, they lose thousands – creating a regulatory dead-end where the CMA refers complaints to OfS, OfS can’t update on progress, and the OIA says applicants aren’t yet students. UK universities pay agents 5–30 per cent of first-year tuition, yet BUILA and UUKi guidelines advise against publishing commission fees. A BUILA survey found significant proportions of recruitment staff believe agents prioritise higher commission over best-fit programmes. A model in which these “vulnerable consumers” are only around for a year, and whose immigration status is managed by the university, is not an ideal breeding ground for consumer confidence when something goes wrong.

    Fee transparency has also emerged as a distinct problem the OFT couldn’t anticipate. Universities’ fee increase policies fail to comply with DMCC drip pricing requirements, using vague language like “fees may rise with inflation” without specifying an index, amount or giving equal prominence. The DMCC Act Section 230 strengthens requirements around total cost presentation – yet widespread non-compliance exists with no enforcement.

    Time for a re-run

    David Behan’s 2024 review of OfS argued that regulating in the student interest required OfS to act as a consumer protection regulator, noting the unique characteristics of higher education as a market where students make one-off, life-changing choices that cannot easily be reversed.

    He recommended OfS be given new powers to address consumer protection issues systematically, including powers to investigate complaints, impose sanctions for unfair practices and require institutions to remedy harm.

    The Post-16 Education and Skills White Paper contains no sign of these powers. Instead, OfS has developed something called “treating students fairly” as part of its regulatory framework, which applies only to new providers joining the register, and exempts the 130-plus existing providers where the problems concentrate.

The framework doesn’t address CAS allocation crises, agent commission opacity, accommodation affordability, the mess of participation costs information, mid-contract price increases, clauses that limit compensation for breach of contract to the total paid in fees, under- and over-recruitment, restructures that render promises meaningless, a lack of awareness of rights over changes, weak regulation of disabled students’ access, protection that doesn’t work, and a regulator that hopes students have paid their fees by credit card. The issues the OFT identified in 2014 have not been resolved – they have intensified and multiplied alongside entirely new categories of harm that never appeared in the original review. And in any case, OfS only covers England.

    There are also so many issues I’ve not covered off in detail – not least the hinterland of ancillary markets that quietly shape the “purchase”. Accommodation tie-ins and exclusive nomination deals that funnel applicants into PBSA on university letterheads. Guarantor insurance and “admin fees by another name”. Pressure-selling tactics at Clearing. Drip pricing across compulsory materials, fieldwork and resits with no total cost of ownership up front.

    International applicants squeezed by CAS timing, opaque visa-refusal refunds and agent commission structures the sector still won’t publish. And in the franchising boom, students can’t tell who their legal counterparty is, Student Protection Plans don’t bite cleanly down the chain, and complaints ping-pong between delivery partner, validator and redress schemes.

Then there are invisible digital and welfare layers that a consumer lens keeps missing. VLE reliability and service levels that would trigger service credits in any other sector, but here are just “IT issues”. Prospectuses that promise personalised disability or welfare support without disclosing capacity limits or waiting times. Placements and professional accreditation marketed as features, then quietly downgraded with “not guaranteed” microprint when markets tighten.

    And the quiet austerity of mid-course “variation” – fewer options, thinner contact, shorter opening hours, more asynchronous delivery – with no price adjustment, no consent and no meaningful exit. If this is a market, where are the market remedies?

    What’s needed ideally is a bespoke set of student rights that recognise the distinctive features of higher education as an experience – the information asymmetries, the post-experience good characteristics, the lock-in effects, the visa and immigration entanglements and the power imbalances between institutions and individuals.

    But if that’s not coming – and the White Paper suggests it isn’t – then the market architecture remains, and with it the need for functioning regulation.

    The CMA should do its job. It should re-run the 2014 review to assess how the market has evolved over the past decade, expand its coverage to include the issues that have emerged, and use the powers that the DMCC Act has given it. By its own definitions, the evidence of harm is overwhelming.

  • We’re testing preschoolers for giftedness. Experts say that doesn’t work

    by Sarah Carr, The Hechinger Report
    October 31, 2025

    When I was a kindergartner in the 1980s, the “gifted” programming for my class could be found inside of a chest. 

    I don’t know what toys and learning materials lived there, since I wasn’t one of the handful of presumably more academically advanced kiddos that my kindergarten teacher invited to open the chest. My distinct impression at the time was that my teacher didn’t think I was worthy of the enrichment because I frequently spilled my chocolate milk at lunch and I had also once forgotten to hang a sheet of paper on the class easel — instead painting an elaborate and detailed picture on the stand itself. The withering look on my teacher’s face after seeing the easel assured me that, gifted, I was not.

    The memory, and the enduring mystery of that chest, resurfaced recently when New York City mayoral front-runner Zohran Mamdani announced that if elected on Nov. 4, he would support ending kindergarten entry to the city’s public school gifted program. While many pundits and parents debated the political fallout of the proposal — the city’s segregated gifted program has for decades been credited with keeping many white and wealthier families in the public school system — I wondered what exactly it means to be a gifted kindergartner. In New York City, the determination is made several months before kindergarten starts, but how good is a screening mechanism for 4-year-olds at predicting academic prowess years down the road? 

New York is not alone in opting to send kids as young as preschool down an accelerated path, no repeat display of giftedness required. It’s common practice at many private schools to try to measure young children’s academic abilities for admissions purposes. Other communities, including Houston and Miami, start gifted or accelerated programs in public schools as early as kindergarten, according to the National Center for Research on Gifted Education. When I reported on schools in New Orleans 15 years ago, a few highly sought-after public schools there even had gifted prekindergarten programs, enrolling 4-year-olds whose seemingly stunning intellectual abilities were determined at age 3. It’s more common, however, for gifted programs in public schools to start between grades 2 and 4, according to the center’s surveys.

    There is an assumption embedded in the persistence of gifted programs for the littles that it’s possible to assess a child’s potential, sometimes before they even start school. New York City has followed a long and winding road in its search for the best way to do this. And after more than five decades, the city’s experience offers a case study in how elusive — and, at times, distracting — that quest remains. 

    Three main strategies are used to assign young children to gifted programs, according to the center. The most common path is cognitive testing, which attempts to rate a child’s intelligence in relation to their peer group. Then there is achievement testing, which is supposed to measure how much and how fast a child is learning in school. And the third strategy is teacher evaluations. Some districts use the three measures in combination with each other.

    For nearly four decades, New York prioritized the first strategy, deploying an ever-evolving array of cognitive and IQ tests on its would-be gifted 4-year-olds — tests that families often signed up for in search of competitive advantage as much as anything else.

    Several years ago, a Brooklyn parent named Christine checked out an open house for a citywide gifted elementary school, knowing her child was likely just shy of the test score needed to get in. (Christine did not want her last name used to protect her daughter’s privacy.) 

    The school required her to show paperwork at the door confirming that her daughter had a relatively high score, and when Christine flashed the proof, the PTA member at the door congratulated her. That, and the lack of diversity, gave the school an exclusive vibe, Christine recalled. 

    “The resources were incredible,” she said. “The library was huge, there was a room full of blocks. It definitely made me envious, because I knew she was not getting in.” Yet years later, she feels “icky” about even visiting.

    Eishika Ahmed’s parents had opportunities of all kinds in mind when they had her tested for gifted kindergarten nearly two decades ago. Ahmed, now 23, remembers an administrator in a small white room with fluorescent lights asking her which boat in a series of cartoonish pictures was “wide.” The then 4-year-old had no idea. 

    “She didn’t look very pleased with my answer,” Ahmed recalled. She did not get into the kindergarten program.


    Equity and reliability have been long-running concerns for districts relying on cognitive tests.

    In New York, public school parents in some districts were once able to pay private psychologists to evaluate their children — a permissiveness that led to “a series of alleged abuses,” wrote Norm Fruchter, a now-deceased activist, educator and school board leader, in a 2019 article called “The Spoils of Whiteness: New York City’s Gifted and Talented Programs.”

    In New Orleans, there was a similar disparity between the private and public testing of 3-year-olds when I lived and reported on schools there. Families could sit on a waitlist, sometimes for months, to take their children through the free process at the district central office. In 2008, the year I wrote about the issue, only five of the 153 3-year-olds tested by the district met the gifted benchmark. But families could also pay a few hundred dollars and go to a private tester who, over the same time period, identified at least 64 children as gifted. “I don’t know if everybody is paying,” one parent told me at the time, “but it defeats the purpose of a public school if you have to pay $300 to get them in.”

    Even after New York City districts outlawed private testers, concerns persisted about parents paying for pricey and extensive test prep to teach their children common words and concepts featured on the tests. Moreover, some researchers have worried about racial and cultural bias in cognitive tests more generally. Critics, Fruchter wrote, had long argued that such tests at least partly assess knowledge of the “reigning cultural milieu in which test-makers and applicants alike were immersed.”

    Across the country, these concerns have led some schools and districts, including New York City, to shift to “nonverbal tests,” which try to assess innate capacity more than experience and exposure. 

    But those tests haven’t made cognitive testing more equitable, said Betsy McCoach, a professor of psychometrics and quantitative psychology at Fordham University and co-principal investigator at the National Center for Research on Gifted Education.

    “There is no way to take prior experience out of a test,” she said. “I wish we could.” Children who’ve had more exposure to tests, problem-solving and patterns are still going to have an advantage on a nonverbal test, McCoach added. 

    And no test can overcome the fact that for very young children, scores can change significantly from year to year, or even week to week. In 2024, researchers analyzed more than 200 studies on the stability of cognitive abilities at different ages. They found that for 4-year-olds, cognitive test scores are not very predictive of long-term scores — or even, necessarily, short-term ones. 

    There’s not enough stability “to say that if we assess someone at age 4, 5, 6 or 7 that a child would or wouldn’t be well-served by being in a gifted program” for multiple years, said Moritz Breit, the lead author of the study and a post-doctoral researcher in the psychology department at the University of Trier in Germany.

    Scores don’t start to become very consistent until later in elementary school, with stability peaking in late adolescence.

    But for 4-year-olds? “Stability is too low for high-stakes decisions,” he said.

    Eishika Ahmed is just one example of how early testing may not predict future achievement. Even though she did not enroll in the kindergarten gifted program, by third grade she was selected for an accelerated program at her school called “top class.”

    Years later, still struck by the inequity of the whole process, she wrote a 2023 essay for the think tank The Century Foundation about it. “The elementary school a child attends shouldn’t have such significant influence over the trajectory of their entire life,” she wrote. “But for students in New York City public schools, there is a real pipeline effect that extends from kindergarten to college. Students who do not enter the pipeline by attending G&T programs at an early age might not have the opportunity to try again.”

    Partly because of the concerns about cognitive tests, New York City dropped intelligence testing entirely in 2021 and shifted to declaring kindergartners gifted based on prekindergarten teacher recommendations. A recent article in Chalkbeat noted that after ending the testing for the youngest, diversity in the kindergarten gifted program increased: In 2023-24, 30 percent of the children were Black and Latino, compared to just 12 percent in 2020, Chalkbeat reported. Teachers in the programs also describe enrolling a broader range of students, including more neurodivergent ones. 

    The big problem, according to several experts, is that when hundreds of individual prekindergarten teachers evaluate 4-year-olds for giftedness, any consistency in defining it can get lost, even if the teachers are guided on what to look for. 

    “The word is drained of meaning because teachers are not thinking about the same thing,” said Sam Meisels, the founding executive director of the Buffett Early Childhood Institute at the University of Nebraska.

    Breit said that research has found that teacher evaluations and grades for young children are less stable and predictive than the (already unstable) cognitive testing. 

    “People are very bad at looking at another person and inferring a lot about what’s going on under the hood,” he said. “When you say, ‘Cognitive abilities are not stable, let’s switch to something else,’ the problem is that there is nothing else to switch to when the goal is stability. Young children are changing a lot.”


    No one denies that access to gifted programming has been transformative for countless children. McCoach, the Fordham professor, points out that there should be something more challenging for the children who arrive at kindergarten already reading and doing arithmetic, who can be bored moving at the regular pace.

    In an ideal world, experts say, there would be universal screening for giftedness (which some districts, but not New York, have embraced), using multiple measures in a thoughtful way, and there would be frequent entry — and exit — points for the programs. In the early elementary years, that would look less like separate gifted programming and a lot more like meeting every kid where they are. 

    “The question shouldn’t really be: Are you the ‘Big G’?” said McCoach. “That sounds so permanent and stable. The question should be: Who are the kids who need something more than what we are providing in the curriculum?”

    But in the real world, individualized instruction has frequently proved elusive amid underresourced schools, large class sizes and teachers tasked with catching up the students who are furthest behind. That persistent struggle has provided advocates of gifted education in the early elementary years with what’s perhaps their most powerful argument in sustaining such programs — but it reminds me of that old adage about treating the symptom rather than the disease. 

    At some point a year or two after kindergarten, I did get the chance to be among the chosen when I was selected for a pull-out program known as BEEP. I have no recollection of how we were picked, how often we met or what we did, apart from a performance the BEEP kids held of St. George and the Dragon. I played St. George and I remember uttering one line, declaring my intent to fight the dragon or die. I also remember vividly how much being in BEEP boosted my confidence in my potential — probably its greatest gift.

    Forty years later, the research is clear that every kid deserves the chance — and not just one — to slay a dragon. “You want to give every child the best opportunity to learn as possible,” said Meisels. But when it comes to separate gifted programming for select early elementary school students, “Is there something out there that says their selection is valid? We don’t have that.” 

    “It seems,” he added, “to be a case of people just fooling themselves with the language.” 

    Contact contributing writer Sarah Carr at [email protected]. 

    This story about gifted education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

    This article first appeared on The Hechinger Report (https://hechingerreport.org/were-testing-preschoolers-for-giftedness-experts-say-that-doesnt-work/) and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).



  • Texas Teachers, Parents Fear STAAR Overhaul Doesn’t Do Enough – The 74


    Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter

    Texas public school administrators, parents and education experts worry that a new law to replace the state’s standardized test could increase student stress and the amount of time students spend taking tests, instead of reducing both.

    The new law comes amid criticism that the State of Texas Assessment of Academic Readiness, or STAAR, creates too much stress for students and devotes too much instructional time to the test. The updated system aims to ease the pressure of a single exam by replacing STAAR with three shorter tests, which will be administered at the beginning, middle and end of the year. It will also ban practice tests, which Texas Education Agency Commissioner Mike Morath has said can take up weeks of instruction time and aren’t proven to help students do better on the standardized test. But some parents and teachers worry the changes won’t go far enough and that three tests will triple the pressure.

    The law also calls for the TEA to study how to reduce the weight testing carries on the state’s annual school accountability ratings — which STAAR critics say is one reason why the test is so stressful and absorbs so much learning time — and create a way for the results of the three new tests to be factored into the ratings.

    That report is not due until the 2029-30 school year, and the TEA is not required to implement those findings. Some worry the new law will mean schools’ ratings will continue to heavily depend on the results from the end-of-year test, while requiring students to start taking three exams. In other words: same pressure, more testing.

    Cementing ‘what school districts are already doing’

    The Texas Legislature passed House Bill 8 during the second overtime lawmaking session this year to scrap the STAAR test.

    Many of the reforms are meant to better monitor students’ academic growth throughout the school year.

    For the early and mid-year exams, schools will be able to choose from a menu of nationally recognized assessments approved by the TEA. The agency will create the third test. Under the law, the three new tests will use percentile ranks comparing students to their peers in Texas; the third will also assess a student’s grasp of the curriculum.

    In addition, scores must be released about two days after students take the exam, so teachers can better tailor their lessons to student needs.

    State Sen. Paul Bettencourt, R-Houston, one of the architects behind the push to revamp the state’s standardized test, said he would like the first two tests to “become part of learning” so they can help students prepare for the end-of-year exam.

    But despite the changes, the new testing system will likely resemble the current one when it launches in the 2027-28 school year, education policy experts say.

    “It’s gonna take a couple of years before parents realize, to be honest, that you know, did they actually eliminate STAAR?” said Bob Popinski with Raise Your Hand Texas, an education advocacy nonprofit.

    Since many schools already conduct multiple exams throughout the year, the law will “basically codify what school districts are already doing,” Popinski said.

    Lawmakers instructed TEA to develop a way to measure student progress based on the results from the three tests. But that metric won’t be ready when the new testing system launches in the 2027-28 school year. That means results from the standardized tests, and their weight in the state’s school accountability ratings system, will remain similar to what they are now.

    Every Texas school district and campus currently receives an A-F rating based on graduation benchmarks and how students perform on state tests, their improvement in those areas, and how well they educate disadvantaged students. The best score out of the first two categories accounts for most of their overall rating. The rest is based on their score in the last category.

    The accountability ratings are high stakes for school districts, which can face state sanctions for failing grades — from being forced to close school campuses to the ousting of their democratically elected school boards.

    Supporters of the state’s accountability system say it is vital to assess whether schools are doing a good job at educating Texas children.

    “The last test is part of the accountability rating, and that’s not going to change,” Bettencourt said.

    Critics say the current ratings system fails to take into account a lot of the work schools are doing to help children succeed outside of preparing them for standardized tests.

    “Our school districts are doing a lot of interesting, great things out there for our kids,” Popinski said. “Academics and extracurricular activities and co-curricular activities, and those just aren’t being incorporated into the accountability report at all.”

    In response to calls to evaluate student success beyond testing, HB 8 also instructs the TEA to track student participation in pre-K, extracurriculars and workforce training in middle schools. But none of those metrics will be factored into schools’ ratings.

    “There is some other interest in looking at other factors for accountability ratings, but it’s not mandated. It’s just going to be reviewed and surveyed,” Bettencourt said.

    Student stress worries

    Even though many schools already conduct testing throughout the year, Popinski said the new system created by HB 8 could potentially boost test-related stress among students.

    State Rep. Brad Buckley, R-Salado, who sponsored the testing overhaul in the Texas House, wrote in a statement that “TEA will determine testing protocols through their normal process.” This means it will be up to TEA to decide whether to keep or change the rules that it currently uses for the STAAR test. Those include that schools dedicate three to four hours to the exam and that administrators create seating charts, spread out desks and manage restroom breaks.

    School administrators said the worst-case scenario would be if all three of the new tests had to follow lockdown protocols like the ones that currently come with STAAR. Holly Ferguson, superintendent of Prosper ISD, said the high-pressure environment associated with the state’s standardized test makes some of her students ill.

    “It shouldn’t be that we have kids sick and anxiety is going through the roof because they know the next test is coming,” Ferguson said.

    The TEA did not respond to a request for comment.

    HB 8 also seeks to limit the time teachers spend preparing students for state assessments, partly by banning benchmark tests for grades 3-8. Bettencourt told the Tribune the new system is expected to save 22.5 instructional hours per student.

    Buckley said the new law “will reduce the overall number of tests a student takes as well as the time they spend on state assessments throughout the school year, dramatically relieving the pressure and stress caused by over-testing.”

    But some critics worry that any time saved by banning practice tests will be lost by testing three times a year. In 2022, Florida changed its testing system from a single exam to three tests at the beginning, middle and end of the year. Florida Gov. Ron DeSantis said the new system would reduce test time by 75%, but the number of minutes students spent taking exams almost doubled the year the new system went into effect.

    Popinski added that much of the stress the test induces comes from the heavy weight the end-of-year assessment holds on a school’s accountability rating. The pressure to perform that the current system places on school district administrators transfers to teachers and students, critics have said.

    “The pressures are going to be almost exactly the same,” Popinski said.

    What parents, educators want for the new test

    Retired Fort Worth teacher Jim Ekrut said he worries about the ban on practice tests, because in his experience, test preparation helped reduce his students’ anxiety.

    Ekrut said teachers’ experience assessing students is one reason why educators should be involved in creating the new end-of-year exam.

    “The better decisions are going to be made with input from people right on that firing line,” Ekrut said.

    HB 8 requires that a committee of educators appointed by the commissioner review the new test that TEA will create. Some, like Ferguson and David Vinson, the former Wylie ISD superintendent who started at Conroe this week, said they hope the menu of possible assessments districts can pick from for the first two tests includes a national program they already use: Measures of Academic Progress, or MAP.

    The Prosper and Wylie districts are among those that administer MAP exams at the beginning, middle and end of the year. More than 4,500 school districts nationwide use these online tests, which change the difficulty of the questions as students log their answers to better assess their skill level and growth. A 2024 study conducted by the organization that runs MAP found that the test is a strong indicator of how students perform on the end-of-year standardized test.

    Criterion-referenced tests like STAAR measure a student’s grasp of grade-level skills, whereas norm-referenced exams like MAP measure a student’s growth over the course of instruction. Vinson described this program as a “checkup,” while STAAR is an “autopsy.”

    Rachel Spires, whose children take MAP tests at Sunnyvale ISD, said MAP testing doesn’t put as much pressure on students as STAAR does.

    Spires said her children’s schedules are rearranged for the month of April, when Sunnyvale administers the STAAR test, and parents are barred from coming to campus for lunch. MAP tests, on the other hand, typically take less time to complete, and the school has fewer rules for how they are administered.

    “When the MAP tests come around, they don’t do the modified schedules, and they don’t do the review packets and prep testing or anything like that,” Spires said. “It’s just like, ‘Okay, tomorrow you’re gonna do a MAP test,’ and it’s over in like an hour.”

    For Ferguson, the Prosper ISD superintendent, a relaxed environment around testing is key to achieving the new law’s goal of reducing student stress.

    “If it’s just another day at school, I’m all in,” Ferguson said. “But if we lock it down, and we create a very compliance-driven system that’s very archaic and anxiety- and worry-inducing to the point that it starts having potential harmful effects on our kids … our teachers and our parents, I’m not okay with that.”

    This article originally appeared in The Texas Tribune at https://www.texastribune.org/2025/09/24/texas-staar-replacement-map-testing/. The Texas Tribune is a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.




  • What OfS’ data on harassment and sexual misconduct doesn’t tell us

    New, England-wide data from the Office for Students (OfS) confirms what we have known for a long time.

    A concerningly high number of students – particularly LGBTQ+ and disabled people, as well as women – are subjected to sexual violence and harassment while studying in higher education. Wonkhe’s Jim Dickinson reviews the findings elsewhere on the site.

    The data is limited to final year undergraduates who filled out the National Student Survey, who were then given the option to fill out this further module. OfS’ report on the data details the proportion of final year students who experienced sexual harassment or violence “since being a student” as well as their experiences within the last 12 months.

    It also includes data on experiences of reporting, as well as the prevalence of staff-student intimate relationships – but its omission of all postgraduate students, as well as all undergraduates other than those in their final year, means that its findings should be seen as one piece of a wider puzzle.

    Here, I try to lay out a few of the other pieces of the puzzle to help put the new data in context.

    The timing is important

    On 1 August 2025, the new condition of registration for higher education providers in England came into force. It requires all institutions in England to address harassment and sexual misconduct, including by training all staff and students, taking steps to “prevent abuses of power” between staff and students, and publishing a “single, comprehensive source of information” about their approach to this work, including support services and the handling of reports.

    When announcing this regulatory approach last year, OfS also released two studies from 2024 – a pilot prevalence survey of a small selection of English HEIs, as well as a ‘poll’ of a representative sample of 3,000 students. I have discussed that data, as well as the regulation more generally, elsewhere.

    In this year’s data release, 51,920 students responded to the survey, an overall response rate of 12.1 per cent. This is a significantly larger sample size than either of the 2024 studies, which comprised responses from 3,000 and 5,000 students respectively.

    This year’s survey finds somewhat lower prevalence figures for sexual harassment and “unwanted sexual contact” than last year’s studies. In the new survey, sexual harassment was experienced by 13.3 per cent of respondents within the last 12 months (and by 24.5 per cent since becoming a student), while 5.4 per cent of respondents had been subjected to unwanted sexual contact or sexual violence within the last 12 months (since becoming a student, this figure rises to 14.1 per cent).

    By any measure, these figures represent a very concerning level of gender-based violence in higher education populations. But if anything, they are at the lower end of what we would expect.

    By comparison, in OfS’ 2024 representative poll of 3000 students, over a third (36 per cent) of respondents had experienced some form of unwanted sexual contact since becoming a student with a fifth (21 per cent) stating the incident(s) happened within the past year. 61 per cent had experienced sexual harassment since being a student, and 43 per cent of the total sample had experienced this in the past year.

    The lower prevalence in the latest dataset could be (in part) because it draws on a population of final year undergraduate students – studies from the US have repeatedly found that first year undergraduate students are at the greatest risk, especially when they start their studies.

    Final year students may simply have forgotten – or blocked out – some of their experiences from first year, leading to lower prevalence. They may also have dropped out. The timing of the new survey is also important – the NSS is completed in late spring, while we would expect more sexual harassment and violence to occur when students arrive at university in the autumn.

    A study carried out in autumn or winter might find higher prevalence. Indeed, the previous two studies carried out by OfS involved data collected at different times of year – in August 2023 (for the 3,000-strong poll) and ‘autumn 2023’ (for the pilot prevalence study).

    A wide range of prevalence

    Systematic reviews published in 2023 by Steele et al. and Lagdon et al., covering the UK, Ireland and the US, have found prevalence rates of sexual violence ranging from 7 per cent to 86 per cent.

    Steele et al.’s recent study of Oxford University found that 20.5 per cent of respondents had experienced at least one act of attempted or forced sexual touching or rape, and 52.7 per cent of respondents experienced at least one act of sexual harassment within the past year.

    Lagdon et al.’s study of “unwanted sexual experiences” in Northern Ireland found that a staggering 63 per cent had been targeted. And my own study of a UK HEI found that 30 per cent of respondents had been subjected to sexual violence since enrolling in their university, and 55 per cent had been subjected to sexual harassment.

    For now, I don’t think it’s helpful to get hung up on comparing datasets between last year and this year that draw on somewhat different populations. It’s also not necessarily important that respondents were self-selecting within those who filled out the NSS – a US study compared prevalence rates for sexual contact without consent among students between a self-selecting sample and a non-self-selecting sample, finding no difference.

    The key take-home message is that students are being subjected to a significant level of sexual harassment and violence, and that women, LGBTQ+ and disabled students in particular are unable to access higher education in safety.

    Reporting experiences

    The findings on reporting reveal some important challenges for the higher education sector. According to OfS’ new survey findings, rates of reporting to higher education institutions remain relatively low: 13.2 per cent of those experiencing sexual harassment, and 12.7 per cent of those subjected to sexual violence.

    Of students who reported to their HEI, only around half rated their experience as “good”. And women, disabled and LGBTQ+ students were much less satisfied with reporting than the men, heterosexual and non-disabled students who reported incidents to their university.

    This survey doesn’t reveal why students were rating their reporting experiences as poor, but my study Higher Education After #MeToo sheds light on some of the reasons why reporting is not working out for many students (and staff).

    At the time of data collection in 2020-21, a key reason was that – according to staff handling complaints – policies in this area were not yet fit for purpose. It’s therefore not surprising that reporting was seen as ineffective and sometimes harmful for many interviewees who had reported. Four years on, hopefully HEIs have made progress in devising and implementing policies in this area, so other reasons may be relevant.

    A further issue my study highlights is that reporting processes for sexual misconduct in HE focus on sanctions against the reported party rather than prioritising the safety or other needs of those who report. Many HEIs do now have processes for putting in place safety (“precautionary” or “interim”) measures to keep students safe after reporting.

    Risk assessment practices are developing. But these practices appear to be patchy and students (and staff) who report sexual harassment or violence are still not necessarily getting the support they need to ensure their safety from further harm. Not only this, but at the end of a process they are not usually told the actions that their university has taken as a result of the report.

    More generally, there’s a mismatch between why people report and what is on offer from universities. Forthcoming analysis of the Power in the Academy data on staff-student sexual misconduct reveals that by the time a student gets to the point of reporting or disclosing sexual misconduct by faculty or staff to their HEI, the impacts are already being felt more severely than among those who do not report.

    In laywoman’s terms, if people report staff sexual misconduct, it’s likely to be having a really bad impact on their lives and/or studies. Reasons for reporting are usually to protect oneself and others and to be able to continue in work/study. So it’s crucial that when HEIs receive reports, they are able to take immediate steps to support students’ safety. If HEIs are listening to students – including the voices of those who have reported or disclosed to their institution – then this is what they’ll be hearing.

    Staff-student relationships

    The survey also provides new data on staff-student intimate relationships. The survey details that:

    By intimate relationship we mean any relationship that includes: physical intimacy, including one-off or repeated sexual activity; romantic or emotional intimacy; and/or financial dependency. This includes both in person and online, or via digital devices.

    From this sample, 1.5 per cent of respondents stated that they had been in such a relationship with a staff member. Of those who had been involved in a relationship, a staggering 68.8 per cent of respondents said that the university or college staff member(s) had been involved with their education or assessment.

    Even as someone who researches within this area, I’m surprised by how high both these figures are. While not all students who enter into such relationships or connections will be harmed, for some, deep harms can be caused. While a much higher proportion of students who reported “intimate relationships” with staff members were 21 or over, age of the student is no barrier to such harms.

    It’s worth revisiting some of the findings from 2024 to give some context to these points. In the 3,000-strong representative survey from the OfS, a third of those in relationships with staff said they felt pressure to begin, continue or take the relationship further than they wanted because they were worried that refusing would negatively impact them, their studies or career in some way.

    Even consensual relationships led to problems when the relationship broke up. My research has described the ways in which students can be targeted for “grooming” and “boundary-blurring” behaviours from staff. These questions on coercion from the 2024 survey were omitted from the shorter 2025 version – but assuming such patterns of coercion are present in the current dataset, these findings are extremely concerning.

    They give strong support to OfS’ approach towards staff-student relationships in the new condition of registration. OfS has required HEIs to take “one or more steps which could make a significant and credible difference in protecting students from any actual or potential conflict of interest and/or abuse of power.”

    Such a step could include a ban on intimate personal relationships between relevant staff and students, but HEIs may instead choose to propose other ways to protect students from abuses of power by staff. While most HEIs appear to be implementing partial bans on such relationships, some have chosen not to.

    Nevertheless, all HEIs should take steps to clarify appropriate professional boundaries between staff and students – which, as my research shows, students themselves overwhelmingly want.

    Gaps in the data

    The publication of this data is very welcome in contributing towards better understanding patterns of victimisation among students in HE. It’s crucial to position this dataset within the context of an emerging body of research in this area – both the OfS’ previous publications, but also academic studies as outlined above – in order to build up a more nuanced understanding of students’ experiences.

    Some of the gaps in the data can be filled from other studies, but others cannot. For example, while the new OfS regulatory condition E6 covers harassment on the basis of all protected characteristics, these survey findings focus only on sexual harassment and violence.

    National data on the prevalence of racial harassment or on harassment on the basis of gender reassignment would be particularly valuable in the current climate. This decision seems to be a political choice – sexual harassment and violence is a focus that both right- and left-wing voices can agree should be addressed as a matter of urgency, while it is more politically challenging (and therefore, important) to talk about racial harassment.

    The data also omits stalking and domestic abuse, which young people – including students – are more likely than other age groups to be subjected to, according to the Crime Survey for England and Wales. My own research found that 26 per cent of respondents in a study of gender-based violence at a university in England in 2020 had been subjected to psychological or physical violence from a partner.

    It does appear that despite the narrow focus on sexual harassment and violence from the OfS, many HEIs are taking a broader approach in their work, addressing domestic abuse and stalking, as well as technology-facilitated sexual abuse.

    Another gap in the data analysis report from the OfS is around international students. Last year’s pilot study of this survey included some important findings on their experiences. International students were less likely to have experienced sexual misconduct in general than UK-domiciled students, but more likely to have been involved in an intimate relationship with a member of staff at their university (2 per cent of international students in contrast with 1 per cent of UK students).

    They were also slightly more likely to state that a staff member had attempted to pressure them into a relationship. Their experiences of accessing support from their university were also poorer. These findings are important in relation to any new policies HEIs may be introducing on staff-student relationships: as international students appear to be more likely to be targeted, communications around such policies need to be tailored to this group.

    We also know that the same groups who are more likely to be subjected to sexual violence/harassment are also more likely to experience more harassment/violence, i.e. a higher number of incidents. The new data from OfS do not report on how many incidents were experienced. Sexual harassment can be harmful as a one-off experience, but if someone is experiencing repeated harassment or unwanted sexual contact from one or more others in their university environment (and both staff and student perpetrators are likely to carry out repeated behaviours), then this can have a very heavy impact on those targeted.

    The global context

    Too often, policy and debate in England on gender-based violence in higher education fails to learn from the global context. Government-led initiatives in Ireland and Australia show good practice that England could learn from.

    Ireland ran a national researcher-led survey of staff as well as students in 2021, due to be repeated in 2026, producing detailed data that is being used to inform national and cross-institutional interventions. Australia has carried out two national surveys – in 2017 and 2021 – and informed by the results has just passed legislation for a mandatory National Higher Education Code to Prevent and Respond to Gender-based Violence.

    The data published by OfS is much more limited than these studies from other contexts in its focus on third-year undergraduate students only. It will be imperative to make sure that HEIs, OfS, government or other actors do not rely solely on this data – and future iterations of the survey – as a tool to direct policy, interventions or practice.

    Nevertheless, in the absence of more comprehensive studies, it adds another piece to the puzzle in understanding sexual harassment and violence in English HE.

    Source link

  • Colleges add sports to bring men, but it doesn’t always work

    Colleges add sports to bring men, but it doesn’t always work

    SALEM, Va. — On a hot and humid August morning in this southwestern Virginia town, football training camp is in full swing at Roanoke College. Players cheer as a receiver makes a leaping one-handed catch, and linemen sweat through blocking drills. Practice hums along like a well-oiled machine — yet this is the first day this team has practiced, ever.

    In fact, it’s the first day of practice for a Roanoke College varsity football team since 1942, when the college dropped football in the midst of World War II. 

    Roanoke is one of about a dozen schools that have added football programs in the last two years, with several more set to do so in 2026. They hope that having a team will increase enrollment, especially of men, whose ranks in college have been falling. Yet research consistently finds that while enrollment may spike initially, adding football does not produce long-term gains; any effect typically fades within a few years.

    Roanoke’s president, Frank Shushok Jr., nonetheless believes that bringing back football – and the various spirit-raising activities that go with it — will attract more students, especially men. The small liberal arts college lost nearly 300 students between 2019 and 2022, and things were likely to get worse; the country’s population of 18-year-olds is about to decline and colleges everywhere are competing for students from a smaller pool.  

    “Do I think adding sports strategically is helping the college maintain its enrollment base? It absolutely has for us,” said Shushok.  “And it has in a time when men in particular aren’t going to college.”   

    Women outnumber men by about 60 percent to 40 percent at four-year colleges nationwide. Roanoke is a part of this trend. In 2019, the college had 1,125 women students and 817 men. 

    This fall, Roanoke will have 1,738 students altogether, about half men and half women. But the incoming freshman class is more than 55 percent male. 

    Sophomore linebacker Ethan Mapstone (26) jogs to the sideline at the end of a drill. Mapstone said he hadn’t planned to play college football until Roanoke head coach Bryan Stinespring recruited him. Credit: Miles MacClure for The Hechinger Report

    “The goal was that football would, in a couple of years, bring in at least an additional hundred students to the college,” said Curtis Campbell, Roanoke’s athletic director, as he observed the first day of practice. “We’ve got 97 kids out there on the field. So we’re already at the goal.”

    That number was 91 players as the season began, on Sept. 6 — and the Maroons won their first game, 23-7, over Virginia University of Lynchburg, on what Shushok called “a brilliant day full of community spirit and pride.”

    “Our students were out in force, side by side with community members spanning the generations,” he said via email. “In a time when we all need more to celebrate and opportunities to gather, it is easy to say our first football game since 1942 was both historic and invigorating.”

    Related: Interested in more news about colleges and universities? Subscribe to our free biweekly higher education newsletter.

    In the NCAA’s Division III, where Roanoke teams compete, athletic scholarships are not permitted. Athletes pay tuition or receive financial aid in the same way as other students, so adding football players will add revenue. For a small college, this can be significant. 

    Shushok said it’s not just about enrollment, though: He wants a livelier campus with more school spirit. Along with football, he started a marching band and a competitive cheerleading team. 

    “It plays to something that’s really important to 18- to 22-year-olds right now, which is a sense of belonging and spirit and excitement,” said Shushok, who came to Roanoke after being vice president of student affairs at Virginia Tech. Its Division I football team plays in a 65,000-seat stadium where fans jump up and down in unison to Metallica’s “Enter Sandman” as the players take the field. 

    The Maroons play in the local high school stadium — it seats 7,157 — and pay the city of Salem $2,850 per game in rent. The college raised $1.3 million from alumni and corporate sponsors to get the team up and running. 

    Roanoke College players gather on the sidelines during practice. Credit: Miles MacClure for The Hechinger Report

    Despite the research showing limited enrollment gains from adding football, colleges keep doing it. About a dozen have added or relaunched football programs in the last two years, including New England College in New Hampshire and the University of Texas Rio Grande Valley. Several more plan to add football in 2026, including Chicago State University and Azusa Pacific University in California. 

    Related: Universities and colleges search for ways to reverse the decline in the ranks of male students

    Calvin University in Michigan recently added football even though the student body was already half men, half women. The school wanted to broaden its overall appeal, Calvin Provost Noah Toly said, citing “school spirit, tradition, leadership development,” as well as the increased enrollment and “strengthened pipelines with feeder schools.”

    A 2024 University of Georgia study examined the effects of adding football on a school’s enrollment.

    “What you see is basically a one-year spike in male enrollment around guys who come to that school to help be part of starting up a team, but then that effect fades out over the next couple of years,” said Welch Suggs, an associate professor there and the lead author of that study. It found early modest enrollment spikes at colleges that added football compared to peers that didn’t and “statistically indistinguishable” differences after the first two years.

    “What happens is that you have a substitution effect going on,” Suggs said. “There’s a population of students that really want to go to a football school; the football culture and everything with it really attracts some students. And there are others who really do not care one way or the other. And so I think what happens is that you are simply recruiting from different pools.” 

    Today, college leaders value any pool that includes men. Most prefer the campus population to be balanced between the sexes, and, considering the low number of male high school graduates going to college at all (39 percent in the last Pew survey), many worry about too few men being prepared for the future workforce.

    “I don’t know that we have done a good job of articulating the value, and of programming to the particular needs that some of our young men are bringing in this moment,” Shushok said. “I think it’s pretty obvious, if you read the literature out there, that a lot of men are feeling undervalued and perhaps unseen in our culture.”

    Roanoke College President Frank Shushok Jr. in his office. Shushok said he brought football back to Roanoke to boost enrollment and create a livelier campus. Credit: Miles MacClure for The Hechinger Report

    Shushok said that Roanoke’s enrollment-building strategy was not centered on athletics. The college has also forged partnerships with local community colleges, guaranteeing students admission after they complete their associate degree, and added nine new majors in 2024, including cannabis studies. Shushok pointed out that while freshman enrollment is down slightly this year, the community college program has produced a big increase in transfer students, from 65 in fall of 2024 to 91 this fall.

    About 55 percent of Roanoke’s students come from Virginia, but 75 of the football team’s 91 players are Virginians. The head coach, Bryan Stinespring, a 61-year-old Virginia native, knows that recruiting territory, having worked on the coaching staffs at several Virginia universities in his career. 

    Related: College Uncovered podcast: The Missing Men

    When Stinespring took over as head coach in 2023, hoping to inspire existing students and potential applicants to join his new team, there was no locker room, no shoulder pads or tackling dummies, no uniforms. 

    “The first set of recruits that came on campus, we ran down to Dick’s, got a football, went to the bookstore, got a sweatshirt,” said Stinespring, referring to a local Dick’s Sporting Goods store. “These kids came on campus and they had to believe in the vision that we had.” 

    Students bought into that vision; 61 of them joined a club team last fall, which played four exhibition games in preparation for this year. The community bought in, too; 9,200 fans showed up to the first club game, about 2,000 of them perched on a grassy hill overlooking the end zone. 

    Linebackers Connor Cox (40) and Austin Fisher (20) look on from the sidelines. Credit: Miles MacClure for The Hechinger Report

    Before Ethan Mapstone, a sophomore, committed to Roanoke, he was on the verge of giving up football, having sustained several injuries in high school. Then Stinespring called. 

    “I could hear by the tone of his voice how serious he meant everything he was saying,” said Mapstone, a 6-foot-1-inch linebacker from Virginia Beach. “I was on a visit a week later, committed two weeks later.”  

    To him, the football leaders at Roanoke seemed to be “a bunch of people on a mission ready to make something happen, and I think that’s what drove me in.” 

    Related: Even as women outpace men in graduating from college their earnings remain stuck 

    KJ Bratton, a junior wide receiver and transfer student from the University of Virginia, said he was drawn to Roanoke not because of football but because of the focus on individual attention in small classes. “You definitely get that one-on-one attention with your teacher, that definitely helps you in the long run,” said Bratton.  

    Jaden Davis, a sophomore wide receiver who was an honor roll student in high school, said, “The staff, they care about all the students. They’ll pull you aside, they know you personally, they’ll send you emails, invite you to office hours, and they just work with you to do the best you can.” 

    Not everyone was on board with football returning to the college when the plan was first announced. Some faculty and administrators were concerned football would change the campus culture, said Campbell, the athletic director. 

    Sophomore wide receiver Jaden Davis poses for a photograph before the first practice of the season. Davis said the individual attention he could get from professors is what attracted him to Roanoke. Credit: Miles MacClure for The Hechinger Report

    “There were just stereotypes about football players,” he said. “You know, they’re not smart, they’re troublemakers. They’re gonna do this and they’re gonna do that, be disruptive.” 

    But the stereotypes turned out to be unwarranted, he said. When the club team started, he said, “I got so many compliments last year from faculty and staff and campus security about how respectful and polite and nice our students were, how they behaved in the classroom, sitting in the front row and just being role models.”

    Payton Rigney, a junior who helps out with the football team, concurred. “All the professors like them because they say ‘yes, sir’ and ‘no, ma’am,’” she said.

    Like most Division III athletes, the Roanoke players know that they have little chance of making football a professional career. Mapstone said there are other reasons to embrace the sport. 

    “It’s a great blessing to be able to do what we do,” he said. “There’s many people that I speak to who are older and, and they reminisce about the times that they had to play football, and it’s very limited time.

    “And even though there’s not a future for it, I love it. It’s a Thursday, my only problem in the world is that there’s dew on my shoes.”  

    Contact editor Lawrie Mifflin at (212) 678-4078 or [email protected].

    This story about college football was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger higher education newsletter.

    The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn’t mean it’s free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

    Join us today.

    Source link

  • SCOTUS Says NIH Doesn’t Have to Restore Canceled Grants

    SCOTUS Says NIH Doesn’t Have to Restore Canceled Grants

    iStock Editorial/Getty Images Plus

    The United States Supreme Court is allowing the National Institutes of Health to cut nearly $800 million in grants, though it left the door open for the researchers to seek relief elsewhere.

    In a 5-to-4 decision issued Thursday, the court paused a Massachusetts district court judge’s June decision to reinstate grants that were terminated because they didn’t align with the NIH’s new ideological priorities. Most of the canceled grants mentioned diversity, equity and inclusion goals; gender identity; COVID; and other topics the Trump administration has banned funding for. The district judge, in ruling against the administration, said he’d “never seen racial discrimination by the government like this.”

    Justice Amy Coney Barrett wrote that the district court “likely lacked jurisdiction to hear challenges to the grant terminations, which belong in the Court of Federal Claims,” with which Justices Clarence Thomas, Samuel Alito Jr., Neil Gorsuch and Brett Kavanaugh agreed.

    “The reason is straightforward,” Kavanaugh wrote. “The core of plaintiffs’ suit alleges that the government unlawfully terminated their grants. That is a breach of contract claim. And under the Tucker Act, such claims must be brought in the Court of Federal Claims, not federal district court.”

    The court’s emergency order came after more than a dozen Democratic attorneys general and groups representing university researchers challenged the terminations in federal court.

    “We are very disappointed by the Supreme Court’s ruling that our challenge to the sweeping termination of hundreds of critical biomedical research grants likely belongs in the Court of Federal Claims,” the American Civil Liberties Union, which is part of the legal team that is suing the NIH over the grant terminations, wrote in a statement Thursday evening. “This decision is a significant setback for public health. We are assessing our options but will work diligently to ensure that these unlawfully terminated grants continue to be restored.”

    Earlier this month, higher education associations and others urged the court to uphold the district court’s order, arguing that the terminations have “squandered” government resources and halted potentially lifesaving research.

    “The magnitude of NIH’s recent actions is unprecedented, and the agency’s abrupt shift from its longstanding commitments to scientific advancement has thrown the research community into disarray,” the groups wrote in an Aug. 1 brief. “This seismic shock to the NIH research landscape has had immediate and devastating effects, and granting a stay here will ensure that the reverberations will be felt for years to come.”

    Chief Justice John Roberts, who often sides with the conservative justices, joined liberal justices Ketanji Brown Jackson, Sonia Sotomayor and Elena Kagan in a dissent.

    “By today’s order, an evenly divided Court neuters judicial review of grant terminations by sending plaintiffs on a likely futile, multivenue quest for complete relief,” Jackson wrote. “Neither party to the case suggested this convoluted procedural outcome, and no prior court has held that the law requires it.”

    However, Barrett joined Roberts, Jackson, Sotomayor and Kagan in agreeing that the district court can review NIH’s reasoning for the terminations, and the justices kept in place a court order blocking the guidance that led to cancellations.

    “It is important to note that the Supreme Court declined to stay the District Court’s conclusion that the NIH’s directives were unreasonable and unlawful,” the ACLU said in a statement. “This means that NIH cannot terminate any research studies based on these unlawful directives.”

    Source link

  • NYC Teachers Believe Many Kids Are Gifted & Talented. Why Doesn’t the District? – The 74

    NYC Teachers Believe Many Kids Are Gifted & Talented. Why Doesn’t the District? – The 74


    Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter

    In 2020, around 3,500 incoming New York City kindergartners were deemed eligible for a public school gifted-and-talented program. In 2021, that number spiked up to over 10,000.

    What happened to nearly triple the number of identified “gifted” students in NYC in a single year?

    The difference was the screening method. In 2020, as in the dozen years beforehand, 4-year-olds were tested using the Otis-Lennon School Ability Test and the Naglieri Nonverbal Ability Test. Those who scored above the 97th percentile were eligible to apply to citywide “accelerated” programs. Those who scored above the 90th were eligible for districtwide “enriched” classes.

    But in 2021, the process was changed. Now, instead of a test, students in public school pre-K programs qualify based on evaluations by their teachers, and there is no differentiation between those eligible for “accelerated” or “enriched” programs.

    In 2022, the last year for which figures are available, 9,227 students were deemed qualified to enter the gifted-and-talented placement lottery. But for the last decade there have only been about 2,500 spots citywide. Over 6,700 “gifted” students weren’t offered a seat. 

    That’s a shame, because, based on their overwhelming responses to the city’s G&T recommendation questionnaires, NYC teachers believe the vast majority of their young students – a statistically impressive 85% – would thrive doing work beyond what is offered in a regular classroom.

    When evaluating students for a G&T recommendation, teachers are asked, among other things, whether the child:

    • Is curious about new experiences, information, activities, and/or people;
    • Asks questions and communicates about the environment, people, events and/or everyday experiences in and out of the classroom;
    • Explores books alone and/or with other children;
    • Plays with objects and manipulatives via hands-on exploration in and outside of the classroom setting;
    • Engages in pretend/imaginary play;
    • Engages in artistic expression, e.g. music, dance, drawing, painting, cutting, and/or creating
    • Enjoys playing alone (enjoys own company) as well as with other children.

    This video illustrates how such “gifted” characteristics can be applied to … anybody.

    The Pygmalion Effect has demonstrated that when teachers are told their students are “gifted,” they treat them differently — and by the end of the year, those children are performing at a “gifted” level.

    Extrapolating that 85% of incoming kindergartners to the 70,000 or so kids enrolled at every grade level in NYC, that would mean there are 59,500 “gifted” students in each academic year, for a whopping total of 773,500 “gifted” K-12 students in the New York City public school system.

    And extending those calculations to the whole of the United States, then 85% of 74 million — i.e. 62,900,000 — 5- through 18-year-olds are capable of doing work above grade level. With that in mind, academic expectations could be raised across the board, and teachers would implement the new, higher standards filled with confidence that the majority of their students would rise to the occasion. NYC teachers have already said as much on their evaluations.
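The extrapolation above can be reproduced with a few lines of arithmetic. This sketch simply restates the article's own figures (roughly 70,000 NYC students per grade, 13 K-12 grade levels, the 85% teacher-evaluation rate, and about 74 million US 5- to 18-year-olds); the variable names are illustrative, not from any official dataset:

```python
# Back-of-the-envelope check of the article's gifted-student extrapolation.
# All input figures come from the article itself.
per_grade = 70_000        # approx. NYC public school students per grade level
gifted_share = 0.85       # share of students teachers rate as G&T-qualified
grades_k12 = 13           # kindergarten through 12th grade

gifted_per_grade = round(per_grade * gifted_share)    # 59,500 per grade
gifted_k12_nyc = gifted_per_grade * grades_k12        # 773,500 citywide
gifted_us = round(74_000_000 * gifted_share)          # 62,900,000 nationwide

print(gifted_per_grade, gifted_k12_nyc, gifted_us)
# → 59500 773500 62900000
```

The numbers check out against the article's totals, though the extrapolation itself rests on the strong assumption that the 85% kindergarten evaluation rate holds at every grade level.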

    What would happen if NYC were to provide a G&T seat to every student whom its own teachers deemed qualified? If it were subsequently confirmed that over 773,500 students in a 930,000-plus student school system are capable of doing “advanced” work, can parents, activists and everyone invested in making education the best it can possibly be for all expect to see such higher-level curriculum extended to all students — in NYC and, eventually, across America?

    As for the minority who weren’t recommended for “advanced” instruction, the combination of Pygmalion Effect and the benefits of mixed-ability classrooms should raise their proficiency, as well.

    Isn’t it worth a try?



    Source link

  • For some students, home doesn’t feel like home

    For some students, home doesn’t feel like home

    In Britain, we can be oddly squeamish when talking about class, whether known or implied through a person’s accent, appearance, or behaviour.

    But not having an honest conversation with ourselves and our institutions about it is actively harming our students, especially the ones who are from the area where our institutions sit.

    I was one of a team of authors that published a report at the back end of 2024 exploring the role of social class and UK home region at Durham University. Our research, which was supported by the university, found that students from North East England had a lower sense of belonging than their peers.

    This is in comparison to students from other northern regions, the rest of the UK, and international students. And it is true even if they are from more advantaged backgrounds.

    I’ll say that again – students from North East England feel excluded from Durham University, which is in… North East England. This highlights that a problem at Durham University is not only class, but preconceived stereotypes based on how a person speaks, acts, or their family background.

    This article explains how we built our evidence base, and how the university responded, including by integrating our recommendations into the new Access and Participation Plan, and resourcing new staff roles and student-led activity.

    From anecdote to evidence

    The student-led report came out of the First Generation Scholars group in the Anthropology department in 2022.

    Having heard repeatedly the issues that first generation students were facing, and feeling it ourselves, we decided to move beyond anecdotal stories which were known in the university, and produce something concrete and legible which couldn’t be denied.

    We devised a survey and sent it to every student, achieving a 10 per cent response rate. Follow-up focus groups were conducted to add context to the quantitative findings and ensure the voices of those who had been let down were heard.

    The findings were grouped into seven areas – overall sense of belonging at Durham, peer relationships, experiences in teaching and learning, college events and activities, college staff relationships, experiences in clubs and societies, and financial considerations.

    Across all these areas, social class had the strongest and most consistent effect. Students from less privileged backgrounds were more likely to feel ashamed of the way they speak, dress, and express themselves.

    The students felt targeted based on their background or personal characteristics – and said they were:

    …being told countless times by a flatmate that I seem the ‘most chavvy’ and continuously refer to Northerners as degenerates.

    …at a formal dinner, students laughed at my North-east accent, they asked if I lived in a pit village.

    The irony is that due to rising housing costs, many students really are being forced to live in pit villages.

    These instances weren’t only present in peer interactions – but also took place in the teaching and learning spaces. One student said that during a lecture, the lecturer mentioned that they couldn’t understand what the IT staff member was saying due to his North East accent – which was the same as the students’.

    Another noted that their peers were “sniggering when I made a comment in a tutorial.” Comments like these have led to students self-silencing during classes and, in some cases, changing their accents entirely to avoid stigma.

    Anecdotally, I’ve heard students say that their families laugh when they hear their new accent. If we are implicitly telling students that they have to change who they are in their own region, their own city, amongst their own family in order to fit in, we are telling them that they are not safe to be authentically themselves. That message lingers beyond university.

    The report notes that other groups of students also experienced exclusion. These included women, LGBTQ+ students, and students with a disability – although only disability came close to the magnitude of effects explained by social class and region.

    It should be noted that these are protected characteristics, while class and region are not. But there was also an interaction between these characteristics, class, and region. Women from less advantaged backgrounds from North East England had a worse time than their southern peers – which they reported as being due to their perceived intelligence and sexual availability. One North East female student stated,

    I was a bet for someone to sleep with at a college party because ‘Northern girls are easy.’

    Tackling the sense of exclusion

    The report also highlights instances of real connection for students. These were often found in the simplest gestures: having a cup of tea with their college principal, porters saying hello in the corridor, or a lecturer confirming that they deserved to be at Durham despite their working-class background.

    We were worried that the university might be quick to dismiss, bury, or simply ignore the report. However, they’ve stepped up. The report has been used in the new Access and Participation Plan (APP), underpinning an intervention strategy to increase students’ sense of belonging through student-led, funded activities.

    That builds on the creation of new, instrumental staffing positions. In discussions following the launch event for the report, there was a real buzz and momentum from colleagues who spotlighted the work they were doing in this area – but with an awareness that more needs to be done.

    A key issue is connecting this discrete but interconnected work. Many activities and initiatives are happening in silos – within departments, colleges, faculties, or the central university – with few outside those realms knowing about them.

    At a time when every university is tightening its belt, coordinating activities to share resources and successes seems like an easy win.

    It would be easy to dismiss the problem as unique to Durham – the university and its students have often been under fire for being elitist, tone deaf, or exclusionary. But it’s likely that students at other institutions are facing similar barriers, comments, and slights.

    I’ve spoken to enough colleagues in SUs to know that it isn’t just a Durham problem – not even just a Russell Group problem. There will be those who are afraid of what they might find if they turn over that particular stone and take a proper look at how social class impacts students’ belonging.

    But I’d argue it’s a positive thing to do. Bringing it into the light and confronting and acknowledging the problem means that we can move forward to make our students’ lives better.

    Read the full report here, including recommendations, and the university’s comments.


  • The Trump administration doesn’t need to go to Brazil to find government censorship. It can look in a mirror.


    Alexandre de Moraes, the polarizing Brazilian Supreme Court Justice, is no friend to free speech. Though he is a popular figure within Brazil among those who see him as a protector of democracy, he has aggressively wielded his authority to censor, especially on the internet, with little transparency.

    From his position in Brazil’s Supreme Court, de Moraes has doggedly pursued wide swaths of speech and speakers off and on the internet, as well as the tech companies hosting them. In a highly public incident last year, Brazil blocked X — and even threatened VPN users accessing it with massive fines — over the company’s noncompliance with de Moraes’ orders.

    The actions of de Moraes, and Brazil’s Supreme Court more broadly, have repeatedly drawn the ire of the Trump administration. But chief among President Trump’s grievances is the prosecution of his political ally, former Brazilian President Jair Bolsonaro, who is accused of attempting a coup to overturn his 2022 election loss to President Luiz Inácio Lula da Silva.

    How has the Trump administration responded?

    Last month, the administration enacted a series of punishments against Brazil’s leadership and de Moraes specifically. In a July 30 executive order, Trump announced tariffs and other sanctions due to Brazil’s prosecution of Bolsonaro and other actions that “conflict with and threaten the policy of the United States to promote free speech and free and fair elections at home and abroad.” The order follows Trump’s weeks-earlier threat of tariffs over the “witch hunt” against Bolsonaro.

    Secretary of State Marco Rubio also revoked the visas of de Moraes “and his allies on the court” and their families. And under the Global Magnitsky Human Rights Accountability Act, usually reserved for the most serious human rights abuses, the Department of the Treasury announced sanctions targeting any of de Moraes’s U.S. assets. 

    Unprincipled, partisan free speech advocacy is no free speech advocacy at all

    There is plenty to debate about how to best protect free speech on the global internet, and around the world more generally, and what actions the United States can take in its defense. But, even though Brazil’s adversarial relationship with free expression is deeply alarming, it’s impossible to ignore the incongruity of the Trump administration putting itself in the position of diagnosing and treating government censorship.

    Physician, heal thyself. 

    The opening months of Trump’s second term in office have offered a nonstop, head-spinning bonanza of violations, threatened and enacted, against Americans’ First Amendment rights. 

    I write regularly in the Free Speech Dispatch about the myriad threats to freedom of expression, from Russia to the UK to India to Hong Kong. It’s painfully, brutally clear we need leadership to push back against the wave of global repression that threatens all of our rights. But that leadership must practice what it preaches and avoid simply using concerns about free speech as a pretext to fight partisan political battles. On both counts, this administration has failed. 

    You will make no converts to the free speech cause by proving right the critics who suspect its advocates are guided by partisan aims, not principled ones. Instead, you will breed cynicism and harm the very cause you claim to support.

    This same posturing marred Vice President JD Vance’s objections to European censorship, an ugly trend that’s in dire need of principled critiques. Instead, Vance claimed that under Trump, the “new sheriff in town,” the administration “may disagree with your views, but we will fight to defend your right to offer them in the public square.” 

    Well, unless you’re CBS/Paramount, law firms, The Wall Street Journal, the Washington Commanders, CNN, The New York Times, protesters, Media Matters, James Comey’s seashells, “propaganda,” academic and medical journals, pollster Ann Selzer and The Des Moines Register, The Associated Press, international students, flag burners, Harvard, Columbia, or the many other universities and academics under threat.

    The ugly reality is that the U.S. is rapidly ceding its moral authority to criticize foreign governments’ censorship, like that emanating from Brazil’s Supreme Court, when its own president and agencies are gleefully flouting the First Amendment and free speech principles day in and day out.

    Perhaps most baffling was the administration’s objection to the Brazilian government’s targeting of Paulo Figueiredo, a Brazilian journalist and U.S. resident, “for speech he made on U.S. soil.” Readers may be able to think of some other government officials targeting immigrants legally residing in the U.S. for protected speech made on U.S. soil — and they’re doing so from our White House and State Department, not thousands of miles away. 

    Global censorship is a real challenge, and it’s only getting worse. But until the U.S. removes the censorial beam from its own eye, we may find that other nations are unmoved by our criticisms and cures. Or, they may perhaps even be interested in doling them out to us. 
