
  • Love, loyalty, and liberty: ASU alumni unite to defend free speech


    Late last year, a group of Arizona State University alumni gathered on the rooftop of the Canopy Hotel — high enough to see the headlights snake through the city of Tempe, but low enough to feel the pounding bass line of Mill Avenue’s nightlife. 

    Though the setting was casual, the conversation was anything but. A simple question had brought them together: What obligations do alumni have to their alma mater? 

For most graduates, the answer is simple. Come back for Homecoming, buy the sweatshirt, scribble a check when the fundraising office calls. Thanks for your generosity!

But for the assembled Sun Devils — spanning the classes of ’85 to ’24 — their connection to ASU is more than rah-rah nostalgia. They feel a duty to protect what made the university worth attending in the first place.

    And so, that evening, they formed ASU Alumni for Free Speech. Their mission? “To promote and strengthen free expression, academic freedom, and viewpoint diversity, both on campus and throughout the global ASU community.” 

The group’s inaugural chairman is Joe Pitts, ASU class of ’23 — whose beard, broad shoulders, and sage intellect belie his youth. For him, alumni should be more than mere spectators or “walking checkbooks,” as he puts it, “endlessly giving and expecting little in return.” Instead, they should be invested stakeholders.

    Pitts says it’s now fashionable to view a college diploma as little more than a fancy receipt. People think, I paid my tuition, endured the required courses, and behold: I’m credentialed! A neat little market transaction — no lingering ties, no ongoing investment.

    But this mindset, Pitts argues, is both morally bankrupt and pragmatically wrong-headed. As a practical matter, he says, “the value of your degree is tied to the reputation of your school — if your alma mater improves over time, your degree becomes more prestigious. If it declines, so does the respect it commands.” 

And in the cutthroat world of status-signaling and social capital, that matters — a lot.


    As a moral matter, “spending four years (or even more) at a university inevitably shapes you in some way,” Pitts says. “And in most cases, it’s for the better — even if we don’t exactly realize it at the time.” Think about it: how many unexpected friendships or serendipitous moments of clarity, insight, rebellion, and revelation do we owe our alma mater? 

    To discard that connection the moment you graduate — to treat it like an expired gym membership — isn’t just ungrateful. It’s a rejection of one’s own formation.

    But beyond these considerations, Pitts insists that what united them on the Canopy Hotel rooftop last year was — love, actually. Not the saccharine, Hallmark kind or the fleeting thrill of a Tinder rendezvous, but the sort of love that drives men to build cathedrals and forge legacies.

    Echoing St. Thomas Aquinas, Pitts says, “We love ASU, and to love is to will the good of the other — not to sit idly by.” And what is the good? It’s a campus where students unapologetically speak their minds; where professors dare to probe the perilous and the provocative; where administrators resist the temptation to do their best Big Brother impression! 

    Fortunately for ASU Alumni for Free Speech, their alma mater is already a national leader when it comes to free speech on campus — though, as Pitts notes, that’s “a damn low bar.”

ASU ranks 14th out of 251 schools in FIRE’s 2025 College Free Speech Rankings, and has maintained a “green light” rating from FIRE since 2011, meaning its official policies don’t seriously imperil free expression. In 2018, ASU adopted the Chicago principles, committing to the “free, robust, and uninhibited sharing of ideas” on campus.

    The university didn’t stop there. This spring, ASU will launch a Center for Free Speech alongside an annual Free Speech Forum. 

    But despite these credentials, the specter of censorship still lingers at ASU, and the numbers tell the tale:

    • 68% of ASU students believe shouting down a speaker is at least rarely acceptable.
    • 35% believe violence can sometimes be justified to silence speech.
    • 37% self-censor at least once or twice a month. 
    • Over one-third of surveyed ASU faculty admit to self-censorship in their writing.

    And so — like the cavalry cresting the hill — ASU Alumni for Free Speech arrives just in time.

    “When controversy inevitably arises on a campus of 100,000 students,” Pitts argues, “the defense of free expression shouldn’t be left solely to outside organizations or political bodies. Instead, those speaking up should be people who genuinely care about ASU and have its best interests at heart.”

    ASU Alumni for Free Speech aims to be that voice. “In the long run, we want to have a seat at the table,” Pitts explains. “We want to build relationships not just with the ASU administration but also with the Arizona Board of Regents.”

    Along with FIRE, ASU alumni have already petitioned the Arizona Board of Regents, urging them to adopt a policy of institutional neutrality, which would prevent the university from taking positions on current political issues and weighing in on the cause-du-jour.

    SIGN THE PETITION TO ADOPT INSTITUTIONAL NEUTRALITY!

    Pitts and the rest of ASU Alumni for Free Speech are tired of playing cheerleader. They’re here to ensure that ASU flourishes not just today, but for every Sun Devil yet to step onto Palm Walk for the first time.

    “Sometimes that may look like applause,” Pitts says. “Other times, that may look like criticism.” 

    In either case, he insists, it’s an act of love.


    If you’re ready to join ASU Alumni for Free Speech, or if you’re interested in forming a free speech alumni alliance at your alma mater, contact Bobby Ramkissoon at [email protected]. We’ll connect you with like-minded alumni and offer guidance on how to effectively protect free speech and academic freedom for all. 


  • Wave of state-level AI bills raises First Amendment problems


AI is enhancing our ability to communicate, much like the printing press and the internet did in the past. And lawmakers nationwide are rushing to regulate its use, introducing hundreds of bills in states across the country. Unfortunately, many AI bills we’ve reviewed would violate the First Amendment — just as FIRE warned against last month. It’s worth repeating that First Amendment doctrine does not reset itself after each technological advance. It protects speech created or modified with artificial intelligence software just as it does speech created without it.

    On the flip side, AI’s involvement doesn’t change the illegality of acts already forbidden by existing law. There are some narrow, well-defined categories of speech not protected by the First Amendment — such as fraud, defamation, and speech integral to criminal conduct — that states can and do already restrict. In that sense, the use of AI is already regulated, and policymakers should first look to enforcement of those existing laws to address their concerns with AI. Further restrictions on speech are both unnecessary and likely to face serious First Amendment problems, which I detail below.

    Constitutional background: Watermarking and other compelled disclosure of AI use

    We’re seeing a lot of AI legislation that would require a speaker to disclose their use of AI to generate or modify text, images, audio, or video. Generally, this includes requiring watermarks on images created with AI, mandating disclaimers in audio and video generated with AI, and forcing developers to add metadata to images created with their software. 
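To make the metadata idea concrete: for a PNG image, the kind of disclosure these bills contemplate could be as small as a text chunk spliced into the file. The sketch below is purely illustrative — the `ai-disclosure` key is my invention, not language from any bill — and uses only Python’s standard library to show the mechanics of adding a `tEXt` chunk.

```python
import struct
import zlib


def add_text_chunk(png_bytes: bytes, key: str, value: str) -> bytes:
    """Splice a tEXt metadata chunk into a PNG, right after the IHDR chunk.

    Each PNG chunk is: a 4-byte big-endian length, a 4-byte type,
    the payload, then a CRC-32 computed over the type and payload.
    """
    signature = b"\x89PNG\r\n\x1a\n"
    if png_bytes[:8] != signature:
        raise ValueError("not a PNG file")
    # IHDR is always the first chunk: 4 (length) + 4 (type) + 13 (payload) + 4 (CRC)
    ihdr_end = 8 + 4 + 4 + 13 + 4
    payload = key.encode("latin-1") + b"\x00" + value.encode("latin-1")
    chunk = (
        struct.pack(">I", len(payload))
        + b"tEXt"
        + payload
        + struct.pack(">I", zlib.crc32(b"tEXt" + payload) & 0xFFFFFFFF)
    )
    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]
```

A viewer that ignores unknown text chunks renders the image unchanged, which is why such disclosures are easy to mandate technically — and, as discussed below, just as easy for bad actors to strip or omit.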

Many of these bills violate the First Amendment by compelling speech. Government-compelled speech — whether an opinion, a fact, or even just metadata — is generally anathema to the First Amendment. That’s for good reason: Compelled speech undermines everyone’s right to conscience and fundamental autonomy to control their own expression.

To illustrate: Last year, in X Corp. v. Bonta, the U.S. Court of Appeals for the Ninth Circuit reviewed a California law that required social media companies to post and report information about their content moderation practices. FIRE filed an amicus curiae — “friend of the court” — brief in that case, arguing the posting and reporting requirements unconstitutionally compel social media companies to speak about topics on which they’d like to remain silent. The Ninth Circuit agreed, holding the law was likely unconstitutional. While acknowledging the state had an interest in providing transparency, the court reaffirmed that “even ‘undeniably admirable goals’ ‘must yield’ when they ‘collide with the . . . Constitution.’”

    There are (limited) exceptions to the principle that the state cannot compel speech. In some narrow circumstances, the government may compel the disclosure of information. For example, for speech that proposes a commercial transaction, the government may require disclosure of uncontroversial, purely factual information to prevent consumer deception. (For example, under this principle, the D.C. Circuit allowed federal regulators to require disclosure of country-of-origin information about meat products.) 

    But none of those recognized exceptions would permit the government to mandate blanket disclosure of AI-generated or modified speech. States seeking to require such disclosures will face heightened scrutiny beyond what is required for commercial speech.

    AI disclosure and watermarking bills

    This year, we’re also seeing lawmakers introduce many bills that require certain disclosures whenever speakers use AI to create or modify content, regardless of the nature of the content. These bills include Washington’s HB 1170, Massachusetts’s HD 1861, New York’s SB 934, and Texas’s SB 668.

    At a minimum, the First Amendment requires these kinds of regulations to be tailored to address a particular state interest. But these bills are not aimed at any specific problem at all, much less being tailored to it; instead, they require nearly all AI-generated media to bear a digital disclaimer. 

    For example, FIRE recently testified against Washington’s HB 1170, which requires covered providers of AI to include in any AI-generated images, videos, or audio a latent disclosure detectable by an AI detection tool that the bill also requires developers to offer.

    Of course, developers and users can choose to disclose their use of AI voluntarily. But bills like HB 1170 force disclosure in constitutionally suspect ways because they aren’t aimed at furthering any particular governmental interest and they burden a wide range of speech.


    In fact, if the government’s goal is addressing fraud or other unlawful deception, there are ways these disclosures could make things worse. First, the disclosure requirement will taint the speech of non-malicious AI users by fostering the false impression that their speech is deceptive, even if it isn’t. Second, bad actors can and will find ways around the disclosure mandate — including using AI tools in other states or countries, or just creating photorealistic content through other means. False content produced by bad actors will then have a much greater imprimatur of legitimacy than it would in a world without the disclosures required by this bill, because people will assume that content lacking the mandated disclosure was not created with AI.

    Constitutional background: Categorical ‘deepfake’ regulations

    A handful of bills introduced this year seek to categorically ban “deepfakes.” In other words, these bills would make it unlawful to create or share AI-generated content depicting someone saying or doing something that the person did not in reality say or do.

    Categorical exceptions to the First Amendment exist, but these exceptions are few, narrow, and carefully defined. Take, for example, false or misleading speech. There is no general First Amendment exception for misinformation or disinformation or other false speech. Such an exception would be easily abused to suppress dissent and criticism.

    There are, however, narrow exceptions for deceptive speech that constitutes fraud, defamation, or appropriation. In the case of fraud, the government can impose liability on speakers who knowingly make factual misrepresentations to obtain money or some other material benefit. For defamation, the government can impose liability for false, derogatory speech made with the requisite intent to harm another’s reputation. For appropriation, the government can impose liability for using another person’s name or likeness without permission, for commercial purposes.


    Like an email message or social media post, AI-generated content can fall under one of these categories of unprotected speech, but the Supreme Court has never recognized a categorical exception for creating photorealistic images or video of another person. Context always matters.

    Although some people will use AI tools to produce unlawful or unprotected speech, the Court has never permitted the government to institute a broad technological ban that would stifle protected speech on the grounds that the technology has a potential for misuse. Instead, the government must tailor its regulation to the problem it’s trying to solve — and even then, the regulation will still fail judicial scrutiny if it burdens too much protected speech.

    AI-generated content has a wide array of potential applications, spanning from political commentary and parody to art, entertainment, education, and outreach. Users have deployed AI technology to create political commentary, like the viral deepfake of Mark Zuckerberg discussing his control over user data — and for parody, as seen in the Donald Trump pizza commercial and the TikTok account dedicated to satirizing Tom Cruise. In the realm of art and entertainment, the Dalí Museum used deepfake technology to bring the artist back to life, and the TV series “The Mandalorian” recreated a young Luke Skywalker. Deepfakes have even been used for education and outreach, with a deepfake of David Beckham raising awareness about malaria.

    These examples should not be taken to suggest that AI is always a positive force for shaping public discourse. It’s not. But not only will categorical bans on deepfakes restrict protected expression such as the examples above, they’ll face — and are highly unlikely to survive — the strictest judicial scrutiny under the First Amendment.

    Categorical deepfake prohibition bills

    Bills with categorical deepfake prohibitions include North Dakota’s HB 1320 and Kentucky’s HB 21.

    North Dakota’s HB 1320, a failed bill that FIRE opposed, is a clear example of what would have been an unconstitutional categorical ban on deepfakes. The bill would have made it a misdemeanor to “intentionally produce, possess, distribute, promote, advertise, sell, exhibit, broadcast, or transmit” a deepfake without the consent of the person depicted. It defined a deepfake as any digitally-altered or AI-created “video or audio recording, motion picture film, electronic image, or photograph” that deceptively depicts something that did not occur in reality and includes the digitally-altered or AI-created voice or image of a person.

The bill was overly broad and would have criminalized vast amounts of protected speech. It was so sweeping that it resembled making it illegal to paint a realistic image of a busy public park without obtaining every bystander’s consent. Why should it be illegal for that same painter to bring their realistic painting to life with AI technology?


    HB 1320 would have prohibited the creation and distribution of deepfakes regardless of whether they cause actual harm. But, as noted, there isn’t a categorical exception to the First Amendment for false speech, and deceptive speech that causes specific, targeted harm to individuals is already punishable under narrowly defined First Amendment exceptions. If, for example, someone creates and distributes to other people a deepfake showing someone doing something they didn’t in reality do, thus effectively serving as a false statement of fact, the depicted individual could sue for defamation if they suffered reputational harm. But this doesn’t require a new law.

    Even if HB 1320 were limited to defamatory speech, enacting new, technology-specific laws where existing, generally applicable laws already suffice risks sowing confusion that will ultimately chill protected speech. Such technology-specific laws are also easily rendered obsolete and ineffective by rapidly advancing technology.

    HB 1320’s overreach clashed with clear First Amendment protections. Fortunately, the bill failed to pass.

    Constitutional background: Election-related AI regulations

    Another large bucket of bills that we’re seeing would criminalize or create civil liability for the use of AI-generated content in election-related communications, without regard to whether the content is actually defamatory.

    Like categorical bans on AI, regulations of political speech have serious difficulty passing constitutional muster. Political speech receives strong First Amendment protection and the Supreme Court has recognized it as essential for our system of government: “Discussion of public issues and debate on the qualifications of candidates are integral to the operation of the system of government established by our Constitution.”


    As noted above, the First Amendment protects a great deal of false speech, so these regulations will be subject to strict scrutiny when challenged in court. This means the government must prove the law is necessary to serve a compelling state interest and is narrowly tailored to achieving that interest. Narrow tailoring in strict scrutiny requires that the state meet its interest using the least speech-restrictive means.

    This high bar protects the American people from poorly tailored regulations of political speech that chill vital forms of political discourse, including satire and parody. Vigorously protecting free expression ensures robust democratic debate, which can counter deceptive speech more effectively than any legislation.

Under strict scrutiny, prohibitions or restrictions on AI-modified or generated media relating to elections will face an uphill battle. No elections in the United States have been decided, or even materially impacted, by any AI-generated media, so the threat — and the government’s interest in addressing it — remains hypothetical. Even if that connection were established, many of the current bills are not narrowly tailored; they would burden all kinds of AI-generated political speech that poses no threat to elections. Meanwhile, laws against defamation already provide an alternative means for candidates to address deliberate lies that harm them through reputational damage.

    Already, a court has blocked one of these laws on First Amendment grounds. In a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, a federal court recently applied strict scrutiny and blocked a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content.

    Election-related AI bills

    Unfortunately, many states have jumped on the bandwagon to regulate AI-generated media relating to elections. In December, I wrote about two bills in Texas — HB 556 and HB 228 — that would criminalize AI-generated content related to elections. Other bills now include Alaska’s SB 2, Arkansas’s HB 1041, Illinois’s SB 150, Maryland’s HB 525, Massachusetts’s HD 3373, Mississippi’s SB 2642, Missouri’s HB 673, Montana’s SB 25, Nebraska’s LB 615, New York’s A 235, South Carolina’s H 3517, Vermont’s S 23, and Virginia’s SB 775.

    For example, S 23, a Vermont bill, bans a person from seeking to “publish, communicate, or otherwise distribute a synthetic media message that the person knows or should have known is a deceptive and fraudulent synthetic media of a candidate on the ballot.” According to the bill, synthetic media means content that creates “a realistic but false representation” of a candidate created or manipulated with “the use of digital technology, including artificial intelligence.”

    Under this bill (and many others like it), if someone merely reposted a viral AI-generated meme of a presidential candidate that portrayed that candidate “saying or doing something that did not occur,” the candidate could sue the reposter to block them from sharing it further, and the reposter could face a substantial fine should the state pursue the case further. This would greatly burden private citizens’ political speech, and would burden candidates’ speech by giving political opponents a weapon to wield against each other during campaign season. 

    Because no reliable technology exists to detect whether media has been produced by AI, candidates can easily weaponize these laws to challenge all campaign-related media that they simply do not like. To cast a serious chill over electoral discourse, a motivated candidate need only file a bevy of lawsuits or complaints that raise the cost of speaking out to an unaffordable level.

    Instead of voter outreach, political campaigning would turn into lawfare.

    Concluding Thoughts

    That’s a quick round-up of the AI-related legislation I’m seeing at the moment and how it impacts speech. We’ll keep you posted!




  • Howard University Makes History as First HBCU to Achieve Top Research Status


    In a groundbreaking achievement that marks a significant milestone for historically Black colleges and universities (HBCUs), Howard University has become the first HBCU to receive the prestigious Research One (R1) Carnegie Classification, placing it among the nation’s most elite research institutions.

The announcement from the American Council on Education (ACE) on Thursday recognizes Howard’s designation as an institution of “very high research spending and doctorate production,” a status that fewer than 150 universities nationwide have achieved. This accomplishment not only highlights Howard’s commitment to academic excellence but also represents a historic moment in the evolution of HBCUs in American higher education.

    According to ACE’s stringent criteria, universities must demonstrate exceptional research capabilities through substantial financial investment and doctoral program success. The minimum requirements include at least $50 million in annual research spending and the production of at least 70 research doctorates. Howard University has significantly surpassed these thresholds, showcasing its commitment to advancing knowledge and fostering innovation.

    Dr. Bruce A. Jones, Howard University’s senior vice president for research, provided specific details about the university’s achievements. “In Fiscal Year 2023, the most recent evaluation year in the classification cycle, the University’s productivity was significantly higher than the R1 base criteria, recording just under $85 million in research expenditures and awarding 96 doctorates in an array of fields,” Jones said. “This includes the highest number of doctorates awarded to Black students at any college or university in America.”

    The impact of such a designation has broader implications beyond Howard, said Dr. Robert T. Palmer, chair and professor in the Department of Educational Leadership and Policy Studies at the university.

“Howard reaching R1 status is phenomenal. This status will help Howard to attract more highly competitive research grants and talented faculty and students,” said Palmer, who added that R1 standing will also help position the university as a premier institution “and help to amplify the great work being done by faculty, staff, students, and alumni.”

Palmer noted that other HBCUs, including his alma mater, Morgan State University, are also currently seeking R1 status.

    “It would be great for HBCUs seeking R1 status to form a coalition and work collectively to support each other towards this goal,” he added.

    University President Dr. Ben Vinson III emphasized the broader implications of this achievement for both Howard and the communities it serves.

    “Howard University’s achievement of R1 status demonstrates our research capacity and reaffirms our deep commitment to tackling society’s most pressing questions through cutting-edge scholarship and technological innovation,” Vinson said. “As a leader in the evolution of next generation HBCUs, we are dedicated to ensuring that the benefits of discovery and progress reach all communities, including those historically overlooked and underrepresented.”

Vinson noted that the university’s research portfolio showcases its comprehensive approach to addressing critical societal challenges. For example, Howard hosts one of only fifteen U.S. Department of Defense University Affiliated Research Centers (UARCs) in the nation, focusing on tactical autonomy, human-machine teaming, and artificial intelligence through its Research Institute for Tactical Autonomy.

    In the medical field, Howard’s pioneering spirit is evident in its Center for Sickle Cell Disease, which was the first center in the nation devoted to studying and treating the disease. The university’s Cancer Center holds the distinction of being the only such facility at an HBCU providing comprehensive cancer treatment services while training future oncology professionals and researchers.

    The university’s commitment to preserving and studying Black history and culture is exemplified by the Moorland-Spingarn Research Center, which stands as the nation’s largest and most comprehensive repository of materials on the global Black experience. Additionally, Howard’s Center for African Studies holds the unique position of being the only comprehensive National Resource Center at an HBCU, as designated by the U.S. Department of Education.

    Higher education experts point out that Howard’s R1 designation represents not just an achievement for Howard University but a significant advancement for the entire HBCU community, potentially paving the way for other institutions to follow. As Howard continues to expand its research capabilities and influence, its impact on American higher education and scientific advancement promises to grow even stronger.

    “I think it’s incredibly exciting that Howard University — a powerhouse for decades in research — is being recognized as a Research 1 institution,” said Dr. Marybeth Gasman, who is the Samuel DeWitt Proctor Endowed Chair in Education and University Distinguished Professor at Rutgers University. An expert on HBCUs, Gasman added that the important research contributions across disciplines at Howard have significantly impacted students, communities (regional, national, and international), and leaders.

    “I’m excited to see what the institution does to build on this recognition as it progresses,” she said. “As a Research 1, it will be vital to ensure that all tenure-track faculty are supported through reduced course loads (4 courses a year max), research start-up funds across the disciplines, ample conference travel funding, and that Ph.D. students are supported with fully funded fellowships and assistantships.”


  • Education nominee McMahon says she supports calls to dismantle the agency but that funding wouldn’t be affected


At her confirmation hearing to lead the U.S. Department of Education, Linda McMahon said she stands firmly behind President Donald Trump’s calls to gut the agency.

    But she promised to work with Congress to do so — acknowledging some limits on the president’s authority as Trump seeks to remake the government through executive orders. And she tried to reassure teachers and parents that any changes would not jeopardize billions in federal funding that flows to high-poverty schools, special education services, and low-income college students.

    “We’d like to do this right,” McMahon said. “It is not the president’s goal to defund the programs, it is only to have it operate more efficiently.”

    Trump has called the Education Department a “con job” and said that McMahon, a former professional wrestling executive and billionaire Republican donor, should work to put herself out of a job. McMahon called this rhetoric “fervor” for change.

The Trump administration’s chaotic approach to spending cuts so far raises questions about whether McMahon’s statements — an effort to neutralize the most significant criticism of plans to get rid of the Education Department — will prove true over time.

    Thursday’s hearing before the Senate Committee on Health, Education, Labor, and Pensions, punctuated by occasional protests, served as a referendum of sorts on the value of the Education Department. Republicans said it had saddled schools with red tape without improving student outcomes. Democrats said the department protects students’ civil rights and funds essential services.

    Democrats also pressed McMahon on Trump’s threats to withhold federal funding from schools that violate his executive orders and on the details of a potential reorganization — questions that McMahon largely deflected as ones she could better answer after she takes office.

    “It’s almost like we’re being subjected to a very elegant gaslighting here,” said Sen. Maggie Hassan, a Democrat from New Hampshire.


    Even as Trump has called for the Education Department to be eliminated and schooling to be “returned to the states,” he’s also sought to expand its mission with executive orders threatening the funding of schools that employ diversity, equity, and inclusion practices or teach that racism and discrimination were part of America’s founding. The federal government is barred by law from setting local curriculum, as Republican Sen. Lisa Murkowski of Alaska pointed out during the hearing.

    In a tense exchange, Sen. Chris Murphy, a Democrat from Connecticut who’s championed school desegregation and diversity efforts in education, asked McMahon how schools would know if they were running a program that violates Trump’s executive order seeking to root out “radical indoctrination” in K-12 schools. Many schools have no idea what’s allowed, Murphy said, because the order doesn’t clearly define what’s prohibited.

    McMahon said in her view, celebrating Martin Luther King Jr. Day and Black History Month should be permitted, after Murphy noted that U.S. Department of Defense schools would no longer celebrate Black History Month in response to Trump’s order.

    But McMahon would not say that running affinity groups for students from certain racial or ethnic backgrounds, such as a Black engineers club or an after-school club for Vietnamese American students, was permitted. She also would not say whether schools might put their federal funding at risk by teaching an African American history class or other ethnic studies program.

    “That’s pretty chilling,” Murphy said. “You’re going to have a lot of educators and a lot of principals and administrators scrambling right now.”

    Later in the confirmation hearing, McMahon agreed schools should teach “the good, the bad, and the ugly” parts of U.S. history, and that it’s up to states, not the Department of Education, to establish curriculum.

    McMahon’s record on DEI has sometimes been at odds with the Trump administration. She backed diversity issues when she served on the Connecticut State Board of Education, the Washington Post reported.

    During her hearing, McMahon said DEI programs are “tough,” because while they’re put in place to promote diversity and inclusion, they can have the opposite effect. She pointed to examples of Black and Hispanic students attending separate graduation ceremonies — though those are typically held to celebrate the achievements of students of color, not to isolate them.

    Related: What might happen if the Education Department were closed?

    McMahon told the committee that many Americans are experiencing an educational system in decline — she pointed to sobering national test scores, crime on college campuses, and high youth suicide rates — and said it was time for a renewed focus on teaching reading, math, and “true history.”

    “In many cases, our wounds are caused by the excessive consolidation of power in our federal education establishment,” she said. “So what’s the remedy? Fund education freedom, not government-run systems. Listen to parents, not politicians. Build up careers, not college debt. Empower states, not special interests. Invest in teachers, not Washington bureaucrats.”

    Republican senators reiterated these themes, arguing that bureaucrats in Washington had had their chance and that it was time for a new approach.

    They asked McMahon about Trump administration priorities such as expanding school choice, including private school vouchers, and interpreting Title IX to bar transgender students from restrooms and sports teams aligned with their gender identities.

    McMahon said she was “happy” to see the Biden administration’s rules on Title IX vacated, and she supported withholding federal funds from colleges that did not comply with the Trump administration’s interpretation of the law.

    Related: Trump wants to shake up education. What that could mean for a charter school started by a GOP senator’s wife

    Teachers unions and other critics of McMahon have said she lacks the proper experience to lead the Education Department, though McMahon and others have pointed to her time serving on the Connecticut State Board of Education, as a trustee of Sacred Heart University, and her role as chair of the America First Policy Institute, where she advocated for private school choice, apprenticeships, and career education.

    McMahon also ran the Small Business Administration in Trump’s first administration. Her understanding of the federal bureaucracy is an asset, supporters say.

    Sen. Tim Scott, a Republican from South Carolina, said McMahon’s background made her uniquely suited to tackle the pressing challenges facing the American education system today.

    Related: What education could look like under Trump and Vance 

    McMahon said multiple times that parents of children with disabilities should not worry about federal funding being cut for the Individuals with Disabilities Education Act, though she said it was possible that the U.S. Department of Health and Human Services would administer the money instead of the Education Department.

    But it appeared that McMahon had limited knowledge of the rights outlined in IDEA, the landmark civil rights law that protects students with disabilities. And she said it was possible that civil rights enforcement — a large portion of which is related to complaints about children with disabilities not getting the services to which they’re entitled — would move to the U.S. Department of Justice.

    Dismantling the Education Department by moving key functions to other departments is a tenet of Project 2025, the playbook the conservative Heritage Foundation developed for a second Trump administration. Most of these functions are mandated in federal law, and moving them would require congressional approval.

    McMahon struggled to articulate the goals of IDEA beyond saying students would be taken care of and get the assistance and technology they need.

    “There is a reason that the Department of Education and IDEA exist, and it is because educating kids with disabilities can be really hard and it takes the national commitment to get it done,” Hassan, the New Hampshire senator, said. “That’s why so many people are so concerned about this proposal to eliminate the department. Because they think kids will once again be shoved aside, and especially kids with disabilities.”

    McMahon also could not name any requirements of the Every Student Succeeds Act, the federal law that replaced No Child Left Behind. ESSA requires states to identify low-performing schools and intervene to improve student learning, but it gives states more flexibility in how they do so than the previous law.

    McMahon seemed open to reversing some of the cuts enacted by the U.S. DOGE Service, the cost-cutting initiative led by billionaire Elon Musk.

    She said, if confirmed, she would look into whether staff who’d been placed on administrative leave — including some who investigate civil rights complaints — should return. She also said she’d assess the programs that were cut when DOGE terminated 89 contracts at the Institute of Education Sciences and 29 training grants.

    Sen. Susan Collins, a Republican from Maine, said her office had heard from a former teacher who developed an intensive tutoring strategy that was used in a dozen schools in the state. The teacher had a pending grant application to evaluate the program and its effect on student outcomes, and the teacher worried it would be in jeopardy. Collins asked if the department should keep collecting that kind of data so it could help states determine what’s working for kids.

    “I’m not sure yet what the impact of all of those programs are,” McMahon said. “There are many worthwhile programs that we should keep, but I’m not yet apprised of them.”

    The Senate education committee is scheduled to vote on McMahon’s confirmation on Feb. 20.

    This story was produced by Chalkbeat and reprinted with permission. 


  • Effect of Institutional Autonomy on Academic Freedom in Higher Education Institutions in Ghana

    Effect of Institutional Autonomy on Academic Freedom in Higher Education Institutions in Ghana

    By Mohammed Bashiru and Professor Cai Yonghong

    Introduction

    The idea of institutional autonomy in higher education institutions (HEIs) naturally comes up when discussing academic freedom. The two ideas are connected: they are intertwined through the many procedures and agreements that link people, institutions, the state, and civil society. Academic freedom and institutional autonomy are not interchangeable, but neither can they be separated, and the loss of one diminishes the other. Academics view protecting both as a crucial requirement for a successful HEI. In most African nations, for instance, institutional autonomy and academic freedom are widely acknowledged as essential to the effective operation of universities.

    How does institutional autonomy influence academic freedom in higher education institutions in Ghana?

    In some countries, universities have been subject to government control, with appointments and administrative positions influenced by political interests, leading to violations of academic autonomy and freedom. Autonomy is a crucial element in safeguarding academic freedom, which requires universities to uphold the academic freedom of their community and for the state to respect the right to science of the broader community. Universities offer the necessary space for the exercise of academic freedom, and thus, institutional autonomy is necessary for its preservation. The violation of institutional autonomy undermines not only academic freedom but also the pillars of self-governance, tenure, and individual rights and freedoms of academics and students. Universities should be self-governed by an academic community to uphold academic freedom, which allows for unrestricted advancement of scientific knowledge through critical thinking, without external limitations.

    How does corporate governance affect the relationship between institutional autonomy and academic freedom?

    Corporate governance mechanisms, such as board diversity, board independence, transparency, and accountability, can ensure that the interests of various stakeholders, including students, faculty, and the government, are represented and balanced. The incorporation of corporate governance into academia introduces a set of values and priorities that can restrict the traditional autonomy and academic freedom that define a self-governing profession. This growing tension has led to concerns about the erosion of academia’s self-governance, with calls for policies that safeguard academic independence and uphold the values of intellectual freedom and collaboration that are foundational to higher education institutions. Nonetheless, by promoting efficient corporate governance, higher education institutions can help safeguard academic freedom and institutional autonomy despite external pressures.

    Is there a significant difference between the perceptions of males and females regarding institutional autonomy, academic freedom, and their relationship?

    The appointment process for university staff varies across countries, but it is essential that non-academic factors such as gender, ethnicity, or interests do not influence the selection of qualified individuals who are necessary for the institution’s quality. Unfortunately, studies indicate that women are often underrepresented in leadership positions and decision-making processes related to academic freedom and institutional autonomy. This underrepresentation can perpetuate biases and lead to a lack of diversity in decision-making. One solution to address these disparities is to examine gender as a factor of difference to identify areas for improvement and promote gender equality in decision-making processes. By promoting diversity and inclusivity, academic institutions can create a more equitable environment that protects institutional autonomy and promotes academic freedom for everyone, regardless of their gender.

    Methodology and Conceptual framework

    The quantitative and predictive nature of the investigation necessitated an explanatory research design, chosen because it enabled us to establish a clear causal relationship between the exogenous and endogenous latent variables. A simple random sampling technique was used to collect data from an online survey administered to 128 academics at selected Ghanaian universities.

    The conceptual framework, which explains the interrelationships among the constructs in the context of the study, is presented in the article. Its formulation was guided by the proposed research questions and the supporting theories invoked in the study.

    Conclusions and Implications

    Institutional autonomy significantly predicts academic freedom at a strong level within higher education institutions in Ghana. Corporate governance can restrict academic freedom when it is directed toward immediate financial or marketable gains, but in this study it plays a key role in transmitting the effect of institutional autonomy. Additionally, there is a significant difference in perception between females and males concerning the institutional autonomy–academic freedom predictive relationship. Practically, higher education institutions, particularly in Ghana, should strive to maintain a level of autonomy while also ensuring that academic freedom is respected and protected. This can be achieved through decentralized governance structures that allow for greater participation of academics in decision-making processes. Institutions should actively engage stakeholders, including academics, in discussions and decisions related to institutional autonomy and academic freedom. This will ensure that diverse perspectives are considered in policy development.

    This blog is based on an article published in Policy Reviews in Higher Education (online 02 January 2025) https://www.tandfonline.com/doi/full/10.1080/23322969.2024.2444609

    Bashiru Mohammed is a final-year PhD student at the Faculty of Education, Beijing Normal University. He also holds a master’s degree in higher education and student affairs from the same university. His research interests include school management and administration, TVET education, and skills development.

    Professor Cai Yonghong is a professor at the Faculty of Education, Beijing Normal University. She has published many articles, led several domestic and international educational projects, and written several government consulting reports. Her research interests include teacher innovation, teacher expertise, teacher salaries, and school management.


    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education


  • Why the NIH cuts are so wrong (opinion)

    Why the NIH cuts are so wrong (opinion)

    Indirect cost recovery (ICR) seems like a boring, technical budget subject. In reality, it is a major source of the long-running budget crises at public research universities. Misinformation about ICR has also confused everyone about the university’s public benefits.

    These paired problems—concealed budget shortfalls and misinformation—didn’t cause the ICR cuts being implemented by the NIH acting director, one Matthew J. Memoli, M.D. But they are the basis of Memoli’s rationale.

    Trump’s people will sustain these cuts unless academics can create an honest counternarrative that inspires wider opposition. I’ll sketch a counternarrative below.

    The sudden policy change is that the NIH is to cap indirect cost recovery at 15 percent of the direct costs of a grant, regardless of the existing negotiated rate. Multiple lawsuits have been filed challenging the legality of the change, and courts have temporarily blocked it from going into effect.

    Memoli’s notice of the cap, issued Friday, has a narrative that is wrong but internally coherent and plausible.

    It starts with three claims about the $9 billion of the overall $35 billion research funding budget that goes to indirect costs:

    • Indirect cost allocations are in zero-sum competition with direct costs, therefore reducing the total amount of research.
    • Indirect costs are “difficult for NIH to oversee” because they aren’t entirely entailed by a specific grant.
    • “Private foundations” cap overhead charges at 10 to 15 percent of direct costs and all but a handful of universities accept those grants.

    Memoli offers a solution: Define a “market rate” for indirect costs as that allowed by private foundations (Gates, Chan Zuckerberg, some others). The implication is the foundations’ rate captures real indirect costs rather than inflated or wishful costs that universities skim to pad out bloated administrations. On this analytical basis, currently wasted indirect costs will be reallocated to useful direct costs, thus increasing rather than decreasing scientific research.

    There’s a false logic here that needs to be confronted.

    The strategy so far to resist these cuts seems to focus on outcomes rather than on the actual claims or the underlying budgetary reality of STEM research in the United States. Scientific groups have called the ICR rate cap an attack on U.S. scientific leadership and on public benefits to U.S. taxpayers (childhood cancer treatments that will save lives, etc.). This is all important to talk about. And yet these claims don’t refute the NIH logic. Nor do they get at the hidden budget reality of academic science.

    On the logic: Indirect costs aren’t in competition with direct costs because direct and indirect costs pay for different categories of research ingredients.

    Direct costs apply to the individual grant: costs for chemicals, graduate student labor, equipment, etc., that are only consumed by that particular grant.

    Indirect costs, also called facilities and administrative (F&A) costs, support infrastructure used by everybody in a department, discipline, division, school or university. Infrastructure is the library that spends tens of thousands of dollars a year to subscribe to just one important journal that is consulted by hundreds or thousands of members of that campus community annually. Infrastructure is the accounting staff that writes budgets for dozens and dozens of grant applications across departments or schools. Infrastructure is the building, new or old, that houses multiple laboratories: If it’s new, the campus is still paying it off; if it’s old, the campus is spending lots of money keeping it running. These things are the tip of the iceberg of the indirect costs of contemporary STEM research.

    In response to the NIH’s social media announcement of its indirect costs rate cut, Bertha Madras had a good starter list of what indirects involve.

    Screenshot via Christopher Newfield

    And there are also people who track all these materials, reorder them, run the daily accounting, etc.—honestly, people who aren’t directly involved in STEM research have a very hard time grasping its size and complexity, and therefore its cost.

    As part of refuting the claim that NIH can just not pay for all this and therefore pay for more research, the black box of research needs to be opened up, Bertha Madras–style, and properly narrated as a collaborative (and exciting) activity.

    This matter of human activity gets us to the second NIH-Memoli claim, which involves toting up the processes, structures, systems and people that make up research infrastructure and adding up their costs. The alleged problem is that it is “difficult to oversee.”

    Very true, but difficult things can and often must be done, and that is what happens with indirect costs. Every university compiles indirect costs as a condition of receiving research grants. Specialized staff (more indirect costs!) use a large amount of accounting data to sum up these costs, and they use expensive information technology to do this to the correct standard. University staff then negotiate with federal agencies for a rate that addresses their particular university’s actual indirect costs. These rates are set for a time, then renegotiated at regular intervals to reflect changing costs or infrastructural needs.

    The fact that this process is “difficult” doesn’t mean that there’s anything wrong with it. This claim shouldn’t stand—unless and until NIH convincingly identifies specific flaws.

    As stated, the NIH-Memoli claim that cutting overhead funding will increase science is easily falsifiable. (And we can say this while still advocating for reducing overhead costs, including the ever-rising compliance costs imposed by federal research agencies. But we would do that by reducing the mandated costs, not by capping recovery.)

    The third statement—that private foundations allow only 10 to 15 percent rates of indirect cost recovery—doesn’t mean anything in itself. Perhaps Gates et al. have the definitive analysis of true indirect costs that they have yet to share with humanity. Perhaps Gates et al. believe that the federal taxpayer should fund the university infrastructure that they are entitled to use at a massive discount. Perhaps Gates et al. use their wealth and prestige to leverage a better deal for themselves at the expense of the university just because they can. Which of these interpretations is correct? NIH-Memoli assume the first but don’t actually show that the private foundation rate is the true rate. (In reality, the second explanation is the best.)

    This kind of critique is worth doing, and it can be expanded. The NIH view reflects right-wing public-choice economics that treat teachers, scientists et al. as simple gain maximizers producing private, not public goods. This means that their negotiations with federal agencies will reflect their self-interest, while in contrast the “market rate” is objectively valid. We do need to address these false premises and bad conclusions again and again, whenever they arise.

    However, this critique is only half the story. The other half is the budget reality of large losses on sponsored research, all incurred as a public service to knowledge and society.

    Take that NIH image above. It makes no logical sense to put the endowments of three very untypical universities next to their ICR rates: They aren’t connected. It makes political narrative sense, however: The narrative is that fat-cat universities are making a profit on research at regular taxpayers’ expense, and getting even fatter.

    The only way to deal with this very effective, very entrenched Republican story is to come clean on the losses that universities incur. The reality is that existing rates of indirect cost recovery do not cover actual indirect costs, but require subsidy from the university that performs the research. ICR is not icing on the budget cake that universities can do without. ICR buys only a portion of the indirect costs cake, and the rest is purchased by each university’s own institutional funds.

    For example, here are the top 16 university recipients of federal research funds. One of the largest in terms of NIH funding (through the Department of Health and Human Services) is the University of California, San Francisco, winning $795.6 million in grants in fiscal year 2023. (The National Science Foundation’s Higher Education Research and Development (HERD) Survey tables for fiscal year 2023 are here.)

    [Table: top 16 university recipients of federal research funding, FY2023 HERD survey]

    UCSF’s negotiated indirect cost recovery rate is 64 percent. This means that it has shown HHS and other agencies detailed evidence that it has real indirect costs in something like this amount (more on “something like” in a minute). It means that HHS et al. have accepted UCSF’s evidence of their real indirect costs as valid.

    If the total of UCSF’s HHS $795.6 million is received with a 64 percent ICR rate, this means that every $1.64 of grant funds has $0.64 in indirect funds and one dollar in direct. The math estimates that UCSF receives about $310 million of its HHS funds in the form of ICR.

    Now, the new NIH directive cuts UCSF from 64 percent to 15 percent. That’s a reduction of about 77 percent. Reduce $310 million by that proportion and you have UCSF losing about $238 million in one fell swoop. There’s no mechanism in the directive for shifting that into the direct costs of UCSF grants, so let’s assume a full loss of $238 million.
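The back-of-the-envelope arithmetic above can be sketched in a few lines. The dollar figures are the article’s own estimates (UCSF’s FY2023 HHS funding at its negotiated 64 percent rate), not official budget data, and the helper function is purely illustrative:

```python
# Sketch of the article's indirect-cost arithmetic.
# Figures are the article's estimates, not official budget data.

def icr_split(total_award, icr_rate):
    """Split a total award into direct and indirect dollars.

    At a negotiated rate r, every $1 of direct cost carries $r of
    indirect recovery, so each $(1 + r) of the award is $1 direct.
    """
    direct = total_award / (1 + icr_rate)
    indirect = total_award - direct
    return direct, indirect

total = 795.6e6                          # UCSF's HHS awards, FY2023
direct, indirect_64 = icr_split(total, 0.64)
print(round(indirect_64 / 1e6))          # ~310 (million) recovered at 64%

# Under a 15 percent cap, indirect recovery on the same direct base:
indirect_15 = direct * 0.15
print(round((indirect_64 - indirect_15) / 1e6))   # ~238 (million) lost
```

The 15 percent cap on the unchanged direct-cost base leaves roughly $73 million of recovery, which matches the article’s order-of-magnitude estimate of the loss.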

    In Memoli’s narrative, this $238 million is the Reaganite’s “waste, fraud and abuse.” The remaining approximately $71 million is legitimate overhead as measured (wrongly) by what Gates et al. have managed to force universities to accept in exchange for the funding of their researchers’ direct costs.

    But the actual situation is even worse than this. It’s not that UCSF now will lose $238 million on their NIH research. In reality, even at (allegedly fat-cat) 64 percent ICR rates, they were already losing tons of money. Here’s another table from the HERD survey.

    [Table 22: university R&D expenditures by source of funds, including “Institution Funds,” FY2023 HERD survey]

    There’s UCSF in the No. 2 national position, a major research powerhouse. It spends more than $2 billion a year on research. However, moving across the columns from left to right, you see federal government, state and local government, and then this category, “Institution Funds.” As with most of these big research universities, this is a huge number. UCSF reports to the NSF that it spends more than $500 million a year of its own internal funds on research.

    The reason? Extramurally sponsored research, almost all in science and engineering, loses massive amounts of money even at current recovery rates, day after day, year in, year out. This is not because anyone is doing anything wrong. It is because the infrastructure of contemporary science is very expensive.

    Here’s where we need to build a full counternarrative to the existing one. The existing one, shared by university administrators and Trumpers alike, posits the fiction that universities break even on research. UCSF states, “The University requires full F&A cost recovery.” This is actually a regulative ideal that has never been achieved.

    The reality is this:

    UCSF spends half a billion dollars of its own funding to support its $2 billion total in research. That money comes from the state, from tuition, from clinical revenues and some—less than you’d think—from private donors and corporate sponsors. If NIH’s cuts go through, UCSF’s internal losses on research—the money it has to make up—suddenly jump from an already-high $505 million to $743 million in the current year. This is a complete disaster for the UCSF budget. It will massively hit research, students, the campuses’ state employees, everything.

    The current strategy of chronicling the damage from cuts is good. But it isn’t enough. I’m pleased to see the Association of American Universities, a group of high-end research universities, stating plainly that “colleges and universities pay for 25 percent of total academic R&D expenditures from their own funds. This university contribution amounted to $27.7 billion in FY23, including $6.8 billion in unreimbursed F&A costs.” All university administrations need to shift to this kind of candor.

    Unless the new NIH cuts are put in the context of continuous and severe losses on university research, the public, politicians, journalists, et al. cannot possibly understand the severity of the new crisis. And it will get lost in the blizzard of a thousand Trump-created crises, one of which is affecting pretty much every single person in the country.

    Finally, our full counternarrative needs a third element: showing that systemic fiscal losses on research are in fact good, marvelous, a true public service. A loss on a public good is not a bad and embarrassing fact. Research is supposed to lose money: The university loses money on science so that society gets long-term gains from it. Science has a negative return on investment for the university that conducts it so that there is a massively positive ROI for society, of both the monetary and nonmonetary kind. Add up the education, the discoveries, the health, social, political and cultural benefits: The university courts its own endless fiscal precarity so that society benefits.

    We should also remind everyone that the only people who make money on science are in business. And even there, ROI can take years or decades. Commercial R&D, with a focus on product development and sales, also runs losses. Think of “AI”: Microsoft alone is spending $80 billion on it in 2025, on top of $50 billion in 2024, with no obviously strong revenues yet in sight. This is a huge amount of risky investment—it compares to $60 billion for federal 2023 R&D expenditures on all topics in all disciplines. I’m an AI skeptic but appreciate Microsoft’s reminder that new knowledge means taking losses and plenty of them.

    These up-front losses generate much greater future value of nonmonetary as well as monetary kinds. Look at the University of Pennsylvania, the University of Wisconsin at Madison, Harvard University, et al. in Table 22 above. The sector spent nearly $28 billion of its own money generously subsidizing sponsors’ research, including by subsidizing the federal government itself.

    There’s much more to say about the long-term social compact behind this—how the actual “private sector” gets 100 percent ICR or significantly more, how state cuts factor into this, how student tuition now subsidizes more of STEM research than is fair, how research losses have been a denied driver of tuition increases. There’s more to say about the long-term decline of public universities as research centers that, when properly funded, allow knowledge creation to be distributed widely in the society.

    But my point here is that opening the books on large everyday research losses, especially biomedical research losses of the kind NIH creates, is the only way that journalists, politicians and the wider public will see through the Trumpian lie about these ICR “efficiencies.” It’s also the only way to move toward the full cost recovery that universities deserve and that research needs.

    Source link

  • Counslr Launches in New Mexico and Illinois; Expands Footprint in New York to Increase Access to Mental Health Support

    Counslr Launches in New Mexico and Illinois; Expands Footprint in New York to Increase Access to Mental Health Support

    NEW YORK, NY – Counslr, a leading B2B mental health and wellness platform, announced today that it has expanded its footprint into the State of New Mexico, starting with a partnership with Vista Nueva High School in Aztec, NM, and into the State of Illinois, starting with a partnership with the Big Hollow School District in Ingleside, IL. These initial partnerships will empower students and staff to prioritize their mental health by giving them unlimited access to wellness resources and live texting sessions with Counslr’s licensed and vetted mental health support professionals, who are available on demand, 24/7/365. By increasing access to Counslr’s round-the-clock support, Vista Nueva and Big Hollow aim to bridge gaps in mental health care, reaching students and staff who previously did not or could not access support, whether due to cost, inconvenience, or stigma.

    One in six youth suffer from a mental illness, but the majority do not receive mental health support due to substantial obstacles to care. Mental health is an even bigger challenge in rural America due to unique barriers, including fewer providers, which result in longer wait times and insufficient access to crucial mental health services. This resource scarcity underscores the urgency for additional resources and innovative solutions to bridge this critical gap in mental health care for school communities.

    “We are happy to be able to offer students another tool that they can use to support their mental well-being. Knowing that students have been able to speak with a professional outside of school hours helps us know this app was needed and is useful,” states Rebekah Deane, Professional School Counselor, Vista Nueva High School. “We hope this tool also assists students in learning how to navigate systems so that when they graduate high school they know these options exist and they can continue to seek out support when necessary.”

    As factors such as academic pressures, social media influence, burnout and world events contribute to heightened stress levels and mental health challenges, schools throughout the country are recognizing the growing need to offer more accessible resources and preventative mental health services to both students and staff.

    “Counslr provides an extremely easy-to-access platform for those who otherwise may not seek the help they need, and we are very excited to join Counslr in this partnership. We are all very well aware of the impact that technology has had on the mental health of our students, and we feel that Counslr can meet our students in a setting they are comfortable with,” states Bob Gold, Big Hollow School District Superintendent. “Outside of our students, we are thrilled to be able to offer this service to the amazing adults who work with our students every day. There are so many families dealing with some sort of trauma, and the life of an educator is no different. These adults tend to give so much of themselves to their students, so we strongly feel that our effort here to join with Counslr is our way of providing an opportunity for our educators to focus on their own mental health.”

    In addition to this geographic expansion, Counslr has also expanded its existing footprint in states like New York, most recently partnering with the Silver Creek Central School District to support its students and staff.

    “We know mental health needs are on the rise, for students and adults.  To me, Counslr is a resource our students and staff both deserve,” states Dr. Katie Ralston, Superintendent, Silver Creek Central. “In the beginning stages at Silver Creek Central, it has proven to be an asset, as it offers access to everyone on the spot, any day, for any situation.”

    “Supporting diverse populations of students and faculty across the country clearly illustrates that mental health knows no boundaries,” said Josh Liss, Counslr CEO, adding, “With 86% of Counslr’s users being first-time care seekers, we strive to reach these silent sufferers who need help but do not or cannot access it, no matter where they are located.”

    ABOUT COUNSLR

    Counslr is a text-based mental health support application that provides unlimited access to robust wellness resources and live texting sessions with licensed professionals, 24/7/365. Users can access support on demand within two minutes of opening the app, or by scheduled appointment. Through real-time texting, users enjoy private, one-on-one communication with a licensed counselor, anytime, anywhere. Counslr was designed to help individuals deal with life’s day-to-day issues, empowering them to address concerns while they are “small” to help ensure that they stay “small”. Counslr partners with organizations of all shapes and sizes (companies, unions, nonprofits, universities/colleges, high schools, etc.) so that these entities can provide Counslr’s services to their employees, members, and students at no direct cost. For more information, please visit www.counslr.com.

    eSchool News Staff

    Source link

  • How foreign aid helps the country that gives it

    How foreign aid helps the country that gives it

    In international relations, nation states vie for power and security. They do this through diplomacy and treaties, which establish how they should behave towards one another.

    If those agreements don’t work, states resort to violence to achieve their goals. 

    In addition to diplomatic relations and wars, states can also project their interests through soft power. Dialogue, compromise and consensus are all part of soft power. 

    Foreign assistance, where one country provides money, goods or services to another without explicitly asking for anything in return, is a form of soft power because it can make a needy nation dependent on, or beholden to, a wealthier one. 

    In 2023, the U.S. government obligated some $68 billion in foreign aid, spread across more than 10 agencies and more than 200 countries. The U.S. Agency for International Development (USAID) alone spent $38 billion in 2023 and operated in 177 countries. 

    Spreading good will through aid

    USAID has been fundamental to projecting a positive image of the United States throughout the world. In an essay published by the New York Times, Samantha Power, the former administrator of USAID, described how nearly $20 billion of its assistance went to health programs that combat malaria, tuberculosis, H.I.V./AIDS and infectious disease outbreaks, and to humanitarian assistance that responds to emergencies and helps stabilize war-torn regions.

    Other USAID investments, she wrote, give girls access to education and the ability to enter the work force. 

    When President John F. Kennedy established USAID in 1961, he said in a message to Congress: “We live at a very special moment in history. The whole southern half of the world — Latin America, Africa, the Middle East, and Asia — are caught up in the adventures of asserting their independence and modernizing their old ways of life. These new nations need aid in loans and technical assistance just as we in the northern half of the world drew successively on one another’s capital and know-how as we moved into industrialization and regular growth.”

    He acknowledged that the reason for the aid was not totally humanitarian.

    “For widespread poverty and chaos lead to a collapse of existing political and social structures which would inevitably invite the advance of totalitarianism into every weak and unstable area,” Kennedy said. “Thus our own security would be endangered and our prosperity imperilled. A program of assistance to the underdeveloped nations must continue because the nation’s interest and the cause of political freedom require it.” 

    Investing in emerging democracies

    The fear of communism was obvious in 1961. The motivation behind U.S. foreign assistance is always both humanitarian and political; the two can never be separated. 

    Today, the United States is competing with China and its Belt and Road Initiative (BRI) for global influence through foreign assistance. The BRI was started by Chinese President Xi Jinping in 2013. It is global in scope, with its Silk Road Economic Belt connecting China with Central Asia and Europe, and its 21st Century Maritime Silk Road connecting China with South and Southeast Asia, Africa and Latin America.

    Most of the projects involve infrastructure improvement — things like roads and bridges, mass transit and power supplies — and increased trade and investment. 

    As of 2023, 149 countries had joined the BRI. In the first half of 2023 alone, agreements totaling $43 billion were signed. Its lending policy has made China the world’s largest debt collector.

    While Chinese foreign assistance often requires repayment, the United States has dispensed money through USAID with no direct return. Trump thinks that needs to change. “We get tired of giving massive amounts of money to countries that hate us, don’t we?” he said on 27 January 2024. 

    Returns are hard to see

    Traditionally, U.S. foreign assistance, unlike the Chinese BRI, has not been transactional. There is no guarantee that what is spent will have a direct impact. Soft power is not quantifiable. Questions of image, status and prestige are hard to measure.

    Beyond the humanitarian case of helping millions of people, Samantha Power gave another, more transactional reason for supporting U.S. foreign assistance.

    “USAID has generated vast stores of political capital in the more than 100 countries where it works, making it more likely that when the United States makes hard requests of other leaders — for example, to send peacekeepers to a war zone, to help a U.S. company enter a new market or to extradite a criminal to the United States — they say yes,” she wrote.

    Trump is known as a “transactional” president, but even this argument has not convinced him to continue to support USAID. 

    Soft power is definitely not part of his vision of the art of the deal.


     

    Three questions to consider:

    1. What is “foreign aid”?
    2. Why would one country give money to another without asking for anything in return?
    3. Do you think wealthier nations should be obliged to help poorer countries?


     

    Source link

  • Data, Decisions, and Disruptions: Inside the World of University Rankings

    Data, Decisions, and Disruptions: Inside the World of University Rankings

    University rankings are pretty much everywhere. The earliest university rankings in the U.S. date back to the early 1900s, and the modern ones to the 1983 debut of the U.S. News and World Report rankings. But the kind of rankings we tend to talk about now, international or global rankings, really only date back to 2003 with the creation of the Shanghai Academic Ranking of World Universities.

    Over the decade that followed that first publication, a triumvirate emerged at the top of the rankings pyramid: the Shanghai Rankings, run by a group of academics at Shanghai Jiao Tong University; the Quacquarelli Symonds, or QS, Rankings; and the Times Higher Education World University Rankings. Between them, these three rankings producers, particularly QS and Times Higher, created a bewildering array of new rankings, dividing the world up by geography and field of study, mostly based on metrics relating to research.

    Joining me today is Duncan Ross, the former Chief Data Officer of the Times Higher Education rankings. He took over those rankings at a time when it seemed like the higher education world might be running out of things to rank. Under his leadership, though, the Times Higher Impact Rankings, which are based around the 17 UN Sustainable Development Goals, were developed. And that’s created a genuinely new hierarchy in world higher education, at least among those institutions that choose to submit to the rankings.  

    My discussion with Duncan today covers a wide range of topics related to his time at THE. But the most enjoyable bit by far, for me anyway, was the one about the genesis of the Impact Rankings. Listen especially for the part where Duncan talks about how the Impact Rankings came about because THE realized that its industry rankings weren’t very reliable. Fun fact: around that time I got into a very public debate with Phil Baty, the editor of the Times Higher, on exactly that subject. Which means maybe, just maybe, I’m kind of a godparent to the Impact Rankings. But that’s just me. You may well find other points of interest in this very compelling interview. Let’s hand things over to Duncan.


    The World of Higher Education Podcast
    Episode 3.20 | Data, Decisions, and Disruptions: Inside the World of University Rankings 

    Transcript

    Alex Usher: So, Duncan, let’s start at the beginning. I’m curious—what got you into university rankings in the first place? How did you end up at Times Higher Education in 2015?

    Duncan Ross: I think it was almost by chance. I had been working in the tech sector for a large data warehousing company, which meant I was working across many industries—almost every industry except higher education. I was looking for a new challenge, something completely different. Then a friend approached me and mentioned a role that might interest me. So I started talking to Times Higher Education, and it turned out it really was a great fit.

    Alex Usher: So when you arrived at Times Higher in 2015, the company already had a pretty full set of rankings products, right? They had the global rankings, the regional rankings, which I think started around 2010, and then the subject or field of study rankings came a couple of years later. When you looked at all of that, what did you think? What did you feel needed to be improved?

    Duncan Ross: Well, the first thing I had to do was actually bring all of that production in-house. At the time, even though Times Higher had rankings, they were produced by Clarivate—well, Thomson Reuters, as it was then. They were doing a perfectly good job, but if you’re not in control of the data yourself, there’s a limit to what you can do with it.

    Another key issue was that, while it looked like Times Higher had many rankings, in reality, they had just one: the World University Rankings. The other rankings were simply different cuts of that same data. And even within the World University Rankings, only 400 universities were included, with a strong bias toward Europe and North America. About 26 or 27 percent of those institutions were from the U.S., which didn’t truly reflect the global landscape of higher education.

    So the challenge was: how could we broaden our scope and truly capture the world of higher education beyond the usual suspects? And beyond that, were there other aspects of universities that we could measure, rather than just relying on research-centered metrics? There are good reasons why international rankings tend to focus on research—it’s the most consistent data available—but as you know, it’s certainly not the only way to define excellence in higher education.

    Alex Usher: Oh, yeah. So how did you address the issue of geographic diversity? Was it as simple as saying, “We’re not going to limit it to 400 universities—we’re going to expand it”? I think the ranking now includes over a thousand institutions, right? I’ve forgotten the exact number.

    Duncan Ross: It’s actually around 2,100 or so, and in practice, the number is even larger because, about two years ago, we introduced the concept of reporter institutions. These are institutions that haven’t yet met the criteria to be fully ranked but are already providing data.

    The World University Rankings have an artificial limit because there’s a threshold for participation based on the number of research articles published. That threshold is set at 1,000 papers over a five-year period. If we look at how many universities could potentially meet that criterion, it’s probably around 3,000, and that number keeps growing. But even that is just a fraction of the higher education institutions worldwide. There are likely 30,000—maybe even 40,000—higher education institutions globally, and that’s before we even consider community colleges.

    So, expanding the rankings was about removing artificial boundaries. We needed to reach out to institutions in parts of the world that weren’t well represented and think about higher education in a way that wasn’t so Anglo-centric.

    One of the biggest challenges I’ve encountered—and it’s something people inevitably fall into—is that we tend to view higher education through the lens of our own experiences. But higher education doesn’t function the same way everywhere. It’s easy to assume that all universities should look like those in Canada, the U.S., or the UK—but that’s simply not the case.

    To improve the rankings, we had to be open-minded, engage with institutions globally, and carefully navigate the challenges of collecting data on such a large scale. As a result, Times Higher Education now has data on around 5,000 to 6,000 universities—a huge step up from the original 400. Still, it’s just a fraction of the institutions that exist worldwide.

    Alex Usher: Well, that’s exactly the mission of this podcast—to get people to think beyond an Anglo-centric view of the world. So I take your point that, in your first couple of years at Times Higher Education, most of what you were doing was working with a single set of data and slicing it in different ways.

    But even with that, collecting data for rankings isn’t simple, right? It’s tricky, you have to make a lot of decisions, especially about inclusion—what to include and how to weight different factors. And I think you’ve had to deal with a couple of major issues over the years—one in your first few years and another more recently.

    One was about fractional counting of articles, which I remember went on for quite a while. There was that big surge of CERN-related articles, mostly coming out of Switzerland but with thousands of authors from around the world, which affected the weighting. That led to a move toward fractional weighting, which in theory equalized things a bit—but not everyone agreed.

    More recently, you’ve had an issue with voting, right? What I think was called a cartel of voters in the Middle East, related to the reputation rankings. Can you talk a bit about how you handle these kinds of challenges?

    Duncan Ross: Well, I think the starting point is that we’re always trying to evaluate things in a fair and consistent way. But inevitably, we’re dealing with a very noisy and messy world.

    The two cases you mentioned are actually quite different. One is about adjusting to the norms of the higher education sector, particularly in publishing. A lot of academics, especially those working within a single discipline, assume that publishing works the same way across all fields—that you can create a universal set of rules that apply to everyone. But that’s simply not the case.

    For example, the concept of a first author doesn’t exist in every discipline. Likewise, in some fields, the principal investigator (PI) is always listed at the end of the author list, while in others, that’s not the norm.

    One of the biggest challenges we faced was in fields dealing with big science—large-scale research projects involving hundreds or even thousands of contributors. In high-energy physics, for example, a decision was made back in the 1920s: everyone who participates in an experiment above a certain threshold is listed as an author in alphabetical order. They even have a committee to determine who meets that threshold—because, of course, it’s academia, so there has to be a committee.

    But when you have 5,000 authors on a single paper, that distorts the rankings. So we had to develop a mechanism to handle that. Ideally, we’d have a single metric that works in all cases—just like in physics, where we don’t use one model of gravity in some situations and a different one in others. But sometimes, you have to make exceptions. Now, Times Higher Education is moving toward more sophisticated bibliometric measures to address these challenges in a better way.

    The second issue you mentioned—the voting behavior in reputation rankings—is completely different because it involves inappropriate behavior. And this kind of issue isn’t just institutional; sometimes, it’s at the individual academic level.

    We’re seeing this in publishing as well, where some academics are somehow producing over 200 articles a year. Impressive productivity, sure—but is it actually viable? In cases like this, the approach has to be different. It’s about identifying and penalizing misbehavior.

    At the same time, we don’t want to be judge and jury. It’s difficult because, often, we can see statistical patterns that strongly suggest something is happening, but we don’t always have a smoking gun. So our goal is always to be as fair and equitable as possible while putting safeguards in place to maintain the integrity of the rankings.

    Alex Usher: Duncan, you hinted at this earlier, but I want to turn now to the Impact Rankings. This was the big initiative you introduced at Times Higher Education. Tell us about the genesis of those rankings—where did the idea come from? Why focus on impact? And why the SDGs?

    Duncan Ross: It actually didn’t start out as a sustainability-focused project. The idea came from my colleague, Phil Baty, who had always been concerned that the World University Rankings didn’t include enough measurement around technology transfer.

    So, we set out to collect data from universities on that—looking at things like income from consultancy and university spin-offs. But when the data came back, it was a complete mess—totally inconsistent and fundamentally unusable. So, I had to go back to the drawing board.

    That’s when I came across SDG 9—Industry, Innovation, and Infrastructure. I looked at it and thought, This is interesting. It was compelling because it provided an external framework.

    One of the challenges with ranking models is that people always question them—Is this really a good model for excellence? But with an external framework like the SDGs, if someone challenges it, I can just point to the United Nations and say, Take it up with them.

    At that point, I had done some data science work and was familiar with the tank problem, so I jokingly assumed there were probably 13 to 18 SDGs out there. (That’s a data science joke—those don’t land well 99% of the time.) But as it turned out, there were more SDGs, and exploring them was a real light bulb moment.

    The SDGs provided a powerful framework for understanding the most positive role universities can play in the world today. We all know—well, at least those of us outside the U.S. know—that we’re facing a climate catastrophe. Higher education has a crucial role to play in addressing it.

    So, the question became: How can we support that? How can we measure it? How can we encourage better behavior in this incredibly important sector?

    Alex Usher: The Impact Rankings are very different in that roughly half of the indicators—about 240 to 250 across all 17 SDGs—aren’t naturally quantifiable. Instead, they’re based on stories.

    For example, an institution might submit, This is how we combat organized crime or This is how we ensure our food sourcing is organic. These responses are scored based on institutional submissions.

    Now, I don’t know exactly how Times Higher Education evaluates them, but there has to be a system in place. How do you ensure that these institutional answers—maybe 120 to 130 per institution at most—are scored fairly and consistently when you’re dealing with hundreds of institutions?

    Duncan Ross: Well, I can tell you that this year, over 2,500 institutions submitted approved data—so it’s grown significantly. One thing to clarify, though, is that these aren’t written-up reports like the UK’s Teaching Excellence Framework, where universities can submit an essay justifying why they didn’t score as well as expected—what I like to call the dog ate my student statistics paper excuse.

    Instead, we ask for evidence of the work institutions have done. That evidence can take different forms—sometimes policies, sometimes procedures, sometimes concrete examples of their initiatives.

    The scoring process itself is relatively straightforward. First, we give some credit if an institution says they’re doing something. Then, we assess the evidence they provide to determine whether it actually supports their claim. But the third and most important part is that institutions receive extra credit if the evidence is publicly available. If you publish your policies or reports, you open yourself up to scrutiny, which adds accountability.

    A great example is SDG 5—Gender Equality—specifically around gender pay equity. If an institution claims to have a policy on gender pay equity, we check: Do you publish it? If so, and you’re not actually living up to it, I’d hope—and expect—that women within the institution will challenge you on it. That’s part of the balancing mechanism in this process.

    Now, how do we evaluate all this? Until this year, we relied on a team of assessors. We brought in people, trained them, supported them with our regular staff, and implemented a layer of checks—such as cross-referencing responses against previous years. Ultimately, human assessors were making the decisions.

    This year, as you might expect, we’re introducing AI to assist with the process. AI helps us filter out straightforward cases, leaving the more complex ones for human assessors. It also ensures that we don’t run into assessor fatigue. When someone has reviewed 15 different answers to the same question from various universities, the process can get a bit tedious—AI helps mitigate that.

    Alex Usher: Yeah, it’s like that experiment with Israeli judges, right? You don’t want to be the last case before lunch—you get a much harsher sentence if the judge is making decisions on an empty stomach. I imagine you must have similar issues to deal with in rankings.

    I’ve been really impressed by how enthusiastically institutions have embraced the Impact Rankings. Canadian universities, in particular, have really taken to them. I think we had four of the top ten last year and three of the top ten this year, which is rare for us. But the uptake hasn’t been as strong—at least not yet—in China or the United States, which are arguably the two biggest national players in research-based university rankings. Maybe that’s changing this year, but why do you think the reception has been so different in different parts of the world? And what does that say about how different regions view the purpose of universities?

    Duncan Ross: I think there’s definitely a case that different countries and regions have different approaches to the SDGs. In China, as you might expect, interest in the rankings depends on how well they align with current Communist Party priorities. You could argue that something similar happens in the U.S. The incoming administration has made it fairly clear that SDG 10 (Reduced Inequalities) and SDG 5 (Gender Equality) are not going to be top priorities—probably not SDG 1 (No Poverty), either. So in some cases, a country’s level of engagement reflects its political landscape.

    But sometimes, it also reflects the economic structure of the higher education system itself. In the U.S., where universities rely heavily on high tuition fees, rankings are all about attracting students. And the dominant ranking in that market is U.S. News & World Report—the 600-pound gorilla. If I were in their position, I’d focus on that, too, because it’s the ranking that brings in applications.

    In other parts of the world, though, rankings serve a different purpose. This ties back to our earlier discussion about different priorities in different regions. Take Indonesia, for example. There are over 4,000 universities in the country. If you’re an institution like ITS (Institut Teknologi Sepuluh Nopember), how do you stand out? How do you show that you’re different from other universities?

    For them, the Impact Rankings provided an opportunity to showcase the important work they’re doing—work that might not have been recognized in traditional rankings. And that’s something I’m particularly proud of with the Impact Rankings. Unlike the World University Rankings or the Teaching Rankings, it’s not just the usual suspects at the top.

    One of my favorite examples is Western Sydney University. It’s a fantastic institution. If you’re ever in Sydney, take the train out there. Stay on the train—it’s a long way from the city center—but go visit them. Look at the incredible work they’re doing, not just in sustainability but also in their engagement with Aboriginal and Torres Strait Islander communities. They’re making a real impact, and I’m so pleased that we’ve been able to raise the profile of institutions like Western Sydney—universities that might not otherwise get the recognition they truly deserve.

    Alex Usher: But you’re still left with the problem that many institutions that do really well in research rankings have, in effect, boycotted the Impact Rankings—simply because they’re not guaranteed to come first.

    A lot of them seem to take the attitude of, Why would I participate in a ranking if I don’t know I’ll be at the top?

    I know you initially faced that issue with LERU (the League of European Research Universities), and I guess the U.S. is still a challenge, with lower participation numbers.

    Do you think Times Higher Education will eventually crack that? It’s a tough nut to crack. I mean, even the OECD ran into the same resistance—it was the same people saying, Rankings are terrible, and we don’t want better ones.

    What’s your take on that?

    Duncan Ross: Well, I’ve got a brief anecdote about this whole rankings boycott approach. There’s one university—I’m not going to name them—that made a very public statement about withdrawing from the Times Higher Education World University Rankings. And just to be clear, that’s something you can do, because participation is voluntary—not all rankings are. So, they made this big announcement about pulling out. Then, about a month later, we got an email from their graduate studies department asking, Can we get a copy of your rankings? We use them to evaluate applicants for interviews. So, there’s definitely some odd thinking at play here. But when it comes to the Impact Rankings, I’m pretty relaxed about it. Sure, it would be nice to have Oxford or Harvard participate—but MIT does, and they’re a reasonably good school, I hear. Spiderman applied there, so it’s got to be decent. The way I see it, the so-called top universities already have plenty of rankings they can focus on. If we say there are 300 top universities in the world, what about the other 36,000 institutions?

    Alex Usher: I just want to end on a slightly different note. While doing some background research for this interview, I came across your involvement in DataKind—a data charity that, if I understand correctly, you founded. I’ve never heard of a data charity before, and I find the idea fascinating—intriguing enough that I’m even thinking about starting one here. Tell us about DataKind—what does it do?

    Duncan Ross: Thank you! So, DataKind was actually founded in the U.S. by Jake Porway. I first came across it at one of the early big data conferences—O’Reilly’s Strata Conference in New York. Jake was talking about how data could be used for good, and at the time, I had been involved in leadership roles at several UK charities. It was a light bulb moment. I went up to Jake and said, Let me start a UK equivalent! At first, he was noncommittal—he said, Yeah, sure… someday. But I just kept nagging him until eventually, he gave in and said yes. Together with an amazing group of people in the UK—Fran Bennett, Caitlin Thaney, and Stuart Townsend—we set up DataKind UK.

    The concept is simple: we often talk about how businesses—whether in telecom, retail, or finance—use data to operate more effectively. The same is true in the nonprofit sector. The difference is that banks can afford to hire data scientists—charities often can’t. So, DataKind was created to connect data scientists with nonprofit organizations, allowing them to volunteer their skills.

    Of course, for this to work, a charity needs a few things:

    1. Leadership willing to embrace data-driven decision-making.
    2. A well-defined problem that can be analyzed.
    3. Access to data—because without data, we can’t do much.

    Over the years, DataKind—both in the U.S. and worldwide—has done incredible work. We’ve helped nonprofits understand what their data is telling them, improve their use of resources, and ultimately, do more for the communities they serve. I stepped down from DataKind UK in 2020 because I believe that the true test of something successful is whether it can continue to thrive without you. And I’m happy to say it’s still going strong. I kind of hope the Impact Rankings continue to thrive at Times Higher Education now that I’ve moved on as well.

    Alex Usher: Yeah. Well, thank you for joining us today, Duncan.

    Duncan Ross: It’s been a pleasure.

Alex Usher: And it just remains for me to thank our excellent producers, Sam Pufek and Tiffany MacLennan. And you, our viewers, listeners, and readers for joining us today. If you have any questions or comments about today’s episode, please don’t hesitate to get in touch with us at [email protected]. Worried about missing an episode of the World of Higher Education? There’s a solution for that. Go to our YouTube page and subscribe. Next week, our guest will be Jim Dickinson. He’s an associate editor at Wonkhe in the UK, and he’s also maybe the world expert on comparative student politics. And he joins us to talk about the events in Serbia where the student movement is challenging the populist government of the day. Bye for now.

    *This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service.


  • Which colleges gained R1 status under the revamped Carnegie Classifications?

    Which colleges gained R1 status under the revamped Carnegie Classifications?


The American Council on Education on Thursday released the latest list of research college designations under the revamped Carnegie Classifications, labeling 187 institutions as Research 1. 

    The coveted R1 designation is given to universities with the highest levels of research activity. The number of colleges designated as R1 institutions in 2025 rose 28% compared with the last time the list was released, in 2022. 

    The updated list of research institutions is the first that ACE and the Carnegie Foundation for the Advancement of Teaching have released since they updated their methodology for the classifications. The new methodology was created in part to simplify a previously complex formula that left institutions fearful about losing their status. 

    “We hope this more modernized version of Carnegie Classifications will answer more questions in a more sophisticated way about institutions and their position in the ecosystem and will allow decisions to be made much more precisely by philanthropists, by governments, and by students and families,” Ted Mitchell, president of ACE, told Higher Ed Dive.

Thirty-two institutions moved from the second-highest research level in 2022 — commonly called Research 2, or R2 — to the R1 designation. That group includes Howard University, a historically Black college in Washington, D.C. The private college — which announced a record $122 million in research grants and contracts in 2022 — is the only HBCU with the designation. 

    Other colleges that moved from R2 to R1 include public institutions like the University of Idaho, University of North Dakota, University of Rhode Island, University of Vermont and the University of Wyoming, along with private colleges like Lehigh University, in Pennsylvania, and American University, in Washington, D.C. 

    Just one institution dropped from R1 to R2 status — the University of Alabama in Huntsville. 

    For universities to achieve R1 status under the new methodology, they must spend an average of $50 million on research and development each year and award 70 or more research doctorates. 

    R2 institutions need to spend an average of $5 million per year on research and award 20 or more research doctorates. 

Previously, the methodology was more complex. To keep the R1 and R2 groups of equal size, classifiers redrew the line between the two designations each cycle. They also weighed 10 different variables to determine R1 status. 

    “The previous methodology was opaque and I think led institutions to spend more time trying to figure out what the methodology actually was, perhaps distracting them from more important work,” said Timothy Knowles, president of the Carnegie Foundation. “Institutions that are close to the bar will just be much clearer about what they have to do to get over the bar.”

The latest crop of R1 institutions each spent an average of $748.4 million annually on research and development from fiscal 2021 to fiscal 2023. During that same period, they awarded an average of 297 research doctorates per year. 

    Texas led the list of states with the most R1 institutions, with 16. California and New York followed closely behind with 14 and 12 institutions, respectively. 

The 139 R2 institutions on this latest list each spent an average of $55.17 million annually over three years on research and development — above the $50 million spending threshold for R1 status. However, they awarded an average of only 49 research doctorates per year, short of the 70 required for R1. 

    This year also marks the first time the classifications have included a new designation: RCU, or research colleges and universities. The new category is meant to recognize institutions that regularly conduct research but don’t confer doctoral degrees. These colleges only need to spend more than an average of $2.5 million annually on research to be recognized as RCUs. 
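Taken together, the thresholds described above amount to a simple tiered rule. A minimal sketch of that logic in Python, using only the figures stated in the article; the function name, the "unclassified" fallback, and the exact boundary handling (e.g. whether $50 million is inclusive) are illustrative assumptions, not the Carnegie Foundation's actual implementation:

```python
def classify(avg_research_spend: float, avg_research_doctorates: float) -> str:
    """Approximate the revamped Carnegie research tiers.

    Inputs are averages over a three-year window, per the article:
      - R1:  at least $50M/year in R&D spending and 70+ research doctorates
      - R2:  at least $5M/year and 20+ research doctorates
      - RCU: more than $2.5M/year in research spending (no doctoral requirement)
    """
    if avg_research_spend >= 50_000_000 and avg_research_doctorates >= 70:
        return "R1"
    if avg_research_spend >= 5_000_000 and avg_research_doctorates >= 20:
        return "R2"
    if avg_research_spend > 2_500_000:
        return "RCU"
    return "unclassified"


# The article's averages fall where expected under these thresholds:
print(classify(748_400_000, 297))  # typical new R1 profile -> "R1"
print(classify(55_170_000, 49))    # typical R2 profile -> "R2"
```

Note how the R2 example illustrates the article's point: spending alone can clear the R1 bar, but the doctorate count keeps the institution at R2.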

    This year, 215 colleges and universities have reached that status. Many are master’s- and baccalaureate-level institutions. And some are four-year colleges with a “special focus,” such as medical schools and centers. 

    Two tribal colleges have also reached RCU status: Diné College, in Arizona, and Northwest Indian College, in Washington.
