Blog

  • Q&A with retiring National Student Clearinghouse CEO

    Ricardo Torres, the CEO of the National Student Clearinghouse, is retiring next month after 17 years at the helm. His last few weeks on the job have not been quiet.

    On Jan. 13, the clearinghouse’s research team announced they had found a significant error in their October enrollment report: Instead of freshman enrollment falling by 5 percent, it actually seemed to have increased; the clearinghouse is releasing its more complete enrollment report tomorrow. In the meantime, researchers, college officials and policymakers are re-evaluating their understanding of how 2024’s marquee events, like the bungled FAFSA rollout, influenced enrollment; some are questioning their reliance on clearinghouse research.

    It’s come as a difficult setback at the end of Torres’s tenure. He established the research center in 2010, two years after becoming CEO, and helped guide it to prominence as one of the most widely used and trusted sources of postsecondary student data.

    The clearinghouse only began releasing the preliminary enrollment report, called the “Stay Informed” report, in 2020 as a kind of “emergency measure” to gauge the pandemic’s impact on enrollment, Torres told Inside Higher Ed. The methodological error in October’s report, which the research team discovered this month, had been present in every iteration since. A spokesperson for the clearinghouse said that, after reviewing the methodology, the team found the “Transfer and Progress” report, which the clearinghouse has released every February since 2023, was also affected by the miscounting error; the 2025 report will be corrected, but the last two were skewed.

    Torres said the clearinghouse is exploring discontinuing the “Stay Informed” report entirely.

    Such a consequential snafu would put a damper on anyone’s retirement and threaten to tarnish their legacy. But Torres is used to a little turbulence: He oversaw the clearinghouse through a crucial period of transformation, from an arm of the student lending sector to a research powerhouse. He said the pressure on higher ed researchers is only going to get more intense in the years ahead, given the surging demand for enrollment and outcomes data from anxious college leaders and ambitious lawmakers. Transparency and integrity, he cautioned, will be paramount.

    His conversation with Inside Higher Ed, edited for length and clarity, is below.

    Q: You’ve led the clearinghouse since 2008, when higher ed was a very different sector. How does it feel to be leaving?

    A: It’s a bit bittersweet, but I feel like we’ve accomplished something during my tenure that can be built upon. I came into the job not really knowing about higher ed; it was a small company, a $13 million operation serving the student lending industry. We were designed to support their fundamental need to understand who’s enrolled and who isn’t, for the purposes of monitoring student loans. As a matter of fact, the original name of the organization was the National Student Loan Clearinghouse. When you think about what happened when things began to evolve and opportunities began to present themselves, we’ve done a lot.

    Q: Tell me more about how the organization has changed since the days of the Student Loan Clearinghouse.

    A: Frankly, the role and purpose of the clearinghouse and its main activities have not changed in about 15 years. The need was to have a trusted, centralized location where schools could send their information, which could then be used to validate loan status based on enrollments. The process, prior to the clearinghouse, was loaded with paperwork. The registrars out there now get an almost-PTSD effect when they think back to the time before the clearinghouse. If a student was enrolled in School A, transferred to School B and had a loan, by the time everybody figured out the student was enrolled someplace else, the loan was in default. We were set up to fix that problem.

    What made our database unique at that time was that when a school sent us enrollment data, they had to send all of the learners because they actually didn’t know who had a previous loan and who didn’t. That allowed us to build a holistic, comprehensive view of the whole lending environment. So we began experimenting with what else we could do with the data.

    Our first observation was how great a need there was for this data. Policy formulation at almost every level—federal, state, regional—for improving learner outcomes lacked the real-time data to figure out what was going on. Still, democratizing the data alone was insufficient because you need to convert that insight into action of some kind that is meaningful. What I found as I was meeting schools and individuals was that the ability and the skill sets required to convert data to action were mostly available in the wealthiest institutions. They had all the analysts in the world to figure out what the hell was going on, and the small publics were just scraping by. That was the second observation, the inequity.

    The third came around 2009 to 2012, when there was an extensive effort to make data an important part of decision-making across the country. The side effect of that, though, was that not all the data sets were created equal, which made answering questions about what works and what doesn’t that much more difficult.

    The fourth observation, and I think it’s still very relevant today, is that the majority of our postsecondary constituencies are struggling to keep up with the increasing demands they’re getting from regulators: the feds, the states, their accreditors. The demand for reports is increasing. The demand for feedback is increasing. Your big institutions, your flagships, might see this as a pain in the neck, but I would suggest that your smaller publics and smaller private schools are asking, “Oh my gosh, how are we even going to do this?” Our data helps.

    Q: What was the clearinghouse doing differently in terms of data collection?

    A: From the postsecondary standpoint, our first set of reports that we released in 2011 focused on two types of learners that at most were anecdotally referred to: transfer students and part-time students. The fact that we included part-time students, which [the Integrated Postsecondary Education Data System] did not, was a huge change. And our first completion report, I believe, said that over 50 percent of baccalaureate recipients had some community college in their background. That was eye-popping for the country to see and really catalyzed a lot of thinking about transfer pathways.

    We also helped spur the rise of these third-party academic-oriented organizations like Lumina and enabled them to help learners by using our data. One of our obligations as a data aggregator was to find ways to make this data useful for the field, and I think we accomplished that. Now, of course, demand is rising with artificial intelligence; people want to do more. We understand that, but we also think we have a huge responsibility as a data custodian to do that responsibly. People who work with us realize how seriously we take that custodial relationship with the data. That has been one of the hallmarks of our tenure as an organization.

    Q: Speaking of custodial responsibility, people are questioning the clearinghouse’s research credibility after last week’s revelation of the data error in your preliminary enrollment report. Are you worried it will undo the years of trust building you just described? How do you take accountability?

    A: No. 1: The data itself, which we receive from institutions, is reliable, current and accurate. We make best efforts to ensure that it accurately represents what the institutions have within their own systems before any data is merged into the clearinghouse data system.

    When we first formed the Research Center, we had to show how you can get from the IPEDS number to the clearinghouse number and show people our data was something they could count on. We spent 15 years building this reputation. The key to any research-related error like this is, first, you have to take ownership of it and hold yourself accountable. As soon as I found out about this we were already making moves to [make it public]—we’re talking 48 hours. That’s the first step in maintaining trust.

    That being said, there’s an element of risk built into this work. Part of what the clearinghouse brings to the table is the ability to responsibly advance the dialogue about what’s happening in education and student pathways. There are things happening out there, such as students stopping out and coming back many years later, that basically defy conventional wisdom. And so the risk in all of this is that you shy away from that work and decide to stick to the knitting. But your obligation is, if you’re going to report those things, to be very transparent. As long as we can thread that needle, I think the clearinghouse will play an important role in helping to advance the dialogue.

    We’re taking this very seriously and understand the importance of the integrity of our reports considering how the field is dependent on the information we provide. Frankly, one of the things we’re going to take a look at is, what is the need for the preliminary report at the end of the day? Or do we need to pair it with more analysis—is it just enough to say that total enrollments are up X or down Y?

    Q: Are you saying you may discontinue the preliminary report entirely?

    A: That’s certainly an option. I think we need to assess the field’s need for an early report—what questions are we trying to answer and why is it important that those questions be answered by a certain time? I’ll be honest; this is the first time something like this has happened, where it’s been that dramatic. That’s where the introspection starts, saying, “Well, this was working before; what the heck happened?”

    When we released the first [preliminary enrollment] report [in 2020], we thought it’d be a one-time thing. Now, we’ve issued other reports that we thought were going to be one-time and ended up being a really big deal, like “Some College, No Credential.” We’re going to continue to look for opportunities to provide those types of insights. But I think any research entity needs to take a look at what you’re producing to make sure there’s still a need or a demand, or maybe what you’re providing needs to pivot slightly. That’s a process that’s going to be undertaken over the next few months as we evaluate this report and other reports we do.

    Q: How did this happen, exactly? Have you found the source of the imputation error?

    A: The research team is looking into it. For this particular report, in order to ensure we don’t extrapolate this to a whole bunch of other things, you just need to make sure you’ve got your bases covered analytically.

    There was an error in how we imputed a particular category of dual-enrolled students versus freshmen. But if you look at the report, the total number of learners wasn’t impacted by that. These preliminary reports were designed to meet a need after COVID, to understand what the impact was going to be. We basically designed a report on an emergency basis, and by default, when you don’t have complete information, there’s imputation. There’s been a lot of pressure on getting the preliminary fall report out. That being said, you learn your lesson—you gotta own it and then you keep going. This was very unfortunate, and you can imagine the amount of soul searching to ensure that this never happens again.

    Q: Do you think demand for more postsecondary data is driving some irresponsible analytic practices?

    A: I can tell you that new types of demands are going to be put on student success data, looking at nondegree credentials, looking at microcredentials. And there’s going to be a lot of spitballing. Just look at how people are trying to calculate ROI right now; I could talk for hours about the ins and outs of ROI methodology. For example, if a graduate makes $80,000 after graduating but transferred first from a community college, what kind of attribution does the community college get for that salary outcome versus the four-year school? Hell, it could be due to a third-party boot camp done after earning a degree. Research on these topics is going to be full of outstanding questions.

    Q: What comes next for the clearinghouse’s research after you leave?

    A: I’m excited about where it’s going. I’m very excited about how artificial intelligence can be appropriately leveraged, though I think we’re still trying to figure out how to do that. I can only hope that the clearinghouse will continue its journey of support. Because while we don’t directly impact learner trajectories, we can create the tools that help people who support learners every year impact those trajectories. Looking back on my time here, that’s what I’m most proud of.

  • The rise of multidisciplinary research stimulated by AI

    AI research tools such as OpenAI o1 have now reached test-score levels that meet or exceed the scores of those who hold Ph.D. degrees in the sciences and a number of other fields. These generative AI tools utilize large language models that include research and knowledge across many disciplines. Increasingly, they are used for research project ideation and literature searches. The tools are generating insights for researchers that they may not have been exposed to in years gone by.

    The field of academe has long emphasized the single-discipline research study. We offer degrees in single disciplines; faculty members are granted appointments most often in only one department, school or college; and for the most part, our peer-reviewed academic journals are in only one discipline, although sometimes they welcome papers from closely associated or allied fields. Dissertations are most commonly based in a single discipline. Although research grants are more often multidisciplinary and prioritize practical solution-finding, a large number remain focused on one field of study.

    The problem is that as we advance our knowledge and application expertise in one field, we can become unaware of important developments in other fields that directly or indirectly affect work in our chosen discipline. Innovation is not always a single-purpose, straight-line advance. More often today, innovation comes from integrating knowledge across disparate fields such as sociology, engineering, ecology and environmental science, and the expanding understanding of quantum physics and quantum computing. Until recently, we have not had an efficient way to identify and integrate knowledge and perspectives from fields that, at first glance, seem unrelated.

    AI futurist and innovator Thomas Conway of Algonquin College of Applied Arts and Technology addresses this topic in “Harnessing the Power of Many: A Multi-LLM Approach to Multidisciplinary Integration”:

    “Amidst the urgency of increasingly complex global challenges, the need for integrative approaches that transcend traditional disciplinary boundaries has never been more critical. Climate change, global health crises, sustainable development, and other pressing issues demand solutions from diverse knowledge and expertise. However, effectively combining insights from multiple disciplines has long been a significant hurdle in academia and research.

    “The Multi-LLM Iterative Prompting Methodology (MIPM) emerges as a transformative solution to this challenge. MIPM offers a structured yet flexible framework for promoting and enhancing multidisciplinary research, peer review, and education. At its core, MIPM addresses the fundamental issue of effectively combining diverse disciplinary perspectives to lead to genuine synthesis and innovation. Its transformative potential is a beacon of hope in the face of complex global challenges.”
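
    Conway describes the methodology at a conceptual level. As a rough schematic only (my own illustration under stated assumptions, not Conway’s published procedure), a multi-LLM iterative loop might look like the sketch below, with a placeholder ask_model function standing in for whatever API or locally hosted model plays each disciplinary role:

        # A schematic of multi-LLM iterative prompting in the spirit of MIPM
        # (an illustration, not the published procedure). Each disciplinary
        # "expert" answers, revises after reading the other experts' answers,
        # and a final pass synthesizes the result.

        def ask_model(system_role: str, prompt: str) -> str:
            """Placeholder: wire this to any LLM API or locally hosted model."""
            raise NotImplementedError

        DISCIPLINES = {
            "ecology": "You are an ecologist.",
            "economics": "You are a development economist.",
            "engineering": "You are a systems engineer.",
        }

        def multidisciplinary_answer(question: str, rounds: int = 2) -> str:
            # Round 0: each expert answers independently.
            answers = {d: ask_model(role, question) for d, role in DISCIPLINES.items()}
            for _ in range(rounds):
                # Iterative rounds: each expert revises in light of the others.
                answers = {
                    d: ask_model(
                        DISCIPLINES[d],
                        question
                        + "\n\nOther disciplines said:\n"
                        + "\n".join(f"[{o}] {a}" for o, a in answers.items() if o != d)
                        + "\n\nRevise your answer, integrating what is useful.",
                    )
                    for d in DISCIPLINES
                }
            # Final synthesis across the converged perspectives.
            return ask_model(
                "You are a neutral editor.",
                "Synthesize these disciplinary answers into one integrated response:\n"
                + "\n".join(f"[{d}] {a}" for d, a in answers.items()),
            )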

    Even as we integrate AI research tools and techniques, we, ourselves, and our society at large are changing. Many of the common frontier language models powering research tools are multidisciplinary by nature, although some are designed with strengths in specific fields. Their responses to our prompts are multidisciplinary. The response to our iterative follow-up prompts can take us to fields and areas of expertise of which we were not previously aware. The replies are not coming solely from a single discipline expert, book or other resource. They are coming from a massive language model that spans disciplines, languages, cultures and millennia.

    As we integrate these tools, we too will naturally become aware of new and emerging perspectives, research and developments generated by fields that are outside our day-to-day knowledge, training and expertise. This will expand our perspectives beyond the fields of our formal study. As the quality of our AI-based research tools expands, their impact on research cannot be overstated. It will lead us in new directions and broader perspectives, uncovering the potential for new knowledge, informed by multiple disciplines. One recent example is Storm, a brainstorming tool developed by the team at Stanford’s Open Virtual Assistant Lab (OVAL):

    “The core technologies of the STORM & Co-STORM system include support from Bing Search and GPT-4o mini. The STORM component iteratively generates outlines, paragraphs, and articles through multi-angle Q&A between ‘LLM experts’ and ‘LLM hosts.’ Meanwhile, Co-STORM generates interactive dynamic mind maps through dialogues among multiple agents, ensuring that no information needs of the user are overlooked. Users only need to input an English topic keyword, and the system can generate a high-quality long text that integrates multi-source information, similar to a Wikipedia article. When using the STORM system, users can freely choose between STORM and Co-STORM modes. Given a topic, STORM can produce a structured high-quality long text within 3 minutes. Additionally, users can click ‘See BrainSTORMing Process’ to view the brainstorming process of different LLM roles. In the ‘Discover’ section, users can refer to articles and chat examples generated by other scholars, and personal articles and chat records can also be found in the sidebar ‘My Library.’”

    More about Storm is available at https://storm.genie.stanford.edu/.

    One of the concerns raised by skeptics at this point in the development of these research tools is the security of prompts and results. Few are aware of the opportunities presented by air-gapped or closed systems, or even ChatGPT’s temporary chats. In the case of OpenAI, you can start a temporary chat by tapping the version of ChatGPT you’re using at the top of the app and selecting “Temporary chat.” I commonly do this when using Ray’s eduAI Advisor. OpenAI says that in temporary chat mode results “won’t appear in history, use or create memories, or be used to train our models. For safety purposes, we may keep a copy for up to 30 days.” We can anticipate that other providers will offer similar protections. This may provide adequate security for many applications.
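
    For developers working against OpenAI’s API rather than the consumer app, there is a rough analogue at the request level. Here is a minimal sketch, assuming the official openai Python SDK and an API key in the environment; note that the store flag, which controls whether a completion is retained for OpenAI’s distillation and evaluation tooling, is a separate mechanism from the app’s temporary-chat toggle:

        # Minimal sketch: assumes the official openai Python SDK and an
        # OPENAI_API_KEY environment variable. store=False asks OpenAI not
        # to retain this completion for its distillation/evals tooling --
        # an API-side cousin of the app's "temporary chat," not the same feature.
        from openai import OpenAI

        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Outline multidisciplinary angles on food security."}],
            store=False,  # do not store this request/response pair
        )
        print(response.choices[0].message.content)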

    Further security can be provided by installing a stand-alone instance of an LLM and its weights on an air-gapped computer that remains completely disconnected from the internet or any other network, ensuring a very high level of protection. Small and medium-size language models are producing impressive results, approaching and in some cases exceeding frontier-model performance while keeping all data local and offline; a minimal local-inference sketch follows the excerpt below. For example, last year Microsoft introduced a line of SLM and medium-size models:

    “Microsoft’s experience shipping copilots and enabling customers to transform their businesses with generative AI using Azure AI has highlighted the growing need for different-size models across the quality-cost curve for different tasks. Small language models, like Phi-3, are especially great for:

    • Resource constrained environments including on-device and offline inference scenarios
    • Latency bound scenarios where fast response times are critical.
    • Cost constrained use cases, particularly those with simpler tasks.”
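
    As a concrete illustration of the air-gapped option, here is a minimal sketch of running one of the Phi-3 models entirely on local hardware with the Hugging Face Transformers library (a recent version that includes native Phi-3 support). It assumes the weights were cached in advance, downloaded once while connected or carried over by removable media for a true air gap, and the prompt is hypothetical:

        # Minimal local-inference sketch with Hugging Face Transformers.
        # Weights must already be in the local cache; set HF_HUB_OFFLINE=1
        # so the library never attempts a network call.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "microsoft/Phi-3-mini-4k-instruct"  # one of Microsoft's Phi-3 SLMs
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)

        prompt = "List three research fields relevant to sustainable farming."
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=150)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))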

    In the near term we will see turnkey private search applications that offer even more impressive results. Work continues on rapidly improving multidisciplinary responses across an ever-growing number of pressing research topics.

    The ever-evolving AI research tools are now providing us with responses from multiple disciplines. These results will lead us to engage in more multidisciplinary studies that will become a catalyst for change across academia. Will you begin to consider cross-discipline research studies and engage your colleagues from other fields to join you in research projects?

  • Laken Riley Act passes Senate

    The House is preparing to take up the Laken Riley Act later this week after the Senate passed the bill Monday, Politico reported.

    Twelve Democrats joined all of the upper chamber’s Republicans in voting for the immigration bill, named for a 22-year-old woman killed by an undocumented immigrant in Georgia last year. Immigration policy experts say the bill could have consequences for international students applying to study in the U.S.

    The bill would primarily force harsher detention policies for undocumented immigrants charged with crimes, but it also expands the power of state attorneys general, allowing them to sue the federal government and seek sweeping bans on visas for nationals of countries that won’t take back deportees.

    The Department of Homeland Security has said the bill would require billions of dollars in additional funding to enforce.

    The legislation now goes back to the House, which passed a similar but not identical bill earlier this month. If it passes the House a second time, it would then land on President Donald Trump’s desk, providing an early win on one of his highest-priority issues, immigration.

  • chief education solutions officer at Michigan

    James DeVaney and the Center for Academic Innovation at the University of Michigan are no strangers to this community. James has a number of titles at U-M, including special adviser to the president, associate vice provost for academic innovation and founding executive director of the Center for Academic Innovation. Today, I’m talking to James about a new leadership role he is recruiting for at CAI, that of the chief education solutions officer.

    Q: What is the university’s mandate behind this role? How does it help align with and advance the university’s strategic priorities?

    A: First of all, thank you for the opportunity to share more about this exciting new position. I’m thrilled about the potential of this role and the chance to welcome a new colleague to the Center for Academic Innovation—an extraordinary organization that I care deeply about—who will join us in shaping the future of education.

    The inaugural chief education solutions officer (CESO) is pivotal to CAI’s mission to collaborate across campus and around the world to create equitable, lifelong educational opportunities for learners everywhere. By helping CAI deliver offerings that are learner-centered, research-driven, scalable and sustainable, the CESO will directly support the University of Michigan’s Vision 2034, particularly the impact area of life-changing education.

    This role is designed for a dynamic leader ready to solve organizational learning and workforce development challenges while driving growth through innovative, impactful solutions. By developing scalable and sustainable educational models, the CESO will ensure U-M remains at the forefront of lifelong learning and talent development on a global scale.

    The CESO is not just about executing current strategies—it’s a leadership role charged with helping to forge a bold new path for education. By addressing emerging trends like workforce transformation, AI and the growing demand for upskilling, this role will help learners and organizations thrive in a rapidly evolving world. The CESO’s work will empower learners and position U-M as a leader in education innovation for generations to come.

    Q: Where does the role sit within the university structure? How will the person in this role engage with other units and leaders across campus?

    A: The CESO will report directly to me in my capacity as the founding executive director of the Center for Academic Innovation and will be an additional key member of the senior leadership team at CAI. This role sits at the intersection of education innovation, strategic partnerships and business development, ensuring seamless collaboration between external stakeholders and CAI’s internal teams.

    The CESO will work closely with units that already engage with industry and organizational partners and schools and colleges across campus that extend their reach through innovative programs and initiatives. Through these collaborations, the CESO will help identify and deliver innovative solutions to meet workforce development needs and support sustainable partnerships with organizations looking to support their current and future employees in a rapidly changing economy.

    For example, the CESO might work with a school to design a custom program for an industry partner, collaborate with units across campus to expand U-M’s impact in key markets, help an organization to effectively utilize Michigan Online offerings or integrate CAI’s expertise into new initiatives that benefit learners and organizations alike. This role is about connecting ideas, people and resources to drive impact. By aligning CAI’s innovative capabilities with partner needs, the CESO ensures U-M’s resources create transformative outcomes both on campus and beyond.

    Q: What would success look like in one year? Three years? Beyond?

    A: Success in this role is all about creating momentum—whether by building early partnerships, driving measurable growth or laying the groundwork for transformative initiatives. Here’s what we envision at each stage of this journey:

    In one year: The CESO will have established a strong foundation for growth by building early partnerships with industry leaders, meeting key growth targets and launching initial programs that deliver measurable value for learners and organizations. This first year is about setting the stage—building relationships, aligning CAI’s capabilities with external needs and creating momentum for the future. Importantly, the CESO will work alongside a really talented senior leadership team. Year one is also about creating strong connections within this group, building trust and finding ways to support each other.

    In three years: The CESO will have significantly scaled CAI’s impact, with a portfolio of partnerships that reflect innovative, sustainable approaches to workforce development and lifelong learning. Internally, we’ll see streamlined systems for managing partnerships, delivering programs and providing exemplary relationship support. Externally, CAI will be recognized as a trusted leader in educational solutions that address real-world challenges through highly relevant programs that build on interdisciplinary breadth of excellence.

    Beyond three years: Long-term success means driving transformative innovation in education—at both the individual and organizational levels. The CESO’s work will have deepened CAI’s reputation for empowering learners everywhere while also positioning U-M as a leader in lifelong learning and workforce development. The legacy of this role will be an ecosystem of partnerships and programs that inspire and uplift learners across the globe.

    At every stage, success in this role is about creating meaningful, lasting impact for learners and partners. That said, I’m looking to hire a colleague who will not only embrace this vision of success but also challenge it—pushing us to explore uncharted possibilities and reach new heights we haven’t yet imagined.

    Q: What kinds of future roles would someone who took this position be prepared for?

    A: The CESO role is an incredible opportunity for someone looking to advance their career in business development, partnership leadership or workforce innovation—whether within higher education or in related industries.

    This role provides direct experience in managing high-impact partnerships, driving revenue growth and designing innovative learning solutions for diverse audiences. It’s a unique combination of strategic thinking, relationship management and educational innovation that builds a strong foundation for future leadership roles.

    The skills developed in this position—including expertise in lifelong learning, workforce transformation and sustainable business growth—are highly transferable to roles in education, industry or even global organizations. Whether leading similar initiatives at another institution or shaping workforce strategies for a global enterprise, the CESO will leave this role with the tools to make an even bigger impact.

    This position enhances vital leadership skills, such as building trust with stakeholders, navigating complex organizational challenges and creating scalable solutions. It’s a perfect launchpad for individuals ready to shape the future of education at the intersection of academia and industry.

    Joining this team means stepping into a vibrant, forward-thinking environment where your contributions will be valued, your ideas will have impact and you’ll have the space to grow, innovate and truly make a difference.

    I’m truly excited to welcome a dynamic new partner to our team—could it be you?

  • Rethinking the value of internationalisation in higher education

    Yesterday, we published a piece by SOAS Vice-Chancellor Adam Habib and Lord Dr. Michael Hastings, Chair of the SOAS Board of Trustees, on equitable transnational partnerships. In today’s piece, Dana Gamble, Policy Manager (Skills, Innovation and International) at GuildHE and Dr Esther Wilkinson, Director of Innovation and Learning at Royal Agricultural University and Chair of the GuildHE International Network, look again at international partnerships and how institutions can be proactive and productive on the international stage.

    It is not news that the higher education sector’s relationship with international activity is strained, from recruiting students to delivering research and innovation partnerships with institutions overseas. While significant financial pressures have built up through institutional reliance on international student fees, this is far from the only headwind the sector currently faces on international delivery. Recent political motivations and wider geopolitical factors have contributed to churn in visa policy and to delayed or scrapped funding arrangements such as Horizon Europe and the European Regional Development Fund. Ultimately, this landscape has led institutions to prioritise short-term partnerships to solve long-term problems. Combined, these forces are affecting the UK’s global reputation as a competitive destination for education and research.

    Looking back to inform the future

    It is important to reflect on and scrutinise how we got here. In a context where the UK has the lowest levels of public spending on tertiary education in the OECD, the UK’s higher education institutions have strategically used international activity to fill financial shortfalls. Whether it is international student fees filling deficits in domestic teaching and research income or transnational delivery increasing income without the overheads, these interventions have typically been siloed ventures designed specifically to fill gaps.

    With this approach running out of steam for many, institutions are turning towards responsible, holistic and trusted partnerships with international institutions that contribute to multiple, mutual aims. In the long term, this approach should foster a steadier international partnership environment that does not rely on quick-fix activity to shoulder the UK’s funding deficits. While many higher education institutions have embraced this type of internationalisation, specialist and vocational institutions often already excel here, creating skills-based, mutually beneficial partnerships through their strong links with industry and communities.

    Specialist and vocationally-focused institutions have international reach and relevance

    These institutions often operate in sectors where local and global contexts are deeply intertwined. Whether addressing global environmental challenges, healthcare crises, or creative and technological innovation, a responsible international partnership should consider not only the exchange of knowledge but also the socio-economic and environmental implications of that exchange.

    By focusing on real-world skills and sector-specific expertise, these institutions bring a practical dimension to international collaborations that goes beyond traditional learning, innovation and research, offering valuable lessons on how to engage globally to tackle economic and social issues with purpose.

    RAU shows how holistic international collaborations can deliver impact

    GuildHE member the Royal Agricultural University (RAU) has a long history of establishing, nurturing and successfully developing long-term strategic partnerships. Agriculture, climate change and food security are global issues that require international collaboration to address critical challenges across rural development, land management and sustainable farming practices.

    RAU has multiple partners, including in China, Uzbekistan, the United Arab Emirates (Sharjah) and Ukraine. It is one of the most trusted UK education providers in China and has been awarded the highest accolade by the Chinese Ministry of Education for its provision, the only specialist UK university to hold this status in China. In Uzbekistan, RAU is a founding partner of the International Agricultural University, an institution jointly established with the Uzbek Government to ensure students have access to high-quality education and can contribute to the economic, social and cultural development of the country. RAU’s research, training, exchange and teaching partnerships with Sumy National Agrarian University in Ukraine have steadily matured. The partnership has led various international projects, such as an evaluation of the damage to Ukrainian soil caused by the current conflict, which has helped ensure the long-term viability of the country’s agricultural economy. RAU has also supported Sharjah in establishing the University of Al Dhaid, enabling capacity building and the development and delivery of education in sustainable agriculture, a testament to the flexibility and agility that RAU’s size affords.

    RAU takes particular pride in the breadth and depth of its global relationships, with a synergistic and strategically aligned approach. Through such broad, multifaceted collaborations, RAU provides expertise and knowledge to help develop global agricultural sectors while enriching the educational experience of its students. As demonstrated in this example, vocational and specialist institutions are making particular efforts to establish, maintain and refresh international partnerships for longer-term benefits, focusing on multi-pronged international collaboration, enhancing cross-cultural understanding, and driving global innovation.

    Expanding international partnerships takes work but can pay dividends

    The internationalisation of higher education will always be shaped by global politics; education, work and skills policy; and the financial state of the sector. To reach stable waters amid these domestic and global pressures, higher education institutions need to refocus on their institutional strengths and become proactive internationally. This can only be achieved, however, through supportive government policy that does not continue to discourage the sector from investing in sustainable, long-term and effective partnerships. That predominantly means establishing financial security across the full diversity of the sector to protect the foundation of specialist industries, the future of the public sector and student choice – both domestically and internationally.

    Additionally, reform is needed to the research and innovation system so it purposefully generates economic and social impact for all sectors, and on all scales. And finally, properly resourced, effective student and staff exchange programmes need to be developed, with intention, to provide equality of opportunity for students at every institution.

    With this government’s plans to link immigration policy more closely to skills policy and labour market pressures through Skills England, as well as the ambitions of the industrial strategy, higher education needs to be acknowledged as central to future economic growth through its role in developing the workforce, diffusing applied research and leading global innovation. Given this critical role, a holistic approach to partnerships will be vital to the effective implementation of these new strategies, and to helping maintain the UK’s reputation as a global leader in learning, innovation and research.

  • Assume the Best: Trust-Based Strategies for Empowering College Students – Faculty Focus

  • Making SEISA official | Wonkhe

    Developing a new official statistic is a process that can span several years.

    Work on SEISA began in 2020, and this blog outlines the journey to official statistics designation and some key findings that have emerged along the way. Let’s first recap why HESA needed a new deprivation index.

    The rationale behind pursuing this project stemmed from an Office for Statistics Regulation (OSR) report which noted that post-16 education statistics lacked a UK-wide deprivation metric. Under the Code of Practice for Statistics, HESA are required to innovate and fill identified statistical gaps that align with our area of specialism.

    Fast forward almost six years and the UK Statistics Authority have reiterated the importance of UK-wide comparable statistics in their response to the 2024 Lievesley Review.

    Breaking down barriers

    While higher education policy may be devolved, all nations have ambitions to ensure there is equal opportunity for all. Policymakers and the higher education sector agree that universities have a pivotal role in breaking down barriers to opportunity and that relevant data is needed to meet this mission. Having UK-wide comparable statistics relating to deprivation based on SEISA can provide the empirical evidence required to understand where progress is being made and for this to be used across the four nations to share best practice.

    In developing SEISA, we referred to OSR guidance to produce research that examines the full value of a new statistic before it is classed as an ‘official statistic in development’. We published a series of working papers in 2021 and 2022, with the latter including comparisons to the Indices of Deprivation (the main area-based measure utilised among policymakers at present). We also illustrated why area-based measures remain useful in activities designed to promote equal opportunity.

    Our research indicated that the final indexes derived from the Indices of Deprivation in each nation were effective at catching deprived localities in large urban areas, such as London and Glasgow, but that SEISA added value by picking up deprivation in towns and cities outside of these major conurbations. This included places located within former mining, manufacturing and industrial communities across the UK, like Doncaster or the Black Country in the West Midlands, as well as Rhondda and Caerphilly in Wales. The examples below come from our interactive maps for SEISA using Census 2011 data.

    An area of Doncaster that lies within decile 4 of the English Index of Multiple Deprivation (2019)

    An area of Caerphilly that lies within decile 5 of the Welsh Index of Multiple Deprivation (2019)

    We also observed that SEISA tended to capture a greater proportion of rural areas in the bottom quintile when compared with the equivalent quintile of the Index of Multiple Deprivation in each nation.

    Furthermore, in Scotland, the bottom quintile of the Scottish Index of Multiple Deprivation does not contain any locations in the Scottish islands, whereas the lowest quintile of SEISA covers all council areas in the country. These points are highlighted by the examples below from rural Shropshire and the Shetland Islands, which also show the benefit that SEISA offers by being based on smaller areas (in terms of population size) than those used to form the Indices of Deprivation. That is, drawing upon a smaller geographic domain enables pockets of deprivation to be identified that are otherwise surrounded by less deprived neighbourhoods.

    A rural area of Shropshire that is placed in decile 5 of the English Index of Multiple Deprivation (2019)

    An area of the Shetland Islands that is within decile 7 of the Scottish Index of Multiple Deprivation (2020)

    Becoming an official statistic

    Alongside illustrating value, our initial research had to consider data quality and whether our measure correlated with deprivation as expected. Previous literature has highlighted how the likelihood of experiencing deprivation increases for households that are:

    • On a low income
    • Living in social housing
    • A lone-parent family
    • In poor health

    Examining how SEISA was associated with these variables gave us the assurance that it was ready to become an ‘official statistic in development’. As we noted when we announced our intention for the measure to be assigned this badge for up to two years, a key factor we needed to establish during this time period was the consistency in the findings (and hence methodological approach) when Census 2021-22 data became available in Autumn 2024.

    Recreating SEISA using the latest Census records across all nations, we found there was a high level of stability in the results between the 2011 and 2021-22 Census collections. For instance, our summary page shows the steadiness in the associations between SEISA and income, housing, family composition and health, with an example of this provided below.

    The association between SEISA and family composition in Census 2011 and 2021-22

    Over the past twelve months, we’ve been gratified to see applications of SEISA in the higher education sector and beyond. We’ve had feedback on how practitioners are using SEISA to support their widening participation activities in higher education and interest from councils working on equality of opportunity in early years education. The measure is now available via the Local Insight database used by local government and charities to source data for their work.

    It’s evident therefore that SEISA has the potential to help break down barriers to opportunity across the UK and is already being deployed by data users to support their activities. The demonstrable value of SEISA and its consistency following the update to Census 2021-22 data mean that we can now remove the ‘in development’ badge and label SEISA as an official statistic.

    View the data for SEISA based on the Census 2021-22 collection, alongside a more detailed insight into why SEISA is now an official statistic, on the HESA website.

    Please feel free to submit any feedback you have on SEISA to [email protected].

    Read HESA’s latest research releases and, if you would like to be kept updated on future publications, sign up to our mailing list.

  • Politics and international relations has grown over the last decade – but unevenly

    The world seems an uncertain place to live in as we begin 2025: growing levels of conflict and instability across the globe, democratic institutions under pressure, and civic infrastructure being tested by the raging unpredictability of the natural world. Has there ever been a more appropriate time for people, young or old, to study politics? Has there ever been a time when we have been more in need of the expertise of political scientists, theorists, and scholars of international relations to help us make sense of this complex and changing world?

    It feels timely, therefore, that we at the British Academy are publishing a report on the provision of politics and international relations in the UK. This report is the latest in a series of state of the discipline reports from our SHAPE Observatory. It aims to take the temperature of the discipline by examining the size and shape of the sector and observing key trends over the past decade or so.

    Going for growth

    One of the key themes that emerged from our report was expansion. Compared to 2011–12, there has been a 20 per cent increase in first degree students and a 41 per cent increase in postgraduate taught students taking politics and international relations. The number of academic staff has also increased by 52 per cent since 2012–13.

    With this expansion has come diversification, both among students and staff. There are now more female students studying this traditionally male-dominated subject and the proportion of first degree students from minoritised ethnic backgrounds has increased by eight percentage points since 2011–12. Over the same period, the number of international students from outside the EU has more than doubled. The workforce is also becoming more international, with notable increases in staff from outside Europe and North America.

    All of this is positive, as it shows there is still strong demand for the discipline in the UK and that both students and scholars want to come here from around the world to work and study. In interviews we conducted with academic staff, there was a strong emphasis on the positive effects of this diversification. It was argued that the learning and research environment is enriched by bringing a range of perspectives and backgrounds onto campus.

    Uneven development

    But when you scratch beneath the surface of the aggregate numbers, another picture starts to emerge. When we looked at student numbers by institution, it became clear that changes have been highly uneven across the sector since 2011–12. A stark difference was observable, for example, between the average change in student numbers at Russell Group institutions and the rest of the sector:

    Group           Number of institutions   Mean change in student numbers
    Russell Group   23                       +320.2
    Pre-92 other    39                       -24.7
    Post-92         51                       -16.8

    Mean change (FPE) in first degree student numbers, 2011–12 to 2021–22

    So, if this is a story of expansion, it is really a story of a select few institutions that have expanded remarkably, while the rest of the sector has seen its share of politics and international relations students dwindle over the past few years.

    This pattern will be familiar to some at the institutional level, particularly in England and Wales, where caps on student numbers have been removed. Yet the overall institutional picture can mask ups and downs in recruitment within the same university, along with any restructuring of departments and course portfolios. Isolating changes in student numbers for a single disciplinary area is therefore very revealing.

    Growing pains

    So what are the implications of these changes? More students are engaging with the discipline, and in England and Wales more are able to attend their first-choice destination. Those working within departments at research-intensive universities may argue that the expansion of their department has preserved a degree of pluralism in research activity and practice. The UK has a proud history of political theory, for example, and this sub-field continues to carve out a notable space in the disciplinary landscape – something not mirrored in other leading research nations.

    However, the divergence in recruitment has clearly had a destabilising impact on some politics departments. The redistribution of students across the UK has real-world consequences, leading in some instances to internal restructuring and even departmental closures. Amid gloomy forecasts for the sector, mounting financial pressures, and announcements of course closures in all manner of disciplines, the risk of an uneven balance of course provision has come into sharp focus.

    Mind the gap(s)

    It is in this context that the British Academy recently launched a new map showing changing SHAPE provision in UK higher education over a decade. The picture for politics and international relations is broadly positive, with good coverage across the country at least at the regional level. However, when you exclude students with prior qualifications above the average tariff for the discipline, there is a notable absence of people studying single honours degrees in politics and international relations in the central belt of Scotland.

    The question of access to the discipline is an important one that deserves more detailed exploration at a local level. Many of the institutions that have seen a drop in their student intake are the same universities that would argue they are most adept at reaching local communities where access to higher education is lowest. Moreover, they would likely contend that they are best placed to support these students to succeed at university.

    In an era where more of the learning experience is being digitised and moved online, and where the numbers of commuter students are increasing, perhaps the concentration of politics and international relations students at fewer universities is less of an issue. Institutions are being asked to do more with less, and from a technocratic perspective, this can create economies of scale. Whether this is in the long-term interest of students is questionable. Moreover, ever-concentrating provision does seem antithetical to the notion of addressing regional inequalities, and it runs counter to the government’s ambitions to boost local R&D.

    A question of sustainability

    The question that emerges is not whether this is a problem, but whether it is sustainable.

    There is a great deal of discussion about how current disruption in higher education will spill over into the research base. When we interviewed those working in the field, the diversity of the politics and international relations sector was identified as a key strength, and as one of the elements that contributes to its enviable reputation around the globe. Once a department is gone, it is very hard to reestablish.

    In these volatile times, facing global challenges, politics and international relations has so much to offer both students and wider society. Let’s hope the discipline continues to thrive here in the UK for many years to come.

  • Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)

    This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.

    The plot contours are easy. A PhD student at the University of Minnesota was accused of using AI on a required pre-dissertation exam and removed from the program. He denies that allegation and has sued the school — and one of his professors — for due process violations and defamation respectively.

    Starting the case.

    The coverage reports that:

    all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software. 

    Personally, when I see that four members of the faculty unanimously agreed that the work was not his, I am out. I trust teachers.

    I know what a serious thing it is to accuse someone of cheating; I know teachers do not take such things lightly. When four go on the record to say so, I’m convinced. Barring some personal grievance or prejudice, which could happen, it’s hard for me to believe that all four subject-matter experts were just wrong here. Also, if there was bias or petty politics at play, it probably would have shown up before the student’s third year, not just before starting his dissertation.

    Moreover, at least as far as the coverage is concerned, the student does not allege bias or program politics. His complaint is based on due process and inaccuracy of the underlying accusation.

    Let me also say quickly that asking ChatGPT for answers you plan to compare to suspicious work may be interesting, but it’s far from convincing — in my opinion. ChatGPT makes stuff up. I’m not saying that answer comparison is a waste, I just would not build a case on it. Here, the university didn’t. It may have added to the case, but it was not the case. Adding also that the similarities between the faculty-created answers and the student’s — both are included in the article — are more compelling than I expected.

    Then you add detection software, which the article later shares showed high likelihood of AI text, and the case is pretty tight. Four professors, similar answers, AI detection flags — feels like a heavy case.

    Denied it.

    The article continues that Yang, the student:

    denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect. 

    Although it’s not specified, it is likely that Yang is referring to the research from Stanford that has been — or at least ought to be — entirely discredited (see Issue 216 and Issue 251). For the love of research integrity, the paper has invented citations — sources that go to papers or news coverage that are not at all related to what the paper says they are.

    Does anyone actually read those things?

    Back to Minnesota, Yang says that as a result of the findings against him and being removed from the program, he lost his American study visa. Yang called it “a death penalty.”

    With friends like these.

    Also interesting is that, according to the coverage:

    His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.” 

    That would be a fair point except that the next paragraph is:

    Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable. 

    That’s ridiculous. I’m sorry, it is. The dude who lets students use AI because he thinks AI is “impossible to prevent or detect,” the guy who has never used ChatGPT himself and thinks that Google Scholar and auto-correction are “comparable” to AI — that’s the person speaking up for the guy who says he did not use AI. Wow.

    That guy says:

    “I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”

    Time out. Is it not at least possible that professor Dowd thinks student Yang is an excellent student because Yang was using AI all along, and our professor doesn’t care to ascertain the difference? Also, mind you, as far as we can learn from this news story, Dowd does not even say Yang is innocent. He says the punishment is “odd,” that the case is hard to establish, and that Yang was a good student who did not need to use AI. Although, again, I’m not sure how well professor Dowd would know.

    As further evidence of Yang’s scholastic ability, Dowd also points out that Yang has a paper under consideration at a top academic journal.

    You know what I am going to say.

    To me, that entire Dowd diversion is mostly funny.

    More evidence.

    Back on track, we get even more detail, including that the exam in question was:

    an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. 

    Exam graders argued the AI use was obvious enough. Yang disagrees. 

    Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples. 

    So, the exam expressly banned AI. And we learn that, as part of the professors’ determination, they compared his exam answers with his past writing.

    I say all the time, there is no substitute for knowing your students. If the initial four faculty who flagged Yang’s work had him in classes and compared suspicious work to past work, what more can we want? It does not get much better than that.

    Then there’s even more evidence:

    Yang also objects to professors using AI detection software to make their case at the November hearing.  

    He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. 

    “Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.  

    First, GPTZero is junk. It’s reliably among the worst available detection systems. Even so, 89% is a high number. And most importantly, the case against Yang is not built on AI detection software alone, as no case should ever be. It’s confirmation, not conviction. Also, Yang, who the article says already has one PhD, knows exactly what an accuracy rate of 99% means. Be serious.
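
    For anyone who wants the arithmetic spelled out, here is a minimal back-of-the-envelope sketch of what a “99% accurate” detector implies at scale. Every number in it is an assumption for illustration; none comes from the Yang case, GPTZero, or the MPR story.

        # Hypothetical arithmetic only; all numbers are illustrative assumptions,
        # not figures from the Yang case, GPTZero, or the MPR story.
        honest = 9_500              # assumed exams written without AI
        cheaters = 500              # assumed exams written with AI
        false_positive_rate = 0.01  # the flip side of a claimed "99% accuracy"
        sensitivity = 0.95          # assumed share of real AI use the tool catches

        false_flags = honest * false_positive_rate  # innocent students flagged
        true_flags = cheaters * sensitivity         # actual AI use flagged

        # Probability that a flagged student actually used AI:
        precision = true_flags / (true_flags + false_flags)
        print(round(false_flags))   # 95 innocent students flagged
        print(f"{precision:.0%}")   # 83% -- a flag is evidence, not proof

    In other words, even a very good detector flags some honest writers, which is exactly why a flag should be confirmation, not conviction, and why it matters that this case never rested on a detector alone.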

    A pattern.

    Then we get this, buried in the news coverage:

    Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored in his expulsion.  

    He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. 

    In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.”  She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.

    She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote. 

    I’m sorry — what now?

    Yang says he was accused of using AI in academic work in “at least three other instances.” For which he was, of course, not disciplined. In one of those cases, Yang literally turned in a paper with this:

    “re write it, make it more casual, like a foreign student write but no ai.” 

    He said he used ChatGPT to check his English and asked ChatGPT to shorten his writing. But he did not use AI. How does that work?

    As for the one where he left in his prompts to ChatGPT:

    the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations. 

    Yang was warned, in writing.

    If you’re still here, we have four professors who agree that Yang’s exam likely used AI, in violation of exam rules. All four had Yang in classes previously and compared his exam work to his past writing. His exam answers had similarities with ChatGPT output. An AI detector said, in at least one place, his exam was 89% likely to be generated with AI. And Yang was accused of using AI in academic work at least three other times, by a fifth professor, including one case in which it appears he may have left in his instructions to the AI bot.

    On the other hand, he did say he did not do it.

    Findings, review.

    Further:

    But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.

    Several professors and students agreed with the accusations. Yang appealed and the school upheld the decision. Yang was gone. The appeal officer wrote:

    “PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.” 

    Slow clap.

    And slow clap for the University of Minnesota. The process is hard. Doing the review, examining the evidence, making an accusation — they are all hard. Sticking by it is hard too.

    Seriously, integrity is not a statement. It is action. Integrity is making the hard choice.

    MPR, spare me.

    Minnesota Public Radio is a credible news organization, which makes it difficult to understand why they chose — as so many news outlets do — not to interview a single expert on academic integrity for a story about academic integrity. It’s downright baffling.

    Worse, MPR, for no specific reason whatsoever, decides to take prolonged shots at AI detection systems such as:

    Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. 

    “As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades. 

    We covered the OpenAI deception — and it was deception — in Issue 241, and in other issues. We covered the non-native English thing. And the neurodivergent thing. And the Grammarly thing. All of which MPR wraps up in the passive and deflecting “reports suggest.” No analysis. No skepticism.

    That’s just bad journalism.

    And, of course — anxiety. Rege, who, please note, has studied machine learning and not academic integrity, is predictable but not credible here. He says, for example:

    it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions. 

    Absolute joke.

    I am not sorry — if you use the word “balance” in conjunction with the word “integrity,” you should not be teaching. Especially if what you’re weighing against lying and fraud is the value of embracing innovation. And if you needed further evidence for his absurdity, we get the “personal reflections and project-based learnings” buffoonery (see Issue 323). But, again, the error here is MPR quoting a professor of machine learning about course design and integrity.

    MPR also quotes a student who says:

    she and many other students live in fear of AI detection software.  

    “AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said. 

    Nope. Please, please tell me I don’t need to go through all the reasons that’s absurd. Find me one single case in which an AI detector alone sent a student home. One.

    Two final bits.

    The MPR story shares:

    In the 2023-24 school year, the University of Minnesota found 188 students responsible for scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus. 

    Just noteworthy. Also, it is interesting that 188 were “responsible.” Considering how rare it is to be caught, and for formal processes to be initiated and upheld, 188 feels like a real number. Again, good for U of M.

    The MPR article wraps up by reporting that Yang:

    found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career

    Stating the obvious: like the University of Minnesota, I could not bring myself to trust Yang’s data. And I do actually hope that being kicked out of a university for cheating would impact his academic career.

    And finally:

    “Probably I should think to do something, selling potatoes on the streets or something else,” he said. 

    Dude has a PhD in economics from Utah State University. Selling potatoes on the streets. Come on.

    Source link

  • Student-Athlete Unionization Efforts Withdrawn Prior to Second Trump Administration

    Student-Athlete Unionization Efforts Withdrawn Prior to Second Trump Administration

    by CUPA-HR | January 21, 2025

    Two efforts to extend collective bargaining rights to college athletes have been withdrawn in recent weeks in anticipation of the Trump administration taking control of the National Labor Relations Board (NLRB).

    On December 31, 2024, the Dartmouth men’s basketball team withdrew their petition to unionize. Members of the team overwhelmingly voted in March 2024 to join the Service Employees International Union (SEIU). The vote came one month after an NLRB regional director ruled that the players were employees of the college and were thus eligible to unionize.

    Additionally, on January 10, 2025, the National College Players Association (NCPA) withdrew its case against the University of Southern California, the Pac-12 Conference and the NCAA. In the original complaint, the NCPA claimed the three respondents violated the National Labor Relations Act (NLRA) by misclassifying the student-athletes as non-employees. It also argued all three respondents were joint employers of the student-athletes.

    Both of these efforts were pursued after NLRB General Counsel Jennifer Abruzzo issued a memorandum arguing that student-athletes are employees under the NLRA and are therefore afforded all statutory protections as prescribed under the law. The incoming administration will likely rescind the memorandum, halting or at least hindering unionization efforts among student-athletes.

    The decision to withdraw both petitions is likely meant to avoid an unfavorable outcome and precedent from a soon-to-be Republican-controlled NLRB. The SEIU explained in a statement following their withdrawal request that they sought “to preserve the precedent set by this exceptional group of young people on the men’s varsity basketball team.”

    CUPA-HR will keep members apprised of any updates related to student-athlete employment classification and unionization.



    Source link