Category: Institutions

  • Legacy Admissions Hit Historic Low as More States Ban Practice at U.S. Colleges

    Legacy preferences in college admissions have plummeted to their lowest recorded level, with just 24% of four-year colleges still considering family alumni status in admissions decisions, according to a comprehensive new report from Education Reform Now. The dramatic decline signals a potential end to a controversial practice that critics have long condemned as perpetuating inequality in higher education.

    The report, authored by James Murphy, director of Career Pathways and Postsecondary Policy, found that 420 institutions continue to provide admissions advantages to children of alumni, marking a sharp decline from previous years. The practice has seen particularly steep drops since 2015, when nearly half of all four-year colleges considered legacy status. Between 2022 and 2023 alone, 92 colleges abandoned legacy preferences, representing an 18% decrease that coincided with the Supreme Court’s landmark decision to ban race-conscious admissions.

    This decline stems from both voluntary institutional decisions and new state legislation. In 2024, California, Illinois, Maryland and Virginia joined Colorado in restricting legacy admissions through state laws. The report indicates that 86% of colleges that ended legacy consideration did so voluntarily, while 14% were required by state legislation. Several more states are expected to introduce similar legislation in 2025.

    Legacy preferences remain most entrenched at selective private institutions, particularly in the Northeast. More than half of colleges that admit 25% or fewer applicants still provide advantages to alumni children. The practice is now rare at public institutions, with just 11% still considering legacy status. In 24 states, no public colleges provide legacy preferences at all. New York stands out with the highest concentration of colleges maintaining legacy admissions: one in seven of the U.S. institutions that still use the practice is located in the Empire State.

    The report challenges several common defenses of legacy admissions, including arguments that they help build campus community or are necessary for fundraising. It cites evidence that 76% of colleges successfully foster campus communities without legacy preferences, and questions whether wealthy institutions with multi-billion dollar endowments truly need to “trade admissions advantages for money.”

    The analysis also addresses claims that ending legacy admissions could hurt diversity, particularly following the Supreme Court’s affirmative action ruling. The report argues that legacy preferences disproportionately benefit white and wealthy applicants, citing research showing that Asian American applicants face significantly lower odds of admission compared to white applicants with similar qualifications at selective institutions. According to one study, Asian American applicants had 28% lower odds of attending elite schools than white applicants with similar academic and extracurricular qualifications.

    The report suggests that Congress could potentially impose additional endowment taxes on universities that maintain legacy preferences while offering reduced penalties to institutions that increase enrollment of Pell Grant recipients, community college transfers, and veterans. This approach would create financial incentives for institutions to abandon the practice.

    “The shame of belonging to this group of colleges that think children of alumni have somehow earned an extra advantage in admissions is likely to push more colleges to drop the practice,” Murphy writes. “This is not a club that most colleges belong to or will want to belong to.”

    The report also criticizes the Common Application for potentially enabling legacy admissions by requiring all applicants to identify where their parents earned bachelor’s degrees, even though this information is irrelevant for more than three-quarters of colleges. The report suggests that removing this question would be a significant step toward making college admissions more equitable.

    “Ultimately, the reason to eliminate legacy preferences is not to achieve some other goal,” the report concludes. “The reason to get rid of them is that they are profoundly unfair and make a mockery of merit. Legacy preferences award some of the most advantaged students an additional advantage in the college admissions process on the basis of ancestry alone.”

  • Franklin & Marshall College Names Dr. Andrew Rich as 17th President

    Franklin & Marshall College has appointed Dr. Andrew “Andy” Rich, current dean of the Colin Powell School for Civic and Global Leadership at City College of New York (CCNY), as its 17th president following a unanimous vote by the Board of Trustees. Rich will take office in July, succeeding outgoing president Dr. Barbara K. Altmann, who has led the institution since 2018.

    During his six-year tenure at the Colin Powell School, Rich demonstrated exceptional ability in institutional growth and fundraising, according to officials at Franklin & Marshall, the private college in Lancaster, Pennsylvania. He spearheaded a 40 percent enrollment increase, bringing the student body to 4,000, while simultaneously launching innovative student success initiatives. Under his leadership, the school established eight new fellowship programs and created an Office of Student Success offering comprehensive mentoring, professional development, and career services.

    One of Rich’s notable achievements at CCNY was the formation of a Public Service Career Hub, which more than doubled student placement in public service internships and jobs. The initiative’s success earned the 2023 Exemplary Model Award from the American Association of University Administrators. Rich also led a transformative fundraising campaign that generated over $85 million in new investments for scholarships, student services, faculty positions, and academic initiatives.

    “I am excited to become an F&M Diplomat,” said Rich. “For more than 235 years, Franklin & Marshall has been a beacon for excellence in liberal arts education. We prepare students for fulfilling lives, inspiring them to achievements that enrich every sector of society.”

    Prior to his role at CCNY, Rich served as CEO and executive secretary of the Harry S. Truman Scholarship Foundation from 2011 to 2019, where he oversaw the prestigious federal program supporting future public service leaders. His connection to F&M includes oversight of two recent Truman Scholars from the college: Makaila Ranges, a 2022 graduate, and Akbar Hossain, who graduated in 2013. Rich also served as president and CEO of the Roosevelt Institute, a national think tank and leadership development organization, from 2009 to 2011.

    Eric Noll, chair of the College’s Board of Trustees, praised Rich’s appointment:

    “He will build on Barbara Altmann’s successful presidency with his sharp strategic sensibilities and deep appreciation for our excellent liberal arts college and its importance in our society’s future,” he said.

    Rich’s academic credentials include a bachelor’s degree in political science from the University of Richmond, where he was awarded a Truman Scholarship, and a doctorate in political science from Yale University. He has taught at both CCNY and Wake Forest University and is known for his scholarship on think tanks and foundations in American politics, having authored Think Tanks, Public Policy, and the Politics of Expertise.

  • HESA’s AI Observatory: What’s new in higher education (January 31, 2025)

    Transformation of education

    Leading Through Disruption: Higher Education Leaders Assess AI’s Impacts on Teaching and Learning

    Rainie, L. and Watson, E. AAC&U and Elon University.

    Report from a survey of 337 college and university leaders that provides a status report on the fast-moving changes taking place on US campuses. Key data takeaways include that faculty use of AI tools trails significantly behind student use, that more than a third of leaders surveyed perceive their institution to be below average or behind others in using GenAI tools, that 59% say cheating has increased on their campus since GenAI tools became widely available, and that 45% think the impact of GenAI on their institutions over the next five years will be more positive than negative.

    Four objectives to guide artificial intelligence’s impact on higher education

    Aldridge, S. Times Higher Education. January 27th, 2025

    The four objectives are: 1) ensure that curricula prepare students to use AI in their careers and to add the human-skills value that will help them succeed alongside expanded use of AI; 2) employ AI-based capacities to enhance the effectiveness and value of the education delivered; 3) leverage AI to address specific pedagogical and administrative challenges; and 4) address the pitfalls and shortcomings of using AI in higher ed, and develop mechanisms to anticipate and respond to emerging challenges.

    Global perspectives

    DeepSeek harnesses links with Chinese universities in talent war

    Packer, H. Times Higher Education. January 31st, 2025

    The success of artificial intelligence platform DeepSeek, which was developed by a relatively young team including graduates and current students from leading Chinese universities, could encourage more students to pursue opportunities at home amid a global race for talent, experts have predicted.

    Teaching and learning

    Trends in AI for student assessment – A roller coaster ride

    MacGregor, K. University World News. January 25th, 2025

    Insights from (and recording of) the University World News webinar “Trends in AI for student assessment”, held on January 21st. When audience members were asked about the challenges they face in using GenAI for assessment:

    • 6% said they did not face significant challenges;
    • 53% identified “verifying the accuracy and validity of AI-generated results”;
    • 49% said they lacked training or expertise in using GenAI tools;
    • 45% identified “difficulty integrating AI tools within current assessment systems”;
    • 41% were challenged in addressing ethical concerns;
    • 30% found “ensuring fairness and reducing bias in AI-based assessments” challenging;
    • 25% identified “protecting student data privacy and security” as a challenge; and
    • 19% said “resistance to adopting AI-driven assessment” was challenging.

    Open access

    Charting a course for open education resources in an AI era

    Wang, T. and Mishra, S. University World News. January 24th, 2025

    The digital transformation of higher education has positioned open educational resources (OER) as essential digital public goods for the global knowledge commons. As emerging technologies, particularly artificial intelligence (AI), reshape how educational content is created, adapted and distributed, the OER movement faces both unprecedented opportunities and significant challenges in fulfilling its mission of democratising knowledge access.

    The Dubai Declaration on OER, released after the 3rd UNESCO World OER Congress held in November 2024, addresses pressing questions about AI’s role in open education.

  • HESA’s AI Observatory: What’s new in higher education (January 17, 2025)

    Transformation of education

    The McDonaldisation of higher education in the age of AI

    Yoonil Auh, J. University World News. December 11th, 2024.

    Reflection on how AI’s impact on higher education aligns with the principles of McDonaldisation (efficiency, calculability, predictability and control), what opportunities and challenges it creates, and how institutions are responding

    Decolonization

    AI and digital neocolonialism: Unintended impacts on universities

    Yoonil Auh, J. University World News. July 12th, 2024. 

    The evolution of AI risks reinforcing neocolonial patterns, underscoring the complex ethical implications associated with their deployment and broader impact

    Workforce preparation

    As workers seek guidance on AI use, employers value skilled graduates

    Ascione, L. eCampusNews. December 9th, 2024.

    A new Wiley survey highlights that 40% of respondents struggle to understand how to integrate AI into their work and 75% lack confidence in AI use, while 34% of managers feel equipped to support AI integration

    California students want careers in AI. Here’s how colleges are meeting that demand

    Brumer, D. and Garza, J. Cal Matters. October 20th, 2024. 

    California’s governor announced the first statewide partnership with a tech firm, Nvidia, to bring AI curriculum, resources and opportunities to California’s public higher education institutions. The partnership will bring AI tools to community colleges first.

    Let’s equip the next generation of business leaders with an ethical compass

    Côrte-Real, A. Times Higher Education. October 22nd, 2024. 

    In a world driven by AI, focusing on human connections and understanding is essential for achieving success. While AI can standardize many processes, it is the unique human skills – such as empathy, creativity, and critical thinking – that will continue to set individuals and organizations apart.

    How employer demand trends across two countries demonstrate need for AI skills

    Stevens, K. EAB. October 10th, 2024. 

    Study reviewing employer demand in the US and in Ireland to better understand how demand for AI skills differs across countries, and to examine whether these differences are significant enough to require targeted curricular design by country

    Research

    We’re living in a world of artificial intelligence – it’s academic publishing that needs to change

    Moorhouse, B. Times Higher Education. December 13th, 2024.

    Suggestions to shift mindsets towards GenAI tools to restore trust in academic publishing

    Teaching and learning

    The AI-Generated Textbook That’s Making Academics Nervous

    Palmer, K. Inside Higher Ed. December 13th, 2024. 

    A comparative literature professor at UCLA used AI to generate the textbook for her medieval literature course, notably with the aim of making course material more financially accessible to her students – but the academic community reacted strongly

    GenAI impedes student learning, hitting exam performance

    Sawahel, W. University World News. December 12th, 2024.

    A study conducted in Germany using GenAI detection systems showed that students who used GenAI scored significantly lower in essays

    The renaissance of the essay starts here

    Gordon, C. and Compton, M. Times Higher Education. December 9th, 2024. 

    A group of academics from King’s College London, the London School of Economics and Political Science, the University of Sydney and Richmond American University came together to draft a manifesto on the future of the essay in the age of AI, where they highlight problems and opportunities related to the use of essays, and propose ways to rejuvenate its use

    These AI tools can help prepare future programmers for the workplace

    Rao, R. Times Higher Education. December 9th, 2024.

    Reflection on how curricula should incorporate the use of AI tools, with a specific focus on programming courses

    The future is hybrid: Colleges begin to reimagine learning in an AI world

    McMurtrie, B. The Chronicle of Higher Education. October 3rd, 2024.

    Reflection on the state of AI integration in teaching and learning across the US

    Academic integrity

    Survey suggests students do not see use of AI as cheating

    Qiriazi, V. et al. University World News. December 11th, 2024. 

    Overview of topics discussed at the recent plenary of the Council of Europe Platform on Ethics, Transparency and Integrity in Education

    Focusing on GenAI detection is a no-win approach for instructors

    Berdahl, L. University Affairs. December 11th, 2024

    Reflection on potential equity, ethical, and workload implications of AI detection 

    The Goldilocks effect: finding ‘just right’ in the AI era

    MacCallum, K. Times Higher Education. October 28th, 2024. 

    Discussion on when AI use is ‘too much’ versus when it is ‘just right’, and how instructors can allow students to use GenAI tools while still maintaining ownership of their work

  • HESA’s AI Observatory: What’s new in higher education (December 1, 2024)

    Good evening,

    In my last AI blog, I wrote about the recent launch of the Canadian AI Safety Institute, and other AISIs around the world. I also mentioned that I was looking forward to learning more about what would be discussed during the International Network for AI Safety meeting that would take place on November 20th-21st.

    Well, here’s the gist of it. Representatives from Australia, Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US gathered last week in San Francisco to “help drive technical alignment on AI safety research, testing and guidance”. They identified their first four areas of priority:

    • Research: We plan, together with the scientific community, to advance research on risks and capabilities of advanced AI systems as well as to share the most relevant results, as appropriate, from research that advances the science of AI safety.
    • Testing: We plan to work towards building common best practices for testing advanced AI systems. This work may include conducting joint testing exercises and sharing results from domestic evaluations, as appropriate.
    • Guidance: We plan to facilitate shared approaches such as interpreting tests of advanced systems, where appropriate.
    • Inclusion: We plan to actively engage countries, partners, and stakeholders in all regions of the world and at all levels of development by sharing information and technical tools in an accessible and collaborative manner, where appropriate. We hope, through these actions, to increase the capacity for a diverse range of actors to participate in the science and practice of AI safety. Through this Network, we are dedicated to collaborating broadly with partners to ensure that safe, secure, and trustworthy AI benefits all of humanity.

    Cool. I mean, of course these priority areas are all key to the work that needs to be done… But the network does not provide concrete details on how it actually plans to fulfill these priority areas. I guess now we’ll just have to wait and see what actually comes out of it all.

    On another note – earlier in the Fall, one of our readers asked us if we had any thoughts about how a win from the Conservatives in the next federal election could impact the future of AI in the country. While I unfortunately do not own a crystal ball, let me share a few preliminary thoughts. 

    In May 2024, the House of Commons released the Report of the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities regarding the Implications of Artificial Intelligence Technologies for the Canadian Labour Force.

    TL;DR, the recommendations of the Standing Committee notably include: to review federal labour legislation to protect diverse workers’ rights and privacy; to collaborate with provinces, territories and labour representatives to develop a framework to support ethical adoption of AI in workplaces; to invest in AI skills training; to offer financial support to SMEs and non-profits for AI adoption; to investigate ways to utilize AI to increase operational efficiency and productivity; and for Statistics Canada to monitor labour market impacts of AI over time.

    Honestly – these are quite respectable recommendations that could lead to significant improvements around AI implementation if they were followed through.

    Going back to the question about the Conservatives, then… The Standing Committee report includes a Dissenting Report from the Conservative Party, which states that the report “does not go sufficiently in depth in how the lack of action concerning these topics [regulations around privacy, the poor state of productivity and innovation and how AI can be used to boost efficiencies, etc.] creates challenges to our ability to manage AI’s impact on the Canadian workforce”. In short, it says do more – without giving any recommendation whatsoever about what that more should be.

    On the other side, we know that one of the reasons Bill C-27 is stagnating is opposition. The Conservatives notably accused the Liberal government of seeking to “censor the Internet” – the Conservatives are opposed to governmental influence (i.e., regulation) on what can or can’t be posted online. But we also know that one significant risk of the rise of AI is the growth of disinformation, deepfakes, and more. So… maybe a certain level of “quality control” or fact-checking would be a good thing?

    All in all, it seems like the Conservatives would in theory support a growing use of AI to fight Canada’s productivity crisis and reduce red tape. In another post earlier this year, Alex also discussed what a Poilievre Government science policy could look like, and we both agree that the Conservatives at least appear to be committed to investing in technology. However, how they would plan to regulate the tech to ensure ethical use remains to be seen. If you have any more thoughts on that, though, I’d love to hear them. Leave a comment or send me a quick email!

    And if you want to continue discussing Canada’s role in the future of AI, make sure to register for HESA’s AI-CADEMY so you do not miss our panel “Canada’s Policy Response to AI”, where we’ll have the pleasure of welcoming Rajan Sawhney, Minister of Advanced Education (Government of Alberta), Mark Schaan, Deputy Secretary to the Cabinet on AI (Government of Canada), and Elissa Strome, Executive Director of the Pan-Canadian AI Strategy (CIFAR), and where we’ll discuss what governments’ role should be in shaping the development of AI.

    Enjoy the rest of your weekend, all!

    – Sandrine Desforges, Research Associate

    [email protected] 
