Category: HESA’s AI Observatory

  • HESA’s AI Observatory: What’s new in higher education (January 31, 2025)

    Transformation of education

    Leading Through Disruption: Higher Education Leaders Assess AI’s Impacts on Teaching and Learning

    Rainie, L. and Watson, E. AAC&U and Elon University.

    Report from a survey of 337 college and university leaders that provides a status report on the fast-moving changes taking place on US campuses. Key takeaways: faculty use of AI tools trails significantly behind student use; more than a third of leaders surveyed perceive their institution to be below average or behind others in using GenAI tools; 59% say that cheating has increased on their campus since GenAI tools became widely available; and 45% think the impact of GenAI on their institutions over the next five years will be more positive than negative.

    Four objectives to guide artificial intelligence’s impact on higher education

    Aldridge, S. Times Higher Education. January 27th, 2025

    The four objectives are: 1) ensure that curricula prepare students to use AI in their careers and to add the human-skills value that will help them succeed alongside the expanded use of AI; 2) employ AI-based capacities to enhance the effectiveness and value of the education delivered; 3) leverage AI to address specific pedagogical and administrative challenges; and 4) address the pitfalls and shortcomings of using AI in higher ed, and develop mechanisms to anticipate and respond to emerging challenges.

    Global perspectives

    DeepSeek harnesses links with Chinese universities in talent war

    Packer, H. Times Higher Education. January 31st, 2025

    The success of artificial intelligence platform DeepSeek, which was developed by a relatively young team including graduates and current students from leading Chinese universities, could encourage more students to pursue opportunities at home amid a global race for talent, experts have predicted.

    Teaching and learning

    Trends in AI for student assessment – A roller coaster ride

    MacGregor, K. University World News. January 25th, 2025

    Insights from (and a recording of) the University World News webinar “Trends in AI for student assessment”, held on January 21st. When audience members were asked about the challenges they face in using GenAI for assessment:

    • 53% identified “verifying the accuracy and validity of AI-generated results”;
    • 49% said they lacked training or expertise in using GenAI tools;
    • 45% identified “difficulty integrating AI tools within current assessment systems”;
    • 41% cited challenges in addressing ethical concerns;
    • 30% found “ensuring fairness and reducing bias in AI-based assessments” challenging;
    • 25% identified “protecting student data privacy and security”;
    • 19% cited “resistance to adopting AI-driven assessment”; and
    • 6% said they did not face significant challenges.

    Open access

    Charting a course for open education resources in an AI era

    Wang, T. and Mishra, S. University World News. January 24th, 2025

    The digital transformation of higher education has positioned open educational resources (OER) as essential digital public goods for the global knowledge commons. As emerging technologies, particularly artificial intelligence (AI), reshape how educational content is created, adapted and distributed, the OER movement faces both unprecedented opportunities and significant challenges in fulfilling its mission of democratising knowledge access.

    The Dubai Declaration on OER, released after the 3rd UNESCO World OER Congress held in November 2024, addresses pressing questions about AI’s role in open education.

  • HESA’s AI Observatory: What’s new in higher education (January 17, 2025)

    Transformation of education

    The McDonaldisation of higher education in the age of AI

    Yoonil Auh, J. University World News. December 11th, 2024.

    Reflection on how AI’s impact on higher education aligns with the principles of McDonaldisation (efficiency, calculability, predictability and control), what opportunities and challenges it creates, and how institutions are responding.

    Decolonization

    AI and digital neocolonialism: Unintended impacts on universities

    Yoonil Auh, J. University World News. July 12th, 2024. 

    The evolution of AI risks reinforcing neocolonial patterns, underscoring the complex ethical implications associated with its deployment and broader impact.

    Workforce preparation

    As workers seek guidance on AI use, employers value skilled graduates

    Ascione, L. eCampusNews. December 9th, 2024.

    A new Wiley survey highlights that 40% of respondents struggle to understand how to integrate AI into their work and 75% lack confidence in AI use, while 34% of managers feel equipped to support AI integration.

    California students want careers in AI. Here’s how colleges are meeting that demand

    Brumer, D. and Garza, J. Cal Matters. October 20th, 2024. 

    California’s governor announced the first statewide partnership with a tech firm, Nvidia, to bring AI curriculum, resources and opportunities to California’s public higher education institutions. The partnership will bring AI tools to community colleges first.

    Let’s equip the next generation of business leaders with an ethical compass

    Côrte-Real, A. Times Higher Education. October 22nd, 2024. 

    In a world driven by AI, focusing on human connections and understanding is essential for achieving success. While AI can standardize many processes, it is the unique human skills – such as empathy, creativity, and critical thinking – that will continue to set individuals and organizations apart.

    How employer demand trends across two countries demonstrate need for AI skills

    Stevens, K. EAB. October 10th, 2024. 

    Study reviewing employer demand in the US and in Ireland to better understand how demand for AI skills differs across countries, and to examine whether these differences are significant enough to require targeted curricular design by country.

    Research

    We’re living in a world of artificial intelligence – it’s academic publishing that needs to change

    Moorhouse, B. Times Higher Education. December 13th, 2024.

    Suggestions for shifting mindsets towards GenAI tools in order to restore trust in academic publishing.

    Teaching and learning

    The AI-Generated Textbook That’s Making Academics Nervous

    Palmer, K. Inside Higher Ed. December 13th, 2024. 

    A comparative literature professor at UCLA used AI to generate the textbook for her medieval literature course, notably with the aim of making course material more financially accessible to her students – but the academic community reacted strongly.

    GenAI impedes student learning, hitting exam performance

    Sawahel, W. University World News. December 12th, 2024.

    A study conducted in Germany using GenAI detection systems showed that students who used GenAI scored significantly lower on essays.

    The renaissance of the essay starts here

    Gordon, C. and Compton, M. Times Higher Education. December 9th, 2024. 

    A group of academics from King’s College London, the London School of Economics and Political Science, the University of Sydney and Richmond American University came together to draft a manifesto on the future of the essay in the age of AI, in which they highlight problems and opportunities related to the use of essays and propose ways to rejuvenate the form.

    These AI tools can help prepare future programmers for the workplace

    Rao, R. Times Higher Education. December 9th, 2024.

    Reflection on how curricula should incorporate the use of AI tools, with a specific focus on programming courses.

    The future is hybrid: Colleges begin to reimagine learning in an AI world

    McMurtrie, B. The Chronicle of Higher Education. October 3rd, 2024.

    Reflection on the state of AI integration in teaching and learning across the US.

    Academic integrity

    Survey suggests students do not see use of AI as cheating

    Qiriazi, V. et al. University World News. December 11th, 2024. 

    Overview of topics discussed at the recent plenary of the Council of Europe Platform on Ethics, Transparency and Integrity in Education.

    Focusing on GenAI detection is a no-win approach for instructors

    Berdahl, L. University Affairs. December 11th, 2024

    Reflection on the potential equity, ethical, and workload implications of AI detection.

    The Goldilocks effect: finding ‘just right’ in the AI era

    MacCallum, K. Times Higher Education. October 28th, 2024. 

    Discussion of when AI use is ‘too much’ versus when it is ‘just right’, and how instructors can allow students to use GenAI tools while still maintaining ownership of their work.

  • HESA’s AI Observatory: What’s new in higher education (December 1, 2024)

    Good evening,

    In my last AI blog, I wrote about the recent launch of the Canadian AI Safety Institute and other AISIs around the world. I also mentioned that I was looking forward to learning more about what would be discussed during the International Network for AI Safety meeting that would take place on November 20th-21st.

    Well, here’s the gist of it. Representatives from Australia, Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US gathered last week in San Francisco to “help drive technical alignment on AI safety research, testing and guidance”. They identified their first four areas of priority:

    • Research: We plan, together with the scientific community, to advance research on risks and capabilities of advanced AI systems as well as to share the most relevant results, as appropriate, from research that advances the science of AI safety.
    • Testing: We plan to work towards building common best practices for testing advanced AI systems. This work may include conducting joint testing exercises and sharing results from domestic evaluations, as appropriate.
    • Guidance: We plan to facilitate shared approaches such as interpreting tests of advanced systems, where appropriate.
    • Inclusion: We plan to actively engage countries, partners, and stakeholders in all regions of the world and at all levels of development by sharing information and technical tools in an accessible and collaborative manner, where appropriate. We hope, through these actions, to increase the capacity for a diverse range of actors to participate in the science and practice of AI safety. Through this Network, we are dedicated to collaborating broadly with partners to ensure that safe, secure, and trustworthy AI benefits all of humanity.

    Cool. I mean, of course these priority areas are all key to the work that needs to be done… But the network does not provide concrete details on how it actually plans to fulfill these priority areas. I guess now we’ll just have to wait and see what actually comes out of it all.

    On another note – earlier in the fall, one of our readers asked us if we had any thoughts about how a win by the Conservatives in the next federal election could impact the future of AI in the country. While I unfortunately do not own a crystal ball, let me share a few preliminary thoughts.

    In May 2024, the House of Commons released the Report of the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities regarding the Implications of Artificial Intelligence Technologies for the Canadian Labour Force.

    TL;DR, the recommendations of the Standing Committee notably include: to review federal labour legislation to protect diverse workers’ rights and privacy; to collaborate with provinces, territories and labour representatives to develop a framework to support ethical adoption of AI in workplaces; to invest in AI skills training; to offer financial support to SMEs and non-profits for AI adoption; to investigate ways to utilize AI to increase operational efficiency and productivity; and for Statistics Canada to monitor labour market impacts of AI over time.

    Honestly – these are quite respectable recommendations that could lead to significant improvements around AI implementation if they were followed through.

    Going back to the question about the Conservatives, then… The Standing Committee report includes a Dissenting Report from the Conservative Party, which states that the report “does not go sufficiently in depth in how the lack of action concerning these topics [regulations around privacy, the poor state of productivity and innovation and how AI can be used to boost efficiencies, etc.] creates challenges to our ability to manage AI’s impact on the Canadian workforce”. In short, it says do more – without giving any recommendation whatsoever about what that more should be.

    On the other hand, we know that one of the reasons Bill C-27 is stagnating is opposition. The Conservatives notably accused the Liberal government of seeking to “censor the Internet” – they are opposed to governmental influence (i.e., regulation) on what can or can’t be posted online. But we also know that one significant risk of the rise of AI is the growth of disinformation, deepfakes, and more. So… maybe a certain level of “quality control” or fact-checking would be a good thing?

    All in all, it seems like the Conservatives would, in theory, support a growing use of AI to fight Canada’s productivity crisis and reduce red tape. In another post earlier this year, Alex discussed what a Poilievre Government science policy could look like, and we both agree that the Conservatives at least appear to be committed to investing in technology. However, how they would regulate the tech to ensure ethical use remains to be seen. If you have any more thoughts on that, though, I’d love to hear them. Leave a comment or send me a quick email!

    And if you want to continue discussing Canada’s role in the future of AI, make sure to register for HESA’s AI-CADEMY so you do not miss our panel “Canada’s Policy Response to AI”, where we’ll have the pleasure of welcoming Rajan Sawhney, Minister of Advanced Education (Government of Alberta), Mark Schaan, Deputy Secretary to the Cabinet on AI (Government of Canada), and Elissa Strome, Executive Director of the Pan-Canadian AI Strategy (CIFAR), and where we’ll discuss what governments’ role should be in shaping the development of AI.

    Enjoy the rest of your weekend, all!

    – Sandrine Desforges, Research Associate

    [email protected] 
