Category: Teaching & Learning

  • HESA’s AI Observatory: What’s new in higher education (January 17, 2025)


    Transformation of education

    The McDonaldisation of higher education in the age of AI

Auh, Y. University World News. December 11th, 2024.

    Reflection on how AI’s impact on higher education aligns with the principles of McDonaldisation (efficiency, calculability, predictability and control), what opportunities and challenges it creates, and how institutions are responding

    Decolonization

    AI and digital neocolonialism: Unintended impacts on universities

Auh, Y. University World News. July 12th, 2024.

The evolution of AI risks reinforcing neocolonial patterns, underscoring the complex ethical implications associated with its deployment and broader impact

    Workforce preparation

    As workers seek guidance on AI use, employers value skilled graduates

    Ascione, L. eCampusNews. December 9th, 2024.

A new Wiley survey highlights that 40% of respondents struggle to understand how to integrate AI into their work and 75% lack confidence in AI use, while only 34% of managers feel equipped to support AI integration

    California students want careers in AI. Here’s how colleges are meeting that demand

    Brumer, D. and Garza, J. Cal Matters. October 20th, 2024. 

    California’s governor announced the first statewide partnership with a tech firm, Nvidia, to bring AI curriculum, resources and opportunities to California’s public higher education institutions. The partnership will bring AI tools to community colleges first.

    Let’s equip the next generation of business leaders with an ethical compass

    Côrte-Real, A. Times Higher Education. October 22nd, 2024. 

    In a world driven by AI, focusing on human connections and understanding is essential for achieving success. While AI can standardize many processes, it is the unique human skills – such as empathy, creativity, and critical thinking – that will continue to set individuals and organizations apart.

    How employer demand trends across two countries demonstrate need for AI skills

    Stevens, K. EAB. October 10th, 2024. 

Study reviewing employer demands in the US and in Ireland to better understand how demand for AI skills differs across countries, and to examine whether these differences are significant enough to require targeted curricular design by country

    Research

    We’re living in a world of artificial intelligence – it’s academic publishing that needs to change

    Moorhouse, B. Times Higher Education. December 13th, 2024.

    Suggestions to shift mindsets towards GenAI tools to restore trust in academic publishing

    Teaching and learning

    The AI-Generated Textbook That’s Making Academics Nervous

    Palmer, K. Inside Higher Ed. December 13th, 2024. 

A comparative literature professor at UCLA used AI to generate the textbook for her medieval literature course, notably with the aim of making course material more financially accessible to her students – but the academic community reacted strongly

    GenAI impedes student learning, hitting exam performance

    Sawahel, W. University World News. December 12th, 2024.

    A study conducted in Germany using GenAI detection systems showed that students who used GenAI scored significantly lower in essays

    The renaissance of the essay starts here

    Gordon, C. and Compton, M. Times Higher Education. December 9th, 2024. 

    A group of academics from King’s College London, the London School of Economics and Political Science, the University of Sydney and Richmond American University came together to draft a manifesto on the future of the essay in the age of AI, where they highlight problems and opportunities related to the use of essays, and propose ways to rejuvenate its use

    These AI tools can help prepare future programmers for the workplace

    Rao, R. Times Higher Education. December 9th, 2024.

    Reflection on how curricula should incorporate the use of AI tools, with a specific focus on programming courses

    The future is hybrid: Colleges begin to reimagine learning in an AI world

    McMurtrie, B. The Chronicle of Higher Education. October 3rd, 2024.

    Reflection on the state of AI integration in teaching and learning across the US

    Academic integrity

    Survey suggests students do not see use of AI as cheating

    Qiriazi, V. et al. University World News. December 11th, 2024. 

    Overview of topics discussed at the recent plenary of the Council of Europe Platform on Ethics, Transparency and Integrity in Education

    Focusing on GenAI detection is a no-win approach for instructors

Berdahl, L. University Affairs. December 11th, 2024.

    Reflection on potential equity, ethical, and workload implications of AI detection 

    The Goldilocks effect: finding ‘just right’ in the AI era

    MacCallum, K. Times Higher Education. October 28th, 2024. 

    Discussion on when AI use is ‘too much’ versus when it is ‘just right’, and how instructors can allow students to use GenAI tools while still maintaining ownership of their work


  • Alternatives to the essay can be inclusive and authentic


    I lead our largest optional final-year module – Crime, Justice and the Sex Industry – with 218 registered students for the 24–25 academic year.

    That is a lot of students to assess.

In that context, I was looking for an assessment that is inclusive, authentic, and hopefully enjoyable to write.

I also wanted to help students become confident writers who make writing a regular practice, with ongoing revisions.

    Inspired by the wonderful Katie Tonkiss at Aston University, I devised a letter assessment for our students.

    This was based on many different pedagogical considerations, and the acute need to teach students how to hold competing and conflicting harms and needs in tension. I consider the sex industry within the broader context of violence against women and girls.

The sex industry, sexual exploitation, and violence against women and girls are brutal and traumatic topics that can incite divisive responses.

    Now more than ever, we need to be able to deal well with differences, to negotiate, to encourage, to reflect, and to try and move discussions forward, as opposed to shutting them down.

    Their direction and pace

During the pandemic, I designed my module based on a non-linear pedagogy – giving students the power to navigate teaching resources in a direction and at a pace of their choosing.

This has strong EDI principles, and was strongly shaped by my own dyslexia. I recognised that students often have constraints on their time and energy levels that mean they need to engage with learning in different patterns from week to week – disclosing that they “binge watch” lecture videos or podcasts, or focus heavily on texts, during certain weeks to block out their time.

    The approach also honours the principles of trauma-informed teaching, empowering students to navigate sensitive topics of gender-based and sexual violence.

    As I argue here with Lisa Anderson, students are now learning in a post-pandemic context with differing expectations, accessibility needs, and barriers including paid work responsibilities.

The “full-time student” is now something of an anachronism, and education must meet this new reality – there are now more students in paid employment than not, according to the 2023 HEPI/Advance HE Student Academic Experience Survey.

    We have to meet students where they are, and, presently, that is in a difficult place as many students struggle with the cost of living and the battle to “have it all”.

Students may not be asked to write exam answers or essays in their post-university life, but they will certainly be expected to write persuasively and convincingly, engaging with multiple viewpoints and sitting with their own discomfort.

This may take the form of webpage outputs, summaries, policy briefings, strategy documents, emails to stakeholders, or campaign materials. As such, students are strongly encouraged to think about the letter from day one of semester, and to consider who their recipient will be.

    They are told that it is easier to write such a letter to somebody with an opposing viewpoint – laying out your case in a respectful, warm and supportive way to try and progress the discussion. Students are also encouraged to acknowledge their own positionality, and share this if desirable, including if they can identify a thinker, document or moment that changed their position.

    Working towards change

An example is a student who holds a position influenced by their faith, writing their letter to a faith leader or family member, acknowledging that they respect their beliefs, but strongly endorsing an approach that places harm-reduction and safety first. They find a place of agreement and build from there, accepting that working towards change can be a long process.

    Another example is a student who holds sex industry abolitionist views, writing to a sex worker, expressing concern and solidarity with the multiple forms of harm, stigma and violence they have experienced, including institutional violence.

    They consider how the law itself facilitates the context that makes violence more likely to occur. This is particularly pertinent at the moment as we experience a fresh wave of digital “me too” and high-profile cases of sexual violence and victim-blaming.

In this way, students are taught to examine different documents and evidence, ranging from legal and policy documents, charity briefings and statements, journal articles, books, reports, documentaries, global sex worker grassroots initiatives, news reports, and social media campaigns and footage, to art and literature.

    By engaging with different types of sources, we challenge the idea that academic material is top of the knowledge hierarchy, and platform the voices who often go unheard, including sex workers globally.

The students cross-reference resources, and identify forms of harm, violence and discrimination that may not make it into official narratives. This also encourages students to be active members of our community, contributing to each workshop either verbally or digitally, in real time or asynchronously via our class-wide Google Doc.

Students are also taught that it is OK not to have the definitive answer, and to instead ask the recipient to help them further their knowledge. They are also taught that it is OK to change our position and recommendations depending on what evidence we encounter.

    Above all, they are taught that two things can be true at the same time: something might be harmful, and the response to it awful too.

Students responded overwhelmingly in favour of this approach, and many expressed a new-found love of writing and reading. Engaging with many different mediums, including podcasts, tweets, reels, history talks, and art exhibitions, gave them confidence in their reading and study skills.

Putting choice and enjoyment in the curriculum is not about “losing academic rigour”; it is about firing students up for their topics of study, and ensuring they can communicate powerfully to different audiences using different tools.

    Dear me, I wish we had tried this assessment sooner. xoxo


  • HESA’s AI Observatory: What’s new in higher education (December 1, 2024)


    Good evening,

In my last AI blog, I wrote about the recent launch of the Canadian AI Safety Institute, and other AISIs around the world. I also mentioned that I was looking forward to learning more about what would be discussed at the International Network for AI Safety meeting taking place on November 20th-21st.

    Well, here’s the gist of it. Representatives from Australia, Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US gathered last week in San Francisco to “help drive technical alignment on AI safety research, testing and guidance”. They identified their first four areas of priority:

    • Research: We plan, together with the scientific community, to advance research on risks and capabilities of advanced AI systems as well as to share the most relevant results, as appropriate, from research that advances the science of AI safety.
    • Testing: We plan to work towards building common best practices for testing advanced AI systems. This work may include conducting joint testing exercises and sharing results from domestic evaluations, as appropriate.
    • Guidance: We plan to facilitate shared approaches such as interpreting tests of advanced systems, where appropriate.
    • Inclusion: We plan to actively engage countries, partners, and stakeholders in all regions of the world and at all levels of development by sharing information and technical tools in an accessible and collaborative manner, where appropriate. We hope, through these actions, to increase the capacity for a diverse range of actors to participate in the science and practice of AI safety. Through this Network, we are dedicated to collaborating broadly with partners to ensure that safe, secure, and trustworthy AI benefits all of humanity.

Cool. I mean, of course these priority areas are all key to the work that needs to be done… But the network does not provide concrete details on how it actually plans to fulfill these priority areas. I guess now we’ll just have to wait and see what actually comes out of it all.

    On another note – earlier in the Fall, one of our readers asked us if we had any thoughts about how a win from the Conservatives in the next federal election could impact the future of AI in the country. While I unfortunately do not own a crystal ball, let me share a few preliminary thoughts. 

    In May 2024, the House of Commons released the Report of the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities regarding the Implications of Artificial Intelligence Technologies for the Canadian Labour Force.

    TL;DR, the recommendations of the Standing Committee notably include: to review federal labour legislation to protect diverse workers’ rights and privacy; to collaborate with provinces, territories and labour representatives to develop a framework to support ethical adoption of AI in workplaces; to invest in AI skills training; to offer financial support to SMEs and non-profits for AI adoption; to investigate ways to utilize AI to increase operational efficiency and productivity; and for Statistics Canada to monitor labour market impacts of AI over time.

Honestly – these are quite respectable recommendations that could lead to significant improvements around AI implementation if they were followed through.

    Going back to the question about the Conservatives, then… The Standing Committee report includes a Dissenting Report from the Conservative Party, which states that the report “does not go sufficiently in depth in how the lack of action concerning these topics [regulations around privacy, the poor state of productivity and innovation and how AI can be used to boost efficiencies, etc.] creates challenges to our ability to manage AI’s impact on the Canadian workforce”. In short, it says do more – without giving any recommendation whatsoever about what that more should be.

On the other side, we know that one of the reasons why Bill C-27 is stagnating is opposition. The Conservatives notably accused the Liberal government of seeking to “censor the Internet” – the Conservatives are opposed to governmental influence (i.e., regulation) on what can or can’t be posted online. But we also know that one significant risk of the rise of AI is the growth of disinformation, deepfakes, and more. So… maybe a certain level of “quality control” or fact-checking would be a good thing?

All in all, it seems like the Conservatives would in theory support a growing use of AI to fight Canada’s productivity crisis and reduce red tape. In another post earlier this year, Alex discussed what a Poilievre government science policy could look like, and we both agree that the Conservatives at least appear to be committed to investing in technology. However, how they would plan to regulate the tech to ensure ethical use remains to be seen. If you have any more thoughts on that, though, I’d love to hear them. Leave a comment or send me a quick email!

And if you want to continue discussing Canada’s role in the future of AI, make sure to register for HESA’s AI-CADEMY so you do not miss our panel “Canada’s Policy Response to AI”, where we’ll have the pleasure of welcoming Rajan Sawhney, Minister of Advanced Education (Government of Alberta), Mark Schaan, Deputy Secretary to the Cabinet on AI (Government of Canada), and Elissa Strome, Executive Director of the Pan-Canadian AI Strategy (CIFAR), and where we’ll discuss what governments’ role should be in shaping the development of AI.

Enjoy the rest of your weekend, all!

    – Sandrine Desforges, Research Associate

    [email protected] 
