Category: Academic Professional Development

  • Revolutionizing University Assessments: From Essays to Portfolios


    First posted on Substack

    The AI arms race still rages. Students will find AI writing tools, educators will rearm themselves with AI-aware plagiarism-detection software, and students will then source apps that can bypass the detection software. Institutions are increasingly prioritising the ease with which mass assessments can be marked. Governments are revising legislation that banned ‘essay mills’ used for contract cheating to incorporate restrictions on Generative AI.

    Students may find themselves asked to handwrite their submissions to avoid the temptation to use fully integrated generative AI tools in their word-processing software. Some shrewd students will (re)discover software that takes your typed text, AI-generated or otherwise, and turns it into a version that mimics your handwriting (calligraphr.com).

    Many institutions have already been thinking about this for years. In 2021, UNESCO issued a useful report, ‘AI and Education’, which remains a foundational read for any institutional leader who wants to be able to go head-to-head with their head of technology services, and to be informed when members of the Senate repeat dystopian viewpoints gleaned from their social media feeds.

    What we need is a revolution in the design of university assessments. This also means some radical redesign of programmes and courses. Institutions should be redefining what assessment looks like: not just because many of the assessments currently on offer lend themselves too easily to plagiarism, contract cheating, or AI-generated responses, but also because they are bad assessments, designed loosely against very badly written learning outcomes.

    Many universities face a fundamental problem: their entire assessment philosophy (if they have one) remains rooted in a measuring psychosis, one that sees its self-justification in measuring what the learner knows now, rather than what they could do before they undertook a specific course or degree and how much they have improved. Each course is assessed against its own learning outcomes (where these exist, and assuming they are actually well formed). The odds are that these outcomes are heavily weighted towards cognitive outcomes and have not moved beyond Bloom’s standard pyramid.

    Rarely are these course-level outcomes accurately mapped and weighted against programme outcomes. A student should always be able to match the assessment they are asked to complete against a set of skills expected as the outcome for a specific course. These skills need to be clearly mapped onto programme outcomes. Each assessment task is assessed against some formulation of marking rubrics or guides, often with multiple markers making controlled, monitored judgements to attempt to ensure just (not standardised) marks.

    Unfortunately, it remains common to see all of these cohort-marked assessments plotted against a bell curve, and top marks ‘brought back into line’ where convention dictates.

    Why? Surely the purpose of undertaking a university degree is self-improvement. There is a minimum threshold that I must meet, a pass mark, that allows me to demonstrate that I am capable of certain things, certain abilities or skills. But beyond that? If I got a second-class honours degree and my friend got a first, does that mean they know more than I do? Currently, given the emphasis on cognitive skills and knowledge, one can fairly say yes. Does it mean they are necessarily more proficient out there in the big, wide world? Probably not. We are simply not assessing the skills and abilities that most graduates need.

    I advocate for Universities to abandon isolated course-specific assessments in favour of programme-wide portfolio assessments. These are necessarily ipsative, capturing students’ disparate strengths and weaknesses relative to their own performance over time. There may be pass/fail assessments as part of any portfolio, but there are also opportunities for annual or thematic synoptic assessments. Students would be encouraged to draw on their contributions to the university drama club, the volleyball team, or their part-time work outside the university.
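    The distinction between ipsative and norm-referenced marking can be shown with a minimal sketch. Everything here is hypothetical — the student names, marks, and function names are mine, purely for illustration, not a proposed institutional tool. A self-referenced (ipsative) score compares each student's latest portfolio milestone with their own baseline, whereas a norm-referenced ranking compares students against each other:

    ```python
    # Minimal sketch of ipsative (self-referenced) scoring versus
    # norm-referenced ranking. All names and marks are hypothetical.

    def ipsative_gains(scores: dict[str, list[float]]) -> dict[str, float]:
        """Improvement from each student's own first (baseline) milestone
        to their most recent one; peers' marks play no part."""
        return {student: marks[-1] - marks[0] for student, marks in scores.items()}

    def norm_referenced_rank(scores: dict[str, list[float]]) -> list[str]:
        """Rank students against each other on the final milestone alone."""
        return sorted(scores, key=lambda student: scores[student][-1], reverse=True)

    portfolio = {
        "Amira": [42.0, 55.0, 68.0],  # large improvement, modest final mark
        "Ben":   [70.0, 72.0, 74.0],  # strong marks throughout, little improvement
    }

    print(ipsative_gains(portfolio))        # Amira gains 26.0; Ben gains 4.0
    print(norm_referenced_rank(portfolio))  # yet Ben still tops the cohort ranking
    ```

    The point of the sketch is the contrast: under a norm-referenced (bell-curve) regime Ben outranks Amira every time, while the ipsative view makes Amira's trajectory, the very thing a portfolio is meant to capture, visible.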

    I undertook a short consultancy last year for a university that had been a bit freaked out by the advent of Generative AI. The head of department had a moment of realisation that the vast majority of the degree assessment was based entirely on knowledge recovery and transmission. In reality, of course, their assessment strategy had been flawed long before the advent of ChatGPT. They had struggled with plagiarism detection (itself imperfect, obviously) and with student answers that differed only at the margins.

    The existing assessment certainly made it easier for them to have external markers looking for specific words to match a pro forma answer. No educational developer worth their salt would have looked at this particular assessment strategy and thought it was in any way valid. The perceived threat to assessment integrity does, though, offer an opportunity to those still naive enough to think that essay questions demonstrate anything other than the ability to regurgitate existing knowledge and, at best, an ability to write in a compelling way. Unless such writing is a skill required of the programme of study, it is a fairly pointless exercise.

    Confidentiality means I don’t wish to identify the organisation, let alone the department, in question. What became abundantly clear is that the assessment strategy had been devised as the programme grew. As they increased the number of students, they had contracted out significant amounts of the marking. This led to a degree of removal from individual students’ actual experiences.

    Surely one can see that it will become pointless to ask students to answer knowledge-based questions beyond a diagnostic exercise early in each course or programme.

    So what’s the alternative? With very rare exceptions, tertiary students will have lived for at least 18 years. They have life experiences that make their perspectives different from those of their fellow students. If we can design our assessments around individuals’ personal epistemology, culture, and experience, we have a chance to differentiate between them. We can build assessment incrementally within specific courses and programmes, with each course in a programme building on previous courses. In the case of this particular client, I suggested that eliminating as many electives as possible and narrowing the options would not deter applicants, and would make the design of assessment strategies within the programme more coherent.

    Developing a personal portfolio of evidence throughout a programme of study gives students both a sense of ownership over their own learning and, potentially, a resource they will continue to augment once they graduate. The intention is to develop an incremental assessment approach. Students in the third year of studies would be asked to review coursework from previous years, for example, or to comment and provide feedback on the work of students in earlier years within the same programme. Blending the ipsative nature of assessments with credit-bearing assessment tasks is the crucial skill now required of learning designers.

    Maybe now is a good time for you to review your learning outcomes and ask whether you are assessing skills and attributes.


    Paid subscribers will have access to assessment design tools


  • Empowering Learning in the Age of Generative AI: A Manifesto


    First published on Substack

    In 2007, I was invited to deliver a ‘keynote’ at the opening of the National e-Learning Centre in Zagreb, Croatia. I was then the Head of e-Learning and the Head of the Centre for Learning Development at the University of Hull. I argued then that the “e” in e-learning should stand for Empowerment, not Electronic. In short, I outlined that ‘e’ stood for many facets of learning, most of which were misunderstood. These different dimensions will be explored for Paid Subscribers later this week.

    Looking back now, 19 years later, at the current landscape, I can say without hesitation that despite these recent years of “progress,” many institutions have used technology to automate compliance rather than liberate and empower learners. In reality, the ‘digital age’ has transformed the educational experience of learners and teachers alike, but in what ways has it actually enhanced it?

    The current candidate for significant catalyst of change (as Wikipedia, Virtual Learning Environments, and Zoom were before) is the advent of Generative AI and automated grading. This issue certainly presents a range of questions worth discussing in departmental meetings and academic development workshops. I would start the debate with a simple question: is the student more empowered today, or just more monitored?

    This Substack is called the ‘Educational Architects’ because it is aimed at providing the tools, blueprints and frameworks for designing and building learning resources and experiences. It is not just about tips and tricks for ‘classroom’ teaching (although there will be some of that), it is about personal practice, programme team vision, departmental management and institutional leadership.

    We need a “New Architecture”, one that isn’t about the latest Virtual Learning Environment (VLE) software or the need to scale up proctored exam invigilation, but about the cognitive and affective structures we build for our students. We need structures that empower learners to develop their psychomotor abilities, their metacognitive awareness, and their very human interpersonal skills.

    A major focus of this Substack is the need for effective, purposeful and beautifully designed programmes and courses. Course design has fallen into yet another “Box-Ticking” crisis. Regardless of whether validation services are internal to the institution or regional or national, too many courses aim for the minimum required to get approval. Why do so many course designs default to the path of least resistance?

    When we design for administrative ease, we sacrifice deep engagement. We produce graduates who are good at “doing school” but unprepared for the “messy middle” of professional practice. My personal mission has always been to broaden and heighten individual academics’ and learning designers’ appreciation of a richer educational landscape. Over the last 30 years, I have developed an 8-Stage Learning Design Framework (8-SLDF), Five Educational Taxonomies, a model for Student-Owned Learning Engagement, the Digital Artefacts for Learner Engagement framework, and tools for identifying learners’ culturally centred epistemological orientations (POISE). I will be sharing this research and scholarship with you on this Substack.

    Why This Substack?

    This isn’t intended to be just another educational blog. This is a workspace for “Educational Architects” who want to reclaim the soul of pedagogy. It is a space where practical guidance is questioned, discussed and refined. Paid subscribers will receive toolkits and workbooks on a wide range of practical learning and teaching activities, and based on their feedback, these will be refined and re-shared with them.

    Subscribers will get one practical and strategic post each week, with paid subscribers receiving tactical tools or resources to help them build their practice with the support of colleagues and me.

    Call to Action

    My question to you, one I ask myself continuously, is “If you could tear down one ‘wall’ in your current institutional design, what would it be?”

    This could be a departmental structural issue, silos between disciplines, or compliance structures that are overly restrictive or entirely absent. It could be the lack of support for innovative teaching practices or the insistence on teaching to a prescribed ‘workbook’. It might also be poor assessment design or the inability to revise and update the curriculum. Whatever your structural challenge, I want to hear from you.

    Become a paid supporter and gain access to practical resources. This week, it is an enhanced version of the original 2007 keynote, with a transcript and prompts for professional questions.


    Simon’s Educational Architects Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber


  • Anticipating Impact of Educational Governance – Sijen


    It was my pleasure last week to deliver a mini-workshop at the Independent Schools of New Zealand Annual Conference in Auckland. Intended to be more dialogue than monologue, I’m not sure it landed quite where I had hoped. It is an exciting time to be thinking about educational governance, and my key message was ‘don’t get caught up in the hype’.

    Understanding media representations of “Artificial Intelligence”.

    Mapping types of AI in 2023

    We need to be wary of the hype around the term AI, Artificial Intelligence. I do not believe there is such a thing. Certainly not in the sense the popular press purports it to exist, or deems it to have sprouted into existence with the advent of ChatGPT. What there is, is a clear exponential increase in the capabilities demonstrated by computational algorithms. These computational capabilities do not represent intelligence in the sense of sapience or sentience; they are not informed by senses derived from an organic nervous system. However much we perceive these systems to mimic human behaviour, it is important to remember that they are machines.

    This does not negate the criticisms of those researchers who argue that there is an existential risk to humanity if A.I. is allowed to continue to grow unchecked in its capabilities. The language of this debate presents a challenge too. We need to acknowledge that intelligence means something different to the neuroscientist and the philosopher, and something different again to the psychologist and the social anthropologist. These semiotic discrepancies become unbridgeable when we start to talk about consciousness.

    In my view, there are no current Theory of Mind applications… yet. Sophia (Hanson Robotics) is designed to emulate human responses, but it does not display either sapience or sentience.

    What we are seeing, in 2023, is the extension of the ‘memory’, or scope of data inputs, into larger and larger multi-modal language models, which are programmed to see everything as language. The emergence of these polyglot super-savants is remarkable, and we are witnessing the unplanned and (in my view) cavalier mass deployment of these tools.

    Three ethical spheres for Governing Boards to reflect on in 2023

    Ethical and Moral Implications

    Educational governing bodies need to stay abreast of the societal impacts of Artificial Intelligence systems as they become more pervasive. This is more important than having a detailed understanding of the underlying technologies or the way each school’s management decides to establish policies. Boards are required to ensure such policies are in place, are realistic, can be monitored, and are reported on.

    Policies should already exist around the use of technology in supporting learning and teaching, and these can, and should, be reviewed to ensure they stay current. There are also policy implications for admissions and recruitment, selection processes (both of staff and students) and where A.I. is being used, Boards need to ensure that wherever possible no systemic bias is evident. I believe Boards would benefit from devising their own scenarios and discussing them periodically.



  • Journal of Open, Flexible and Distance Learning (JOFDL) Vol 26(2) – Sijen


    It is my privilege to serve alongside Alison Fields as co-editor of the Journal of Open, Flexible and Distance Learning, an international, high-quality, peer-reviewed academic journal. I also have a piece in this issue entitled ‘Definitions of the Terms Open, Distance, and Flexible in the Context of Formal and Non-Formal Learning’.

    Issue 26(2) of the Journal of Open, Flexible and Distance Learning (JOFDL) is now available. It begins with an editorial looking at readership and research trends in the journal after COVID-19, followed by a thought-provoking Invited Article on the nature of distance learning by Professor Jon Dron. The general issue continues with seven articles on different aspects of research after COVID-19.
    Alison Fields and Simon Paul Atkinson, JOFDL Joint Editors.

    Editorial

    Post-pandemic Trends: Readership and Research After COVID-19

    Alison Fields, Simon Paul Atkinson

    1-6

    Image of Jon Dron

    Invited Article

    Technology, Teaching, and the Many Distances of Distance Learning

    Jon Dron

    7-17

    Position Piece

    Definitions of the Terms Open, Distance, and Flexible in the Context of Formal and Non-Formal Learning

    Simon Paul Atkinson

    18-28

    Articles – Primary studies

    The Role of Non-Verbal Communication in Asynchronous Talk Channels

    Hulbert and Koh

    An Initial Assessment of Soft Skills Integration in Emergency Remote Learning During the COVID-19 Pandemic: A Learners’ Perspective

    Leomar Miano


    Supporting English Language Development of English Language Learners in Virtual Kindergarten: A Parents’ Perspective

    Parents’ Experience with Remote Learning during COVID-19 Lockdown in Zimbabwe

    Lockias Chitanana

    First-year Secondary Students’ Perceptions of the Impact of iPad Use on Their Learning in a BYOD Secondary International School

    Martin Watts and Ioannis Andreadis

    Teaching, Engaging, and Motivating Learners Online Through Weekly, Tailored, and Relevant Communication: Academic Content, Information for the Course, and Motivation (AIM)



  • Book on Writing Good Learning Outcomes – Sijen


    Introducing a short guide entitled: “Writing Good Learning Outcomes and Objectives”, aimed at enhancing the learner experience through effective course design. Available at https://amazon.com/dp/0473657929

    The book has sections on the function and purpose of intended learning outcomes as well as guidance on how to write them with validation in mind. Sections explore the use of different educational taxonomies as well as some things to avoid, and the importance of context. There is also a section on ensuring your intended learning outcomes are assessable. The final section deals with how you might go about designing an entire course structure based on well-structured outcomes, breaking these outcomes down into session-level objectives that are not going to be assessed.

    #ad #education #highereducation #learningdesign #coursedesign #learningoutcomes #instructionaldesign



  • Empower Learners for the Age of AI: a reflection – Sijen


    During the Empower Learners for the Age of AI (ELAI) conference in December 2022, it became apparent to me that artificial intelligence (AI) not only has the potential to revolutionize the field of education but is already doing so. Beyond the hype and enthusiasm, however, there are enormous strategic policy decisions to be made by governments, institutions, faculty, and individual students. Some of the ‘end is nigh’ messages circulating on social media in the light of the recent release of ChatGPT are fanciful click-bait; some, however, fire a warning shot across the bow of complacent educators.

    It is certainly true to say that if your teaching approach is to deliver content knowledge and then assess the retention and regurgitation of that same content knowledge, then, yes, AI is another nail in that particular coffin. If you are still delivering learning experiences the same way you did in the 1990s, despite Google Search (b. 1998) and Wikipedia (b. 2001), I am amazed you are still functioning. What the emerging fascination with AI is delivering is an accelerated pace to the self-reflective processes that all university leadership should be undertaking continuously.

    AI advocates argue that by leveraging the power of AI, educators can personalize learning for each student, provide real-time feedback and support, and automate administrative tasks. Critics argue that AI dehumanises the learning process, is incapable of modelling the very human behaviours we want our students to emulate, and that AI can be used to cheat. Like any technology, AI also has its disadvantages and limitations. I want to unpack these from three different perspectives, the individual student, faculty, and institutions.


    Get in touch with me if your institution is looking to develop its strategic approach to AI.


    Individual Learner

    For learners whose experience is often orientated around learning management systems, or virtual learning environments, existing learning analytics are being augmented with AI capabilities. Where in the past students might be offered branching scenarios preset by learning designers, the addition of AI functionality offers the prospect of algorithms that analyse a student’s performance and learning approaches more deeply, and provide customised content and feedback tailored to their individual needs. This is often touted as especially beneficial for students who have learning disabilities or who are struggling to keep up with the pace of a traditional classroom, but surely the benefit is universal when realised. We are not quite there yet: identifying ‘actionable insights’ is possible; the recommended actions are harder to define.

    The downside for the individual learner will come from poorly conceived and implemented AI opportunities within institutions. Being told to complete a task by a system, rather than by a tutor, will be received very differently depending on the epistemological framework that you, as a student, operate within. There is a danger that companies presenting solutions that may work for continuing professional development will fail to recognise that a 10-year-old has a different relationship with knowledge. As an assistant to faculty, AI is potentially invaluable; as a replacement for tutor direction, it will not work for the majority of younger learners within formal learning programmes.

    Digital equity becomes important too. There will undoubtedly be students today, from K-12 through to university, who will be submitting written work generated by ChatGPT. Currently free for ‘research’ purposes (them researching us), ChatGPT is being raved about across social media platforms by anyone who needs to author content. But for every student digitally literate enough to have found their way to the OpenAI platform and able to use the tool, there will be others who do not have access to a machine at home, the bandwidth to make use of the internet, or any internet access at all. Merely accessing the tools can be a challenge.

    The third aspect of AI implementation for individuals is personal digital identity. Everyone, regardless of their age or context, needs to recognise that ‘nothing in life is free’. Whenever you use a free web service you are inevitably being mined for data, which in turn allows the provider of that service to sell your presence on their platform to advertisers. Teaching young people about the two fundamental economic models that operate online, subscription services and surveillance capitalism, MUST be part of every curriculum. I would argue this needs to be introduced in primary schools and built on in secondary. We know that AI data models require huge datasets to be meaningful, so our data is what fuels these AI processes.

    Faculty

    Undoubtedly faculty will gain from AI algorithms’ ability to provide real-time feedback and support: to continuously monitor a student’s progress and offer immediate feedback and suggestions for improvement. On a cohort basis this is proving invaluable already, allowing faculty to adjust the pace or focus of content and learning approaches. A skilled faculty member can also, within the time allowed to them, differentiate their instruction, helping students to stay engaged and motivated. Monitoring students’ progress through well-structured learning analytics is already available through online platforms.

    What of in-classroom teaching spaces? One of the sessions at ELAI showcased AI operating in a classroom, interpreting students’ body language, interactions, and even eye movements. Teachers will tell you that class size is a prime determinant of student success: smaller classes mean teachers can ‘read the room’ and adjust their approaches accordingly. AI could allow class sizes to grow beyond anything an individual member of faculty could credibly claim to manage.

    One could imagine a school built with extensive surveillance capability, every classroom equipped with full audio and visual capture, physical-behaviour algorithms, eye tracking, and audio analysis. In that future, advocates would suggest the role of faculty becomes more that of a stage manager than a subject authority. Critics would argue that a classroom without a meaningful human presence is a factory.

    Institutions

    The attraction for institutions of AI is the promise to automate administrative tasks, such as grading assignments and providing progress reports, currently provided by teaching faculty. This in theory frees up those educators to focus on other important tasks, such as providing personalized instruction and support.

    However, one concern touched on at ELAI was the danger of AI reinforcing existing biases and inequalities in education. An AI algorithm is only as good as the data it has been trained on; if that data is biased, its decisions will also be biased. This could lead to unfair treatment of certain students and could further exacerbate existing disparities in education. AI will work well with homogeneous cohorts where the perpetuation of accepted knowledge and approaches is what is expected, and less well with diverse cohorts in contexts where assumptions are there to be challenged.

    This is a problem. In a world in which we need students to be digitally literate and AI literate, able to challenge assumptions while recognising that some sources are verified and others are not, institutions that implement AI trained on existing cohorts are likely to restrict the intellectual growth of those who follow.

    Institutions rightly express concerns about the cost both of implementing AI in education and of monitoring its use. While the initial investment in AI technologies may be significant, the long-term cost savings and potential benefits may make it worthwhile. No one can be certain how the market will unfold. It is possible that many AI applications will become so cheap under some model of surveillance capitalism as to be negligible in cost, even free. However, many AI applications, such as ChatGPT, consume enormous computing power; little is cacheable or retained for reuse, and these are likely to remain costly.

    Institutions wanting to explore the use of AI are likely to find they are being presented with additional, or ‘upgraded’ modules to their existing Enterprise Management Systems or Learning Platforms.

    Conclusion

    It is true that AI has the potential to revolutionize the field of education by providing personalized instruction and support, real-time feedback, and automated administrative tasks. However, institutions need to be wary of the potential for bias, aware of privacy issues and very attentive to the nature of the learning experiences they enable.




