Tag: Sijen

  • the future of learning design. – Sijen

    There is a looming skills deficit across all disciplines currently taught in universities. The vast majority of degree programmes are, at best, gradual evolutions of what has gone before. At their worst they are static bodies of knowledge transmission awaiting a young, vibrant new member of faculty to reignite them. Internal reviews are too often perfunctory exercises, seldom challenging the future direction of graduates as long as pass rates are sustained. That is, until it is too late and failure rates point to a ‘problem’ at a fundamental level in a degree’s design.

    We, collectively, are at the dawn of a new knowledge-skills-cognition revolution. The future of the professions has been discussed for some years now. It will be a creeping, quiet revolution (Susskind and Susskind, 2017). Although we occasionally hear about some fast-food business firing all of its front-of-house staff in favour of robotic manufacturing processes and A.I. ordering services, the reality is that in the majority of contexts the intelligent deployment of A.I. to enhance business operations requires humans to describe how these systems operate with other humans. This is because, at present, none of these systems scores highly on any markers of emotional intelligence (EQ).

    Image generated by Windows Copilot

    Arguably it has become increasingly important to ensure that graduates from any and all disciplines have been educated in how to describe what they do and why they do it. They need to develop a higher degree of comfort with articulating each thought process and action taken. To do this we desperately need course and programme designers to desist from describing (and therefore assessing) purely cognitive (intellectual) skills as described by Bloom et al., and to limit themselves to one or two learning outcomes using those formulations. Instead they need to elevate the psychomotor skills in particular, alongside an increasing emphasis on interpersonal ones.

    Anyone who has experimented with prompting any large language model (LLM) will tell you the language used falls squarely within the psychomotor domain. At the lowest levels one might ask a system to match, copy, or imitate; at mid-levels of skill deployment one might prompt it to organise, calibrate, compete or show; rising to the highest order of psychomotor skills, one might ask A.I. systems to define, specify, even imagine. This progression through a taxonomy allows for appropriate calibration of input and output. The ability to use language, to articulate, is an essential skill. There are some instructive (and entertaining) YouTube videos of parents supporting their children to write instructions (here’s a great example), a skill that is seldom further developed as young people progress into tertiary studies.
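    The verb ladder sketched above can be captured as a simple lookup. This is purely illustrative: the level names and the mapping are my own framing of the verbs mentioned, not a formal taxonomy.

```python
# Illustrative sketch: mapping psychomotor skill levels to example
# LLM prompt verbs. The level names and groupings are assumptions
# for illustration, not a published taxonomy.
PROMPT_LADDER = {
    "imitation":    ["match", "copy", "imitate"],        # lowest levels
    "manipulation": ["organise", "calibrate", "compete", "show"],  # mid-levels
    "origination":  ["define", "specify", "imagine"],    # highest order
}

def prompt_stems(level: str) -> list[str]:
    """Return the example prompt verbs for a given skill level."""
    return PROMPT_LADDER.get(level, [])
```

    A course designer could use a structure like this to check that prompting exercises progress deliberately up the ladder rather than staying at the imitation level.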

    Being able to assess this skill is also challenging. When one was assessing text-based comprehension, even textual analysis, one could get away with setting an essay question and having a semi-automated process for marking against a rudimentary rubric. Writing instructions, or explanations, of the task carried out is not the same as verbally describing the same task. Do we imagine that speech recognition technology won’t become an increasingly important part of many productive job roles? Not only do courses and programmes need to be designed around a broader range of outcomes, we also need to be continuously revising our assessment opportunities for those outcomes.

    References

    Susskind, R., & Susskind, D. (2017). The Future of the Professions: How Technology Will Transform the Work of Human Experts (Reprint edition). OUP Oxford.


  • Designing Effective Intended Learning Outcomes – Sijen

    I am delighted to release a version of DEILO: Designing Effective Intended Learning Outcomes on the SenseiLMS platform for individual, self-paced study at USD139.00. The course takes between 3 and 10 hours depending on the depth of engagement. You also have the opportunity, entirely optional, to engage with me virtually by submitting draft ILOs for my review and feedback. The course also allows for a certificate (again, totally optional) to be triggered on successful completion of the course and a final assessment.

    Please note that individual registration requires an individual’s email rather than a shared email. If you want to review the course with a view to programme, departmental or institutional licensing, just drop me an email at [email protected]. Course overview is available here.



  • On the right track – Sijen

    In March 2024, in response to new Government mandates that all state schools (publicly funded schools) ban all mobile phones from classrooms and playgrounds during school hours, I wrote a blog piece for the Flexible Learning Association of New Zealand. It was a balanced for-and-against piece, highlighting arguments for both perspectives.

    My actual views, my personal views, are somewhat different. I have no insight into the government policy space, but it worries me that this is only the first stage of what should be a three-stage policy implementation, and no one has got past stage one.

    The mobile phone as a means of making or receiving voice calls and phone messages, possibly even SMS text messages, is not likely to be overly intrusive. However, even this argument doesn’t survive a cursory glance at recent history. It stands up about as well as Trump’s suggestion that without total immunity all US Presidents would be continuously harangued by their successors, as though he were the first rather than the forty-fifth to hold that office. History tells us that students survived before the advent of the mobile phone, as they had indeed survived before the introduction of the ball-point pen, the ink pen, and the chalkboard.

    Stage One: removing social media

    The distinction to be made is not whether students NEED access to a mobile phone in order to learn, both knowledge acquisition and associated cognitive skills, and social, interpersonal and affective skills (spoiler: they do not); it is rather a question of WHETHER mobile phones are an appropriate means of exposing progressive generations of students to emerging technologies and the power they harness.

    Until mobile phone manufacturers take their responsibility for limiting the most egregious damage created by young people’s addiction to social media and introduce some form of ‘flight mode’ for schools, ideally accurately GPS-mapped and enforceable, the onus will be on school management, teachers and parents to enforce a ban. (Heads up to any of the major handset manufacturers: a youth-safety mode function is a market-share winner.)
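    A GPS-mapped, enforceable school zone of the kind imagined here reduces, at its simplest, to a point-in-radius test against the school’s coordinates. A minimal sketch, assuming a hypothetical 300-metre radius (all coordinates below are invented):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

def in_school_zone(lat, lon, zone_lat, zone_lon, radius_m=300):
    """True if a handset's reported position is within the geofenced zone."""
    return haversine_m(lat, lon, zone_lat, zone_lon) <= radius_m
```

    A real handset-level implementation would of course need to handle GPS drift, spoofing and time-of-day rules; the point is only that the geometry itself is trivial for manufacturers.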

    Here in New Zealand, there is strong, though largely still anecdotal, evidence that playgrounds are noisier, more energetic and happier places since the ban was introduced, and that in-class attention is more sustained and better managed. There are even suggestions of a detectable reduction in cyberbullying.

    Stage Two: infuse technology

    So, on balance, taking mobile phones, as an instrument of constant social distraction rather than as a tool for communication, out of schools makes sense. However, I would personally like to ensure that schools are supported to infuse technology throughout the curriculum. We need to consider what a technology-infused school looks like, free of social media distraction. Schools might consider providing tablets for each student to ensure digital equity. Students need to learn how to manage their digital profile, articulate what a digital-twin persona might look like, and express themselves digitally, as well as learning to be confident surfers, clickers and users of a wide variety of tools.

    In less economically prosperous areas of the world the mobile phone provides a personal gateway to resources and interactivity, and the price we pay, as a society, is the corrosive, addictive behaviour that social media creates. In wealthier areas I believe we can throw out the bathwater (social media on handheld devices) without losing the baby (digitally immersive tools).

    Stage Three: lobby handset manufacturers

    Given that there is no incentive for the social media companies to face up to their responsibilities and curtail usage of their apps, they will simply continue, as the tobacco industry did before them and the food industry does today, to deny and deny, and to obscure the worst of their excesses under the banner of ‘user choice’. We need to lobby leading device manufacturers, Apple, Samsung, Google, Sony, Motorola, Huawei, OnePlus, Nokia, Blackberry and LG, to step up and introduce serious zone-based protections. And aggressively market them!

    We don’t need social media apps in schools but we do need to enable access to technology in our classrooms.


  • Anticipating Impact of Educational Governance – Sijen

    It was my pleasure last week to deliver a mini-workshop at the Independent Schools of New Zealand Annual Conference in Auckland. Intended to be more dialogue than monologue, I’m not sure if it landed quite where I had hoped. It is an exciting time to be thinking about educational governance and my key message was ‘don’t get caught up in the hype’.

    Understanding media representations of “Artificial Intelligence”.

    Mapping types of AI in 2023

    We need to be wary of the hype around the term AI, Artificial Intelligence. I do not believe there is such a thing. Certainly not in the sense the popular press purports it to exist, or deems to have sprouted into existence with the advent of ChatGPT. What there is, is a clear exponential increase in the capabilities demonstrated by computational algorithms. These computational capabilities do not represent intelligence in the sense of sapience or sentience. They are not informed by senses derived from an organic nervous system. However much we perceive these systems to mimic human behaviour, it is important to remember that they are machines.

    This does not negate the criticisms of those researchers who argue that there is an existential risk to humanity if A.I. is allowed to continue to grow unchecked in its capabilities. The language in this debate presents a challenge too. We need to acknowledge that intelligence means something different to the neuroscientist and the philosopher, and between the psychologist and the social anthropologist. These semiotic discrepancies become unbreachable when we start to talk about consciousness.

    In my view, there are no current Theory of Mind applications… yet. Sophia (Hanson Robotics) is designed to emulate human responses, but it does not display either sapience or sentience.

    What we are seeing, in 2023, is the extension of the ‘memory’, or scope of data inputs, into larger and larger multi-modal language models, which are programmed to see everything as language. The emergence of these polyglot super-savants is remarkable, and we are witnessing the unplanned and (in my view) cavalier mass deployment of these tools.

    Three ethical spheres for Governing Boards to reflect on in 2023

    Ethical and Moral Implications

    Educational governing bodies need to stay abreast of the societal impacts of Artificial Intelligence systems as they become more pervasive. This is more important than having a detailed understanding of the underlying technologies or the way each school’s management decides to establish policies. Boards are required to ensure such policies are in place, are realistic, can be monitored, and are reported on.

    Policies should already exist around the use of technology in supporting learning and teaching, and these can, and should, be reviewed to ensure they stay current. There are also policy implications for admissions and recruitment, selection processes (both of staff and students) and where A.I. is being used, Boards need to ensure that wherever possible no systemic bias is evident. I believe Boards would benefit from devising their own scenarios and discussing them periodically.

     


  • honest authors, being human – Sijen

    I briefly had a form up on my website for people to be able to contact me if they wanted to use any of my visualizations, visuals of theory in practice. I had to take it down because ‘people’ proved incapable of reading the text above it which clearly stated its purpose. They insisted on trying to persuade me they had something to flog. Often these individuals, generalists, were most likely using AI to generate blog posts on some vaguely related theme.

    I have rejected hundreds of approaches in recent years from individuals (I assume they were humans) who suggested they could write blogs for me. My site has always been a platform for me to disseminate my academic outputs, reflections, and insights. It has never been about monetizing my outputs or building a huge audience. I recognize that I could be doing a better job of networking; I consistently attract a couple of hundred different individuals visiting the site each week, but I am something of a misanthrope, so it goes against the grain to crave attention.

    We should differentiate between the spelling and grammar assistance built into many desktop writing applications and the large language models (LLMs) that generate original text based on an initial prompt. I have not been able to adjust to the nascent AI applications (Jasper, ChatGPT) in supporting my own authorship. I have used some of these applications as long-text search engine results, but stylistically it just doesn’t work for me. I use the spelling and grammar checking functionality of writing tools but don’t allow it to complete my sentences for me. I regularly use generative AI applications to create illustrative artwork (Midjourney) and always attribute those outputs, just as I would if I were to download someone’s work from Unsplash.com or other similar platforms.

    For me, in 2023, the key argument is surely about the human-authenticity equation. To post blogs using more than a spell and grammar checker and not declaring this authorship assistance strikes me as dishonest. It’s simply not your work or your thoughts, you haven’t constructed an argument. I want to know what you, based on your professional experience, have to say about a specific issue. I would like it to be written in flowing prose, but I can forgive the clumsy language used by others and myself. If it’s yours.

    It makes a difference to me knowing that a poem has been born out of 40 years of human experience rather than being the product of the undoubtedly clever linguistic manipulation of large language models devoid of human experience. That is not to say that these digital artefacts are not fascinating or have no value. They are truly remarkable; a song generated by AI can be a pleasure to listen to, but not being able to relate the experiences related through song back to an individual simply makes it different. The same is true of artworks and all writing. We need to learn to differentiate between computer intelligence and human intelligence. Where the aim is for ‘augmentation’, such enhancements should be identifiable.

    I want to know that if I am listening, looking, or reading any artefact, it is either generated by, or with assistance from, large generative AI models, or whether it is essentially the output of a human. This blog was created without LLM assistance. I wonder why other authors don’t declare the opposite when it’s true.

    Image credit: Midjourney 14/06/23


  • desperately in need of redefinition in the age of generative AI. – Sijen

    The vernacular definition of plagiarism is often “passing off someone else’s work as your own” or, more fully, in the University of Oxford’s guidance, “Presenting work or ideas from another source as your own, with or without consent of the original author, by incorporating it into your work without full acknowledgement.” This latter definition works better in the current climate, in which generative AI assistants are being rolled out across many word-processing tools. When a student can start a prompt and have the system, rather than another individual, write paragraphs, there is an urgent need to redefine academic integrity.

    If they are not your own thoughts committed to text, where did they come from? Any thoughts that are not your own need to be attributed. Generative AI applications are already being used in the way that previous generations made use of Wikipedia: as a source of initial ‘research’, clarification, definitions, and, for the more diligent, perhaps for sources. In the early days of Wikipedia I saw digitally illiterate students copy and paste wholesale blocks of text from the website straight into their submissions, often without removing hyperlinks! The character of Wikipedia as a source has evolved. We need to engage in an open conversation with students, and between ourselves, about the purpose of any writing task assigned to a student. We need to quickly move students beyond the unreferenced chatbots into structured and referenced generative AI tools and deploy what we have learnt about Wikipedia. Students need to differentiate between their own thoughts and triangulate everything else before citing and referencing it.

    Image: Midjourney 12/06/23



  • Journal of Open, Flexible and Distance Learning (JOFDL) Vol 26(2) – Sijen

    It is my privilege to serve alongside Alison Fields as co-editor of the Journal of Open, Flexible and Distance Learning, an international high-quality peer-reviewed academic journal. I also have a piece in this issue entitled ‘Definitions of the Terms Open, Distance, and Flexible in the Context of Formal and Non-Formal Learning‘.

    Issue 26(2) of the Journal of Open, Flexible and Distance Learning (JOFDL) is now available to the world. It begins with an editorial looking at readership and research trends in the journal post-COVID, followed by a thought-provoking Invited Article about the nature of distance learning by Professor Jon Dron. The general issue continues with seven articles on different aspects of research after COVID-19.
    Alison Fields and Simon Paul Atkinson, JOFDL Joint Editors. 

    Editorial

    Post-pandemic Trends: Readership and Research After COVID-19

    Alison Fields, Simon Paul Atkinson

    1-6

    Invited Article

    Technology, Teaching, and the Many Distances of Distance Learning

    Jon Dron

    7-17

    Position Piece

    Definitions of the Terms Open, Distance, and Flexible in the Context of Formal and Non-Formal Learning

    Simon Paul Atkinson

    18-28

    Articles – Primary studies

    The Role of Non-Verbal Communication in Asynchronous Talk Channels

    Hulbert and Koh

    An Initial Assessment of Soft Skills Integration in Emergency Remote Learning During the COVID-19 Pandemic: A Learners’ Perspective

    Leomar Miano

    Supporting English Language Development of English Language Learners in Virtual Kindergarten: A Parents’ Perspective

    Parents’ Experience with Remote Learning during COVID-19 Lockdown in Zimbabwe

    Lockias Chitanana

    First-year Secondary Students’ Perceptions of the Impact of iPad Use on Their Learning in a BYOD Secondary International School

    Martin Watts and Ioannis Andreadis

    Teaching, Engaging, and Motivating Learners Online Through Weekly, Tailored, and Relevant Communication: Academic Content, Information for the Course, and Motivation (AIM)



  • Book on Writing Good Learning Outcomes – Sijen

    Introducing a short guide entitled: “Writing Good Learning Outcomes and Objectives”, aimed at enhancing the learner experience through effective course design. Available at https://amazon.com/dp/0473657929

    The book has sections on the function and purpose of intended learning outcomes as well as guidance on how to write them with validation in mind. Sections explore the use of different educational taxonomies as well as some things to avoid, and the importance of context. There is also a section on ensuring your intended learning outcomes are assessable. The final section deals with how you might go about designing an entire course structure based on well-structured outcomes, breaking these outcomes down into session-level objectives that are not going to be assessed.
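    The final section’s idea, breaking assessable course-level outcomes down into unassessed session-level objectives, can be sketched as a simple hierarchy. All outcome and objective texts below are invented placeholders, not content from the book:

```python
# Illustrative sketch of a course structure: each assessable intended
# learning outcome (ILO) maps to session-level objectives that guide
# teaching but are not themselves assessed. Placeholder content only.
course = {
    "ILO-1: Critically evaluate sources": {
        "assessed": True,
        "session_objectives": [
            "Distinguish primary from secondary sources",
            "Apply a credibility checklist to a web source",
        ],
    },
}

def session_objectives(course: dict) -> list[str]:
    """Flatten all session-level objectives across the course."""
    return [obj for ilo in course.values() for obj in ilo["session_objectives"]]
```

    Keeping the hierarchy explicit like this makes it easy to check that every assessed outcome is actually supported by teaching sessions, and that no session objective is accidentally treated as assessable.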

    #ad #education #highereducation #learningdesign #coursedesign #learningoutcomes #instructionaldesign



  • Empower Learners for the Age of AI: a reflection – Sijen

    During the Empower Learners for the Age of AI (ELAI) conference earlier in December 2022, it became apparent to me personally that not only does artificial intelligence (AI) have the potential to revolutionize the field of education, it already is doing so. But beyond the hype and enthusiasm there are enormous strategic policy decisions to be made, by governments, institutions, faculty and individual students. Some of the ‘end is nigh’ messages circulating on social media in the light of the recent release of ChatGPT are fanciful click-bait; some, however, fire a warning shot across the bow of complacent educators.

    It is certainly true to say that if your teaching approach is to deliver content knowledge and assess the retention and regurgitation of that same content knowledge then, yes, AI is another nail in that particular coffin. If you are still delivering learning experiences the same way that you did in the 1990s, despite Google Search (b.1998) and Wikipedia (b.2001), I am amazed you are still functioning. What the emerging fascination with AI is delivering is an accelerated pace to the self-reflective processes that all university leadership should be undertaking continuously.

    AI advocates argue that by leveraging the power of AI, educators can personalize learning for each student, provide real-time feedback and support, and automate administrative tasks. Critics argue that AI dehumanises the learning process, is incapable of modelling the very human behaviours we want our students to emulate, and that AI can be used to cheat. Like any technology, AI also has its disadvantages and limitations. I want to unpack these from three different perspectives, the individual student, faculty, and institutions.


    Get in touch with me if your institution is looking to develop its strategic approach to AI.


    Individual Learner

    For learners whose experience is often orientated around learning management systems, or virtual learning environments, existing learning analytics are being augmented with AI capabilities. Where in the past students might be offered branching scenarios that were preset by learning designers, the addition of AI functionality offers the prospect of algorithms that more deeply analyze a student’s performance and learning approaches, and provide customized content and feedback that is tailored to their individual needs. This is often touted as especially beneficial for students who may have learning disabilities or those who are struggling to keep up with the pace of a traditional classroom, but surely the benefit is universal when realised. We are not quite there yet. Identifying ‘actionable insights’ is possible, the recommended actions harder to define.
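    In practice, turning analytics into ‘actionable insights’ often starts with simple thresholded rules over performance data before any deeper AI is applied. A deliberately simplified sketch; the thresholds and recommendation texts are invented for illustration:

```python
def recommend(scores: list[float]) -> str:
    """Map a student's recent quiz scores (0-1) to a next-step hint.

    Thresholds and messages are illustrative assumptions; a real
    adaptive system would be trained and validated on cohort data.
    """
    if not scores:
        return "no data yet"
    avg = sum(scores) / len(scores)
    if avg < 0.5:
        return "revisit prerequisite material"
    if avg < 0.8:
        return "offer targeted practice"
    return "advance to extension tasks"
```

    The hard part, as noted above, is not computing such a recommendation but deciding what pedagogical action it should actually trigger for a given learner.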

    The downside for the individual learner will come from poorly conceived and implemented AI opportunities within institutions. Being told to complete a task by a system, rather than by a tutor, will be received very differently depending on the epistemological framework that you, as a student, operate within. There is a danger that companies presenting solutions that may work for continuing professional development will fail to recognise that a 10-year-old has a different relationship with knowledge. As an assistant to faculty, AI is potentially invaluable; as a replacement for tutor direction it will not work for the majority of younger learners within formal learning programmes.

    Digital equity becomes important too. There will undoubtedly be students today, from K-12 through to university, who will be submitting written work generated by ChatGPT. Currently free, for ‘research’ purposes (them researching us), ChatGPT is being raved about across social media platforms by anyone who needs to author content. But for every student who is digitally literate enough to have found their way to the OpenAI platform and can use the tool, there will be others who do not have access to a machine at home, or the bandwidth to make use of the internet, or even internet access at all. Merely accessing the tools can be a challenge.

    The third aspect of AI implementation for individuals is around personal digital identity. Everyone, regardless of their age or context, needs to recognise that ‘nothing in life is free’. Whenever you use a free web service you are inevitably being mined for data, which in turn allows the provider of that service to sell your presence on their platform to advertisers. Teaching young people about the two fundamental economic models that operate online, subscription services and surveillance capitalism, MUST be part of every curriculum. I would argue this needs to be introduced in primary schools and built on in secondary. We know that AI data models require huge datasets to be meaningful, so our data is what fuels these AI processes.

    Faculty

    Undoubtedly faculty will gain from AI algorithms’ ability to provide real-time feedback and support, to continuously monitor a student’s progress and provide immediate feedback and suggestions for improvement. On a cohort basis this is proving invaluable already, allowing faculty to adjust the pace or focus of content and learning approaches. A skilled faculty member can also, within the time allowed to them, differentiate their instruction, helping students to stay engaged and motivated. Monitoring students’ progress through well-structured learning analytics is already available through online platforms.

    What of the in-classroom teaching spaces? One of the sessions at ELAI showcased AI operating in a classroom, interpreting students’ body language, interactions and even eye tracking. Teachers will tell you that class sizes are a prime determinant of student success. Smaller classes mean that teachers can ‘read the room’ and adjust their approaches accordingly. AI could allow class sizes to grow beyond any claim to be manageable by individual faculty.

    One could imagine a school built with extensive surveillance capability, with every classroom with total audio and visual detection, with physical behaviour algorithms, eye tracking and audio analysis. In that future, the advocates would suggest that the role of the faculty becomes more of a stage manager rather than a subject authority. Critics would argue a classroom without a meaningful human presence is a factory.

    Institutions

    The attraction for institutions of AI is the promise to automate administrative tasks, such as grading assignments and providing progress reports, currently provided by teaching faculty. This in theory frees up those educators to focus on other important tasks, such as providing personalized instruction and support.

    However, one concern touched on at ELAI was the danger of AI reinforcing existing biases and inequalities in education. An AI algorithm is only as good as the data it has been trained on. If that data is biased, its decisions will also be biased. This could lead to unfair treatment of certain students, and could further exacerbate existing disparities in education. AI will work well with homogenous cohorts where the perpetuation of accepted knowledge and approaches is what is expected, less well with diverse cohorts in the context of challenging assumptions.
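    The point about biased training data can be made concrete with a minimal fairness check: compare an outcome rate across student subgroups. The data below is synthetic, invented purely to illustrate the disparity an audit would look for:

```python
from collections import defaultdict

def pass_rate_by_group(records):
    """records: iterable of (group, passed) pairs -> {group: pass rate}."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        passes[group] += int(passed)
    return {g: passes[g] / totals[g] for g in totals}

# Synthetic cohort: a system that looks accurate overall can still
# produce very different outcomes for different groups.
data = [("A", True)] * 9 + [("A", False)] + [("B", True)] * 5 + [("B", False)] * 5
```

    Here group A passes 90% of the time and group B only 50%; a gap like this in a deployed system is exactly the kind of signal governing bodies should be asking their institutions to monitor and explain.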

    This is a problem. In a world in which we need students to be digitally literate and AI literate, to challenge assumptions but also to recognise that some sources are verified and others are not, institutions that implement AI based on existing cohorts are likely to restrict the intellectual growth of those that follow.

    Institutions rightly express concerns about the cost both of implementing AI in education and of monitoring its use. While the initial investment in AI technologies may be significant, the long-term cost savings and potential benefits may make it worthwhile. No one can be certain how the market will unfold. It is possible that many AI applications will become incredibly cheap, even free, under some model of surveillance capitalism. However, many AI applications, such as ChatGPT, use enormous computing power, little of which is cacheable and retained for reuse, and these are likely to become costly.

    Institutions wanting to explore the use of AI are likely to find they are being presented with additional, or ‘upgraded’ modules to their existing Enterprise Management Systems or Learning Platforms.

    Conclusion

    It is true that AI has the potential to revolutionize the field of education by providing personalized instruction and support, real-time feedback, and automated administrative tasks. However, institutions need to be wary of the potential for bias, aware of privacy issues and very attentive to the nature of the learning experiences they enable.


    Get in touch with me if your institution is looking to develop its strategic approach to AI.


    Image created using DALL-E
