
  • Anticipating Impact of Educational Governance – Sijen

    It was my pleasure last week to deliver a mini-workshop at the Independent Schools of New Zealand Annual Conference in Auckland. I intended it to be more dialogue than monologue, and I’m not sure it landed quite where I had hoped. It is an exciting time to be thinking about educational governance, and my key message was ‘don’t get caught up in the hype’.

    Understanding media representations of “Artificial Intelligence”.

    Mapping types of AI in 2023

    We need to be wary of the hype around the term AI, Artificial Intelligence. I do not believe there is such a thing. Certainly not in the sense the popular press purports it to exist, or deems to have sprouted into existence with the advent of ChatGPT. What there is, is a clear exponential increase in the capabilities demonstrated by computational algorithms. These computational capabilities do not represent intelligence in the sense of sapience or sentience. They are not informed by senses derived from an organic nervous system. However much we perceive these systems to mimic human behaviour, it is important to remember that they are machines.

    This does not negate the criticisms of those researchers who argue that there is an existential risk to humanity if A.I. is allowed to continue to grow unchecked in its capabilities. The language in this debate presents a challenge too. We need to acknowledge that intelligence means something different to the neuroscientist and the philosopher, and something different again to the psychologist and the social anthropologist. These semantic discrepancies become unbridgeable when we start to talk about consciousness.

    In my view, there are no current Theory of Mind applications… yet. Sophia (Hanson Robotics) is designed to emulate human responses, but it does not display either sapience or sentience.

    What we are seeing, in 2023, is the extension of both the ‘memory’ and the scope of data inputs of larger and larger multi-modal language models, which are programmed to see everything as language. The emergence of these polyglot super-savants is remarkable, and we are witnessing the unplanned and (in my view) cavalier mass deployment of these tools.

    Three ethical spheres for Governing Boards to reflect on in 2023

    Ethical and Moral Implications

    Educational governing bodies need to stay abreast of the societal impacts of Artificial Intelligence systems as they become more pervasive. This is more important than having a detailed understanding of the underlying technologies or the way each school’s management decides to establish policies. Boards are required to ensure such policies are in place, are realistic, can be monitored, and are reported on.

    Policies should already exist around the use of technology in supporting learning and teaching, and these can, and should, be reviewed to ensure they stay current. There are also policy implications for admissions and recruitment, and for selection processes (of both staff and students); where A.I. is being used, Boards need to ensure that, wherever possible, no systemic bias is evident. I believe Boards would benefit from devising their own scenarios and discussing them periodically.


  • honest authors, being human – Sijen

    I briefly had a form on my website for people to contact me if they wanted to use any of my visualizations, my visuals of theory in practice. I had to take it down because ‘people’ proved incapable of reading the text above it, which clearly stated its purpose. They insisted on trying to persuade me they had something to flog. Often these individuals, generalists, were most likely using AI to generate blog posts on some vaguely related theme.

    I have rejected hundreds of approaches in recent years from individuals (I assume they were humans) who suggested they could write blogs for me. My site has always been a platform for me to disseminate my academic outputs, reflections, and insights. It has never been about monetizing my outputs or building a huge audience. I recognize that I could do a better job of networking; I consistently attract a couple of hundred different individuals visiting the site each week, but I am something of a misanthrope, so craving attention goes against the grain.

    We should differentiate between the spelling and grammar assistance built into many desktop writing applications and the large language models (LLMs) that generate original text from an initial prompt. I have not been able to adjust to the nascent AI applications (Jasper, ChatGPT) in supporting my own authorship. I have used some of these applications as a kind of long-form search engine, but stylistically it just doesn’t work for me. I use the spelling and grammar checking functionality of writing tools but don’t allow them to complete my sentences for me. I regularly use generative AI applications to create illustrative artwork (Midjourney) and always attribute those outputs, just as I would if I were to download someone’s work from Unsplash.com or other similar platforms.

    For me, in 2023, the key argument is surely about the human-authenticity equation. To post blogs using more than a spelling and grammar checker, without declaring that authorship assistance, strikes me as dishonest. It’s simply not your work or your thoughts; you haven’t constructed an argument. I want to know what you, based on your professional experience, have to say about a specific issue. I would like it to be written in flowing prose, but I can forgive the clumsy language used by others and myself. If it’s yours.

    It makes a difference to me knowing that a poem has been born out of 40 years of human experience rather than being the product of the undoubtedly clever linguistic manipulation of large language models devoid of human experience. That is not to say that these digital artefacts are not fascinating or have no value. They are truly remarkable; a song generated by AI can be a pleasure to listen to, but not being able to trace the experiences conveyed in the song back to an individual simply makes it different. The same is true of artworks and all writing. We need to learn to differentiate between computer intelligence and human intelligence. Where the aim is ‘augmentation’, such enhancements should be identifiable.

    I want to know that if I am listening, looking, or reading any artefact, it is either generated by, or with assistance from, large generative AI models, or whether it is essentially the output of a human. This blog was created without LLM assistance. I wonder why other authors don’t declare the opposite when it’s true.

    Image credit: Midjourney 14/06/23


  • desperately in need of redefinition in the age of generative AI. – Sijen

    The vernacular definition of plagiarism is often “passing off someone else’s work as your own” or, more fully, in the University of Oxford’s guidance, “Presenting work or ideas from another source as your own, with or without consent of the original author, by incorporating it into your work without full acknowledgement.” This latter definition works better in the current climate, in which generative AI assistants are being rolled out across many word-processing tools. When a student can start a prompt and have the system, rather than another individual, write paragraphs, there is an urgent need to redefine academic integrity.

    If they are not your own thoughts committed to text, where did they come from? Any thoughts that are not your own need to be attributed. Generative AI applications are already being used in the way that previous generations made use of Wikipedia: as a source of initial ‘research’, clarification, and definitions, and, for the more diligent, perhaps for sources. In the early days of Wikipedia I saw digitally illiterate students copy and paste wholesale blocks of text from the website straight into their submissions, often without even removing the hyperlinks! The character of Wikipedia as a source has evolved. We need to engage in an open conversation with students, and between ourselves, about the purpose of any writing task assigned to a student. We need to quickly move students beyond the unreferenced chatbots into structured and referenced generative AI tools, and deploy what we have learnt about Wikipedia. Students need to differentiate between their own thoughts and triangulate everything else before citing and referencing it.

    Image: Midjourney 12/06/23

