  • Combining AI and Human Expertise to Better Protect K-12 Students Online


    Content warning – this article discusses suicidal ideation. If you or someone you know is in crisis, call, text or chat 988 to reach the 988 Suicide and Crisis Lifeline, or visit 988lifeline.org for more resources.


    AI was one of the major themes of 2024. The discussion frequently revolved around its impact on work, but there are innovative ways AI can complement human insight to address significant societal challenges.

    For example, according to the Centers for Disease Control and Prevention, suicide was the second leading cause of death among people ages 10-14 in 2022. This impacts everyone from families to educators. In one small Missouri town, a K-12 Safety Support Specialist was alerted when a student searched, “How much Tylenol does it take to die?” and “What is the best way to kill yourself?” These online searches triggered the school’s student safety tool, which uses machine learning to identify harmful content. A specialist was immediately notified and was able to intervene quickly, providing the student with the support needed to prevent self-harm.

    There is an urgent need for effective solutions to protect students from threats like suicide, self-harm, cyberbullying, and exposure to harmful content. A combination of machine learning detection to allow for speed and scale, and human review to allow for context and nuance, is required for a comprehensive K-12 student safety tool. This allows schools to act when needed, as guided by their own Safety Plan. According to Talmage Clubbs, Director of Counseling for Neosho District in Missouri, “Our students know about it [student safety K-12 tool]. We have students purposely typing in keywords so they can be pulled in and talked to about their suicidality, their mental health issues, anything like that because they are struggling, and they just don’t know how else to reach anybody.”  

    Another example where human intervention is essential is when a machine learning-powered solution flags anatomical text as explicit content, but this might be for legitimate science coursework. Human reviewers can verify educational intent by examining context like student age and subject. 

    In the 2022-2023 school year, 94% of public schools reported providing digital devices, such as laptops or tablets, to students, according to the National Center for Education Statistics. That represents 28% growth over pre-pandemic device provision in middle schools and 52% growth in elementary schools. As students spend more time online for school, they also use these devices for extracurricular learning and making social connections. However, they also have easier access to inappropriate content online. The challenges of ensuring online safety have become increasingly complex, as more students may seek harmful information or engage in distressing or inappropriate behaviors.

    To truly support all students in the digital age, regardless of their socioeconomic background or technological literacy, solutions must be user-friendly and adaptable to the diverse needs of schools and districts. By collaborating, educators, technology providers like GoGuardian, and policymakers can create a future where AI enhances educational experiences for students, fosters healthy human connection and empathy, and ensures privacy.

    Educators in today’s digital world also need innovative safety and security solutions that enable students to thrive physically, mentally, and academically. “You can rest well at night, knowing you are changing districts and saving lives,” says Dr. Jim Cummins, Superintendent of Neosho District.


    To learn more, visit GoGuardian.com



  • honest authors, being human – Sijen


    I briefly had a form up on my website for people to be able to contact me if they wanted to use any of my visualizations, visuals of theory in practice. I had to take it down because ‘people’ proved incapable of reading the text above it which clearly stated its purpose. They insisted on trying to persuade me they had something to flog. Often these individuals, generalists, were most likely using AI to generate blog posts on some vaguely related theme.

    I have rejected hundreds of approaches in recent years from individuals (I assume they were humans) who suggested they could write blogs for me. My site has always been a platform for me to disseminate my academic outputs, reflections, and insights. It has never been about monetizing my outputs or building a huge audience. I recognize that I could be doing a better job of networking; I consistently attract a couple of hundred different visitors to the site each week, but I am something of a misanthrope, so it goes against the grain to crave attention.

    We should differentiate between the spelling and grammar assistance built into many desktop writing applications and the large language models (LLMs) that generate original text from an initial prompt. I have not been able to adjust to the nascent AI applications (Jasper, ChatGPT) in supporting my own authorship. I have used some of these applications as long-text search engines, but stylistically it just doesn’t work for me. I use the spelling and grammar checking functionality of writing tools but don’t allow it to complete my sentences for me. I regularly use generative AI applications to create illustrative artwork (Midjourney) and always attribute those outputs, just as I would if I were to download someone’s work from Unsplash.com or another similar platform.

    For me, in 2023, the key argument is surely about the human-authenticity equation. To post blogs written with more than a spelling and grammar checker, without declaring that authorship assistance, strikes me as dishonest. It’s simply not your work or your thoughts; you haven’t constructed an argument. I want to know what you, based on your professional experience, have to say about a specific issue. I would like it to be written in flowing prose, but I can forgive the clumsy language used by others and myself. If it’s yours.

    It makes a difference to me knowing that a poem has been born out of 40 years of human experience rather than being the product of the undoubtedly clever linguistic manipulation of large language models devoid of human experience. That is not to say that these digital artefacts are not fascinating or have no value. They are truly remarkable; a song generated by AI can be a pleasure to listen to, but not being able to relate the experiences conveyed through the song back to an individual simply makes it different. The same is true of artworks and all writing. We need to learn to differentiate between computer intelligence and human intelligence. Where the aim is ‘augmentation’, such enhancements should be identifiable.

    I want to know that if I am listening, looking, or reading any artefact, it is either generated by, or with assistance from, large generative AI models, or whether it is essentially the output of a human. This blog was created without LLM assistance. I wonder why other authors don’t declare the opposite when it’s true.

    Image credit: Midjourney 14/06/23
