Category: Technology

  • Schools are surveilling kids to prevent gun violence or suicide. The lack of privacy comes at a cost


    The Education Reporting Collaborative, a coalition of eight newsrooms, is investigating the unintended consequences of AI-powered surveillance at schools. Members of the Collaborative are AL.com, The Associated Press, The Christian Science Monitor, The Dallas Morning News, The Hechinger Report, Idaho Education News, The Post and Courier in South Carolina, and The Seattle Times.

    One student asked a search engine, “Why does my boyfriend hit me?” Another threatened suicide in an email to an unrequited love. A gay teen opened up in an online diary about struggles with homophobic parents, writing they just wanted to be themselves.

    In each case and thousands of others, surveillance software powered by artificial intelligence immediately alerted Vancouver Public Schools staff in Washington state.

    Vancouver and many other districts around the country have turned to technology to monitor school-issued devices 24/7 for any signs of danger as they grapple with a student mental health crisis and the threat of shootings.

    The goal is to keep children safe, but these tools raise serious questions about privacy and security – as proven when Seattle Times and Associated Press reporters inadvertently received access to almost 3,500 sensitive, unredacted student documents through a records request about the district’s surveillance technology.

    The released documents show students use these laptops for more than just schoolwork; they are coping with angst in their personal lives.

    Tim Reiland, 42, center, the parent of daughter Zoe Reiland, 17, right, and Anakin Reiland, 15, photographed in Clinton, Miss., Monday, March 10, 2025, said he had no idea their previous schools, in Oklahoma, were using surveillance technology to monitor the students. (AP Photo/Rogelio V. Solis)

    Students wrote about depression, heartbreak, suicide, addiction, bullying and eating disorders. There are poems, college essays and excerpts from role-play sessions with AI chatbots.

    Vancouver school staff and anyone else with links to the files could read everything. Firewalls or passwords didn’t protect the documents, and student names were not redacted, which cybersecurity experts warned was a massive security risk.

    The monitoring tools often helped counselors reach out to students who might have otherwise struggled in silence. But the Vancouver case is a stark reminder of surveillance technology’s unintended consequences in American schools.

    In some cases, the technology has outed LGBTQ+ children and eroded trust between students and school staff, while failing to keep schools completely safe.

    Gaggle, the company behind the software that tracks Vancouver students’ online activity, believes not monitoring children is like letting them loose on “a digital playground without fences or recess monitors,” CEO and founder Jeff Patterson said.


    Roughly 1,500 school districts nationwide use Gaggle’s software to track the online activity of approximately 6 million students. It’s one of many companies, like GoGuardian and Securly, that promise to keep kids safe through AI-assisted web surveillance.

    The technology has been in high demand since the pandemic, when nearly every child received a school-issued tablet or laptop. According to a U.S. Senate investigation, over 7,000 schools or districts used GoGuardian’s surveillance products in 2021.

    Vancouver schools apologized for releasing the documents. Still, the district emphasizes Gaggle is necessary to protect students’ well-being.

    “I don’t think we could ever put a price on protecting students,” said Andy Meyer, principal of Vancouver’s Skyview High School. “Anytime we learn of something like that and we can intervene, we feel that is very positive.”

    Dacia Foster, a parent in the district, commended the efforts to keep students safe but worries about privacy violations.

    “That’s not good at all,” Foster said after learning the district inadvertently released the records. “But what are my options? What do I do? Pull my kid out of school?”

    Foster says she’d be upset if her daughter’s private information was compromised.

    “At the same time,” she said, “I would like to avoid a school shooting or suicide.”


    Gaggle uses a machine learning algorithm to scan what students search or write online via a school-issued laptop or tablet 24 hours a day, or whenever they log into their school account on a personal device. The latest contract Vancouver signed, in summer 2024, shows a price of $328,036 for three school years – approximately the cost of employing one extra counselor.

    The algorithm detects potential indicators of problems like bullying, self-harm, suicide or school violence and then sends a screenshot to human reviewers. If Gaggle employees confirm the issue might be serious, the company alerts the school. In cases of imminent danger, Gaggle calls school officials directly. In rare instances where no one answers, Gaggle may contact law enforcement for a welfare check.
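    The escalation flow described above can be sketched in code. This is an illustrative reconstruction from the article's description only, not Gaggle's actual implementation; all names, categories, and return values are hypothetical.

    ```python
    from dataclasses import dataclass

    # Indicator categories named in the article; Gaggle's real taxonomy is unknown.
    CATEGORIES = {"bullying", "self-harm", "suicide", "violence"}

    @dataclass
    class Alert:
        category: str      # what the machine-learning pass flagged
        screenshot: bytes  # evidence sent to human reviewers
        imminent: bool     # reviewer judged the danger to be immediate

    def triage(alert: Alert, human_confirms: bool, school_answered: bool) -> str:
        """Return the action taken for one machine-flagged alert."""
        if alert.category not in CATEGORIES:
            return "dismissed"          # not a tracked indicator
        if not human_confirms:
            return "dismissed"          # human reviewer rules it a false alarm
        if alert.imminent:
            if school_answered:
                return "called school officials directly"
            return "law enforcement welfare check"  # rare fallback
        return "alerted school"         # routine notification to staff

    # A confirmed, imminent alert where no school official picks up:
    print(triage(Alert("suicide", b"", imminent=True),
                 human_confirms=True, school_answered=False))
    ```

    The key design point the article describes is the human-in-the-loop step: the algorithm only proposes, and a reviewer decides whether anything leaves the company at all.
    
    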

    A Vancouver school counselor who requested anonymity out of fear of retaliation said they receive three or four student Gaggle alerts per month. In about half the cases, the district contacts parents immediately.

    “A lot of times, families don’t know. We open that door for that help,” the counselor said. Gaggle is “good for catching suicide and self-harm, but students find a workaround once they know they are getting flagged.”


    Seattle Times and AP reporters saw what kind of writing set off Gaggle’s alerts after requesting information about the type of content flagged. Gaggle saved screenshots of activity that set off each alert, and school officials accidentally provided links to them, not realizing they weren’t protected by a password.

    After learning about the records inadvertently released to reporters, Gaggle updated its system. Now, after 72 hours, only those logged into a Gaggle account can view the screenshots. Gaggle said this feature was already in the works but had not yet been rolled out to every customer.

    The company says the links must be accessible without a login during those 72 hours so emergency contacts—who often receive these alerts late at night on their phones—can respond quickly.
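    The 72-hour window works out to a simple expiry check: links open without a login while the window is live, then require a Gaggle account. A minimal sketch of that logic, assuming a per-link creation timestamp (the function name and signature are invented for illustration):

    ```python
    from datetime import datetime, timedelta, timezone

    # Unauthenticated viewing window described in the article.
    WINDOW = timedelta(hours=72)

    def link_requires_login(created_at: datetime, now: datetime) -> bool:
        """True once the unauthenticated viewing window has expired."""
        return now - created_at > WINDOW

    # A late-night alert, as in the emergency-contact scenario:
    issued = datetime(2025, 3, 1, 22, 0, tzinfo=timezone.utc)
    print(link_requires_login(issued, issued + timedelta(hours=1)))    # False: contact can open it
    print(link_requires_login(issued, issued + timedelta(hours=100)))  # True: login now required
    ```

    The trade-off the company describes lives entirely in `WINDOW`: long enough for an emergency contact on a phone at 2 a.m., short enough that a leaked link eventually goes dark.
    
    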

    In Vancouver, the monitoring technology flagged more than 1,000 documents for suicide and nearly 800 for threats of violence. While many alerts were serious, many others turned out to be false alarms, like a student essay about the importance of consent or a goofy chat between friends.

    Foster’s daughter Bryn, a Vancouver School of Arts and Academics sophomore, was one such false alarm. She was called into the principal’s office after writing a short story featuring a scene with mildly violent imagery.

    “I’m glad they’re being safe about it, but I also think it can be a bit much,” Bryn said.

    School officials maintain alerts are warranted even in less severe cases or false alarms, ensuring potential issues are addressed promptly.

    “It allows me the opportunity to meet with a student I maybe haven’t met before and build that relationship,” said Chele Pierce, a Skyview High School counselor.


    Between October 2023 and October 2024, nearly 2,200 students, about 10% of the district’s enrollment, were the subject of a Gaggle alert. At the Vancouver School of Arts and Academics, where Bryn is a student, about 1 in 4 students had communications that triggered a Gaggle alert.

    While schools continue to use surveillance technology, its long-term effects on student safety are unclear. There’s no independent research showing it measurably lowers student suicide rates or reduces violence.

    A 2023 RAND study found only “scant evidence” of either benefits or risks from AI surveillance, concluding: “No research to date has comprehensively examined how these programs affect youth suicide prevention.”

    “If you don’t have the right number of mental health counselors, issuing more alerts is not actually going to improve suicide prevention,” said report co-author Benjamin Boudreaux, an AI ethics researcher.

    In the screenshots released by Vancouver schools, at least six students were potentially outed to school officials after writing about being gay, trans or struggling with gender dysphoria.

    LGBTQ+ students are more likely than their peers to suffer from depression and suicidal thoughts, and turn to the internet for support.

    “We know that gay youth, especially those in more isolated environments, absolutely use the internet as a life preserver,” said Katy Pearce, a University of Washington professor who researches technology in authoritarian states.

    In one screenshot, a Vancouver high schooler wrote in a Google survey form they’d been subject to trans slurs and racist bullying. Who created this survey is unclear, but the person behind it had falsely promised confidentiality: “I am not a mandated reporter, please tell me the whole truth.”

    When North Carolina’s Durham Public Schools piloted Gaggle in 2021, surveys showed most staff members found it helpful.

    But community members raised concerns. An LGBTQ+ advocate reported to the Board of Education that a Gaggle alert about self-harm had led to a student being outed to their family, who were not supportive.

    Glenn Thompson, a Durham School of the Arts graduate, poses in front of the school in Durham, N.C., Monday, March 10, 2025. (AP Photo/Karl DeBlaker)

    Glenn Thompson, a Durham School of the Arts graduate, spoke up at a board meeting during his senior year. One of his teachers promised a student confidentiality for an assignment related to mental health. A classmate was then “blindsided” when Gaggle alerted school officials about something private they’d disclosed. Thompson said no one in the class, including the teacher, knew the school was piloting Gaggle.

    “You can’t just (surveil) people and not tell them. That’s a horrible breach of security and trust,” said Thompson, now a college student, in an interview.

    After hearing about these experiences, the Durham Board of Education voted to stop using Gaggle in 2023. The district ultimately decided it was not worth the risk of outing students or eroding relationships with adults.


    The debate over privacy and security is complicated, and parents are often unaware it’s even an issue. Pearce, the University of Washington professor, doesn’t remember reading about Securly, the surveillance software Seattle Public Schools uses, when she signed the district’s responsible use form before her son received a school laptop.

    Even when families learn about school surveillance, they may be unable to opt out. Owasso Public Schools in Oklahoma has used Gaggle since 2016 to monitor students outside of class.

    For years, Tim Reiland, the parent of two teenagers, had no idea the district was using Gaggle. He found out only after asking, out of privacy concerns, whether his daughter could bring her personal laptop to school instead of being required to use a district-issued one.

    The district refused Reiland’s request.

    When his daughter, Zoe, found out about Gaggle, she says she felt so “freaked out” that she stopped Googling anything personal on her Chromebook, even questions about her menstrual period. She didn’t want to get called into the office for “searching up lady parts.”

    “I was too scared to be curious,” she said.

    School officials say they don’t track metrics measuring the technology’s efficacy but believe it has saved lives.

    Yet technology alone doesn’t create a safe space for all students. In 2024, a nonbinary teenager at Owasso High School named Nex Benedict died by suicide after relentless bullying from classmates. A subsequent U.S. Department of Education Office for Civil Rights investigation found the district responded with “deliberate indifference” to some families’ reports of sexual harassment, mainly in the form of homophobic bullying.

    During the 2023-24 school year, the Owasso schools received close to 1,000 Gaggle alerts, including 168 alerts for harassment and 281 for suicide.

    When asked why bullying remained a problem despite surveillance, Russell Thornton, the district’s executive director of technology, responded: “This is one tool used by administrators. Obviously, one tool is not going to solve the world’s problems and bullying.”


    Despite the risks, surveillance technology can help teachers intervene before a tragedy.

    A middle school student in the Seattle-area Highline School District who was potentially being trafficked used Gaggle to communicate with campus staff, said former superintendent Susan Enfield.

    “They knew that the staff member was reading what they were writing,” Enfield said. “It was, in essence, that student’s way of asking for help.”

    Still, developmental psychology research shows it is vital for teens to have private spaces online to explore their thoughts and seek support.

    “The idea that kids are constantly under surveillance by adults — I think that would make it hard to develop a private life, a space to make mistakes, a space to go through hard feelings without adults jumping in,” said Boudreaux, the AI ethics researcher.

    Gaggle’s Patterson says school-issued devices are not the appropriate place for unlimited self-exploration. If that exploration takes a dark turn, such as making a threat, “the school’s going to be held liable,” he said. “If you’re looking for that open free expression, it really can’t happen on the school system’s computers.”

    Claire Bryan is an education reporter for The Seattle Times. Sharon Lurye is an education data reporter for The Associated Press.

    Contact Hechinger managing editor Caroline Preston at 212-870-8965, on Signal at CarolineP.83 or via email at [email protected].




  • How does the higher education sector sustain digital transformation in tough times?


    Higher education institutions are in a real bind right now. Financial pressures are bearing down on expenditure, and even those institutions not at immediate risk are having to tighten their belts.

    Yet institutions also need to continue to evolve and improve – to better educate and support students, enable staff to do their teaching and research, strengthen external ties, and remain attractive to international students. The status quo is not appealing – not just because of competitive and strategic pressures but also because for a lot of institutions the existing systems aren’t really delivering a great experience for students and staff. So, when every penny counts, where should institutions invest to get the best outcomes? Technology is rarely the sole answer but it’s usually part of the answer, so deciding which technologies to deploy and how becomes a critical organisational capability.

    Silos breed cynicism

    Digital transformation is one of those areas that has historically had a tricky reputation. Why that is probably depends a bit on your standpoint, but my take (as a moderately competent user of technology, though by no means an expert) is that technology procurement and deployment tends to expose some of higher education’s historic vulnerabilities: coordinated leadership and decision-making, effective application of knowledge and expertise, and the ability to anticipate and adapt to change.

    So in the past there’s been a sense, not of this exact scenario, but some variation on it: the most senior leaders don’t really have the knowledge or expertise about technology and are constantly getting sold on the latest shiny thing; the director of IT makes decisions without fully coordinating with the needs and workflows of the wider organisation; departments buy in tech for their own needs but don’t coordinate with others. There might even be academic or digital pedagogy expertise in the organisation whose knowledge remains untapped in trying to get the system to make sense. And then the whole thing gets tweaked and updated to try to adapt to the changing needs, introducing layer upon layer of complexity and bureaucracy and general clunkiness, and everyone heaves a massive sigh every time a new system gets rolled out.

    This picture is of course a cynical one but it’s striking in our conversations about digital transformation with the sector how frequently these kinds of scenarios are described. The gap between the promise of technology and the reality of making it work is one that can breed quite a lot of cynicism – which is the absolute worst basis from which to embark on any journey of change. People feel as if they are expected to conform to the approved technology, rather than technology helping them do their jobs more effectively.

    Towards digital maturity

    Back in 2023 Jisc bit the bullet with the publication of its digital transformation toolkit, which explicitly sought to replace what in some cases had been a rather fragmented siloed approach with a “whole institution” framework. When Jisc chief executive Heidi Fraser-Krauss speaks at sector events she frequently argues that technology is the easy bit – it’s the culture change that is hard. Over the past two years Jisc director for digital transformation (HE) Sarah Knight and her team have been working with 24 institutions to test the application of the digital transformation framework and maturity model, with a report capturing the learning of what makes digital transformation work in practice published last month.

    I book in a call with Sarah because I’m curious about how institutions are pursuing their digital transformation plans against the backdrop of financial pressure and reductions in expenditure. When every penny counts, institutions need to wring every bit of value from their investments, and technology costs can be a significant part of an institution’s capital and non-staff recurrent expenditure.

    “Digital transformation to us is to show the breadth of where digital touches a university,” says Sarah. “Traditionally digital tended to sit more with ‘digital people’ like CIOs and IT teams, but our framework has shown how a whole-institution approach is needed. For those just starting out, our framework helped to focus attention on the breadth of things to consider such as digital culture, engaging staff and students, digital fluency, capability, inclusivity, sustainability – and all the principles underpinning digital transformation.”

    Advocating a “whole institution approach” may seem counter-intuitive – making what was already a complicated set of decisions even more so by involving more people. But without creating a pipeline of information flow up, down and across the institution, it’s impossible to see what people need from technology, or understand how the various processes in place in different parts of the university are interacting with the technologies available to see where they could be improved.

    “The digital maturity assessment brought people into the conversation at different levels and roles. Doing that can often show up where there is a mismatch in experience and knowledge between organisational leaders and staff and students who are experiencing the digital landscape,” says Sarah.

    Drawing on knowledgeable voices whose experience is closer to the lived reality of teaching and research is key. “Leaders are saying they don’t need to know everything about digital but they do need to support the staff who are working in that space to have resources, and have a seat at the table and a voice.”

    Crucially, working across the institution in this way generates an evidence base that can then be used to drive decision-making about the priorities for investment of resources, both money and time. In the past few years, some institutions have been revising their digital strategies and plans, recognising that with constrained finances, they may need to defer some planned investments, or sequence their projects differently, mindful of the pressures on staff.

    For Sarah, leaders who listen, and who assume they don’t already know what’s going on, are those who are the most likely to develop the evidence base that can best inform their decisions:

    “When you have leaders who recognise the value of taking a more evidence-informed approach, that enables investment to be more strategically targeted, so you’re less likely to see cuts falling in areas where digital is a priority. Institutions that have senior leadership support, data informed decision making, and evidence of impact, are in the best place to steer in a direction that is forward moving and find the core areas that are going to enable us to reach longer term strategic goals.”

    In our conversation I detect a sense of a culture shift behind some of the discussions about how to do digital transformation. Put it like this: nobody is saying that higher education leaders of previous decades didn’t practise empathy and careful listening, or value an evidence base. It’s just that when times are tough, these qualities come to the fore as critical tools for institutional success.

    Spirit of collaboration

    There’s a wider culture shift going on in the sector as well, as financial pressures and the sense that a competitive approach is not serving higher education well turn minds towards where the sector could be more collaborative in its approach. Digital is an area that can sometimes be thought of as a competitive space – but arguably that’s mistaking the tech for the impact you hope it will have. Institutions working on digital transformation are better served by learning from others’ experience, and finding opportunities to pool resources and risk, than by going it alone.

    “Digital can be seen as a competitive space, but collaboration outweighs and has far more benefits than competition,” says Sarah. “We can all learn together as a sector, as long as we can keep sharing that spirit of internal and external collaboration we can continue that momentum and be stronger together.”

    This is especially relevant for those institutions whose leaders may secretly feel they are “behind the curve” on digital transformation and experience a sense of anxiety that their institution needs to scramble to “catch up”. The metaphor of the race is less than helpful in this context, creating anxiety rather than a sense of strategic purpose. Sarah believes that no institution can legitimately consider itself “ahead of the curve” – and that all should have the opportunity to learn from each other:

    “We are all on a journey, so some might be ahead in some aspects but definitely not all,” says Sarah. “No-one is behind the curve but everyone is approaching this in a slightly different way, so don’t feel ‘we have to do this ourselves’; use networks and seek help – that is our role as Jisc to support the sector.”

    Jisc is hosting Digifest in Birmingham on 11-12 March – sign up here for online access to sessions.


  • How is artificial intelligence actually being used in higher education?


    With a wide range of applications, including streamlining administrative tasks and tailoring learning experiences, AI is being used in innovative ways to enhance higher education.

    Course design and content preparation

    AI tools are changing the way academic staff approach course design and content preparation. By leveraging AI, lecturers can quickly generate comprehensive plans, create engaging sessions, and develop quizzes and assignments.

    For instance, tools like Blackboard Ultra can create detailed course plans and provide suggestions for content organisation and course layout. They can produce course materials in a fraction of the time it would traditionally take and suggest interactive elements that could increase student engagement.

    AI tools excel at aligning resources with learning outcomes and institutional policies. This not only saves time but also allows lecturers to focus more on delivering high-quality instruction and engaging with students.

    Enhancing learning experience

    AI and virtual reality (VR) scenarios and gamified environments are offering students unique, engaging learning experiences that go beyond traditional lectures. Tools like Bodyswaps use VR to simulate realistic scenarios for practicing soft and technical skills safely. These immersive and gamified environments enhance learning by engaging students in risk-free real-world challenges and provide instant feedback, helping them learn and adjust more effectively.

    Self-tailored learning

    AI also plays a role in supporting students to tailor learning materials to meet their individual and diverse needs. Tools like Jamworks can enhance student interaction with lecture content by converting recordings into organised notes and interactive study materials, such as flashcards.

    Similarly, NotebookLM offers flexibility in how students engage with their courses by letting them generate content in their preferred form, such as briefing documents, podcasts, or a more conversational approach. These tools empower students to take control of their learning processes, making education more aligned with their individual learning habits and preferences.

    Feedback and assessment

    Feedback and assessment is the area most frequently cited when discussing how AI could reduce workload. Marking tools like Graide, Keath.ai, and Learnwise are changing the process by accelerating the marking phase. These tools leverage AI to deliver consistent and tailored feedback, providing students with clear, constructive insights to enhance their academic work. However, the adoption of AI in marking raises valid ethical concerns, such as the loss of human judgement and whether AI can mark consistently and fairly.

    Supporting accessibility

    AI can play a crucial role in enhancing accessibility within educational environments, ensuring that learning materials are inclusive and accessible to all students. By integrating AI-driven tools such as automated captioning, and text-to-speech applications, universities can significantly improve the accessibility of digital resources.

    AI’s capability to tailor learning materials is particularly beneficial for students with diverse educational needs. It can reformat text, translate languages, and simplify complex information to make it more digestible. This ensures that all students, regardless of their learning abilities or language proficiency, have equal opportunities to access and understand educational content.

    Despite the benefits, the use of AI tools like Grammarly raises concerns about academic integrity. These tools have the potential to enhance or even alter students’ original work, which may lead to questions about the authenticity of their submissions. This issue highlights the need for clear guidelines and ethical considerations in the use of AI to support academic work without compromising integrity.

    Another significant issue is equity of access to these tools. Many of the most effective AI-driven accessibility tools are premium services, which may not be affordable for all students, potentially widening the digital divide.

    Student support – chatbots

    AI chatbots are increasingly recognised as valuable tools in the tertiary education sector, streamlining student support and significantly reducing staff workload. These increasingly sophisticated systems are adept at managing a wide array of student queries, from routine administrative questions to more detailed academic support, thereby allowing human resources to focus on tasks requiring more nuanced and personal interactions. They can be customised to meet the specific needs of a university, ensuring that they provide accurate and relevant information to students.

    Chatbots such as LearnWise are designed to enhance student interactions by providing more tailored and contextually aware responses. For instance, if a student browsing a university’s website expresses interest in gaming, the chatbot can suggest relevant courses, highlight the available facilities and point to extracurricular activities, integrating seamlessly with the student’s interests and academic goals. This level of tailoring improves both the quality of the interaction and the student experience.

    Administrative efficiency

    AI is positively impacting the way administrative tasks are handled within educational institutions, changing the way everyday processes are managed. By automating routine and time-consuming tasks, AI technologies can alleviate the administrative load on staff, allowing them to dedicate more time to strategic and student-focused activities.

    AI tools such as Copilot and Gemini can help staff draft, organise, and prioritise emails. These tools can suggest responses based on the content received, check the tone of emails, manage scheduling by integrating with calendar apps, and remind lecturers of pending tasks or follow-ups, enhancing efficiency within the institution.

    Staff frequently deal with extensive documentation, from student reports to research papers and institutional policies. AI tools can assist in checking, proofreading and summarising papers and reports, and can help with data analysis, generating insights, graphs and graphics to help make data more easily digestible.

    How is AI being used in your institution?

    At Jisc we are collating practical case studies to create a comprehensive overview of how AI is being used across tertiary education. This includes a wide range of examples supporting the effective integration of AI into teaching and administration. The collection will be used to highlight best practice, support those just getting started with AI, help overcome challenges being faced across the sector, and showcase the opportunities available to all.

    We want to hear how AI is being used at your organisation, from enhancing everyday tasks to complex and creative use cases. You can explore these resources and find out how to contribute by visiting the Jisc AI Resource Hub.

    For more information about the use of digital technology and AI in tertiary education, sign up for on-demand access to key sessions from Jisc’s flagship teaching and learning event, Digifest, running 11–12 March.


  • Building tiny chips that can handle enormous data

    Building tiny chips that can handle enormous data

    In the not-so-distant future, major disasters, such as wildfires in California, floods in Spain or an earthquake in Japan, will be monitored, and perhaps anticipated, by a technology so small it is difficult to even imagine.

    This new technology, called quantum computing, is enabled by nanotechnology — a way of designing technology by manipulating atoms and molecules. Paradoxically, this ultra small technology enables the processing of massively large data sets needed for complex artificial intelligence algorithms.

    There is a growing consensus that AI will quickly change almost everything in the world.

    The AIU cluster, a collection of computer resources used to develop, test and deploy AI models at the IBM Research Center in upstate New York. (Credit: Enrique Shore)

    The AI many people already use — such as ChatGPT, Perplexity and now DeepSeek — is based on traditional computers. Processing the data needed to answer the questions put to these AI programs, and handling the tasks assigned to them, takes an enormous amount of energy. For example, the electricity OpenAI uses to handle ChatGPT’s prompts in the United States is estimated to cost some $139.7 million per year.

    Several large private companies, including Google, Microsoft and IBM, are leading the way in this development. The International Business Machines Corp., known as IBM, currently manages the largest industrial research organization, with specialized labs located all over the world.

    Glimpsing the world’s most powerful computers

    The global headquarters of IBM Research is the company’s Thomas J. Watson Research Center, about one hour north of New York City. The impressive building was designed in 1961 by Eero Saarinen, the iconic Finnish-American architect who also designed Dulles International Airport in Washington, D.C., the Swedish Theater in Helsinki and the U.S. Embassy in Oslo.


    At the entrance of the IBM research headquarters a simple statement sums up what research scientists are trying to achieve at IBM: “Inventing what’s next.”

    At the heart of the IBM Research Center is a “Think Lab” where researchers test AI hardware advancements using the latest and most powerful quantum computers. News Decoder recently toured these facilities.

    There, Shawn Holiday, a product manager at the lab’s Artificial Intelligence Unit (AIU), said the challenge is scaling down semiconductors to not only increase performance but also improve power efficiency.

    IBM was the first to develop a new transistor geometry called gate-all-around. Basically, each transistor has multiple channels that run parallel to the chip’s surface, and each channel is about two nanometers thick. To grasp how small that is, consider that one nanometer is about a billionth of a meter.

    This new technology is not just a faster or better version of traditional computers but a totally new way of processing information. It is not based on the traditional bits at the heart of modern binary computers (bits can be in either the state zero or one) but on qubits, short for quantum bits, a different and more complex concept.


    The IBM Quantum System Two, a powerful quantum computer, operating in the IBM Research Center in Yorktown Heights in upstate New York. (Credit: Enrique Shore)

    A quantum processor with more gates can handle more complex quantum algorithms by allowing for a greater variety of operations to be applied to the qubits within a single computation.

    A new way of processing data

    The change is much more than a new stage in the evolution of computers. Nanotechnology has enabled, for the first time, an entirely new branch of computing, one exponentially more advanced than a mere faster version of traditional machines.

     


    A replica of the IBM Quantum System One, the first integrated commercial quantum computer, on display at the IBM Research Center in Yorktown Heights, New York. (Credit: Enrique Shore)

    The quantum bit is a basic unit of quantum information that can have many more possibilities, including being in all states simultaneously — a state called superposition — and combining with others, called entanglement, where the state of one qubit is intimately connected with another. This is, of course, a simplified description of a complex process that could hold massively more processing power than traditional computers.
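    The idea of superposition can be made concrete with a few lines of Python. The sketch below uses NumPy as a toy simulation — real quantum hardware works nothing like this, and the variable names are purely illustrative:

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is a 2-component complex
# vector; the basis states |0> and |1> play the roles of 0 and 1.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes:
# a 50/50 chance of reading 0 or 1 from a single qubit.
probs = np.abs(psi) ** 2
print(probs)
```

    Running this prints equal probabilities of one half for each outcome, which is what "being in all states simultaneously" cashes out to when the qubit is finally measured.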

    The current architecture of quantum computers requires costly, large and complex devices refrigerated to extremely low temperatures, close to absolute zero (-459°F, or -273°C), in order to function correctly. That extreme cold changes the state of certain materials so that they conduct electricity with practically zero resistance and no noise.

    Even though there are some prototypes of desktop quantum computers with limited capabilities that could eventually operate at room temperature, they are unlikely to replace traditional computers in the foreseeable future; rather, they will operate jointly with them.

    IBM Research is a growing, interconnected network of laboratories around the world.

    While IBM is focused on what it calls a hybrid, open and flexible cloud (open-source platforms that can interact with many different systems and vendors), it is also advancing its own semiconductor research, where its goal is to push the absolute limits of transistor scaling.

    Shrinking down to the quantum realm

    At the lowest level of a chip, you have transistors. You can think of them as switches: almost like a light switch, they can be off or on. But instead of a mechanical switch, a voltage turns them on and off. When a transistor is off, it represents a zero; when it is on, a one.


    IBM Heron, a 133-qubit tunable-coupler quantum processor. (Credit: Enrique Shore)

    This is the basis of all digital computation. What’s driven this industry for the last 60 years is a constant shrinking of the size of transistors to fit more of them on a chip, thereby increasing the processing power of the chip.

    IBM produces wafers with foundry partners such as Samsung and a Japanese startup called Rapidus. Consider that the two-nanometer semiconductor chips Rapidus aims to produce are expected to deliver up to 45% better performance and use 75% less energy than the seven-nanometer chips on the market in 2022.

     


    Dr. George Tulevski, IBM Research scientist and manager of the IBM Think Lab, stands next to a world map showing their different labs at the IBM Research Center in Yorktown Heights in New York. (Credit: Enrique Shore)

    IBM predicts that there will be about a trillion transistors on a single die by the early 2030s. (A die is the square of silicon, cut from a wafer, that contains an integrated circuit.) To put that in perspective, consider that Apple’s M4 chip for its latest iPad Pro has 28 billion transistors.
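    The scale of that prediction is easy to check with back-of-the-envelope arithmetic, using only the figures quoted in this article:

```python
# Figures quoted above: a projected one trillion transistors per die
# by the early 2030s, versus 28 billion in Apple's M4 chip today.
projected = 1_000_000_000_000
m4 = 28_000_000_000

# A trillion-transistor die would hold roughly 36 times as many
# transistors as the M4.
ratio = projected / m4
print(round(ratio))  # 36
```

    In other words, the predicted die would pack about 36 of today’s most advanced consumer chips into the same square of silicon.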

    There may be a physical limit to the shrinking of transistors, but if they can no longer be made smaller, they could be stacked in a way that the density per area goes up.

    With each of these technology nodes there is a tradeoff between power and performance: depending on whether you tune for one or the other, you get either roughly a 50% increase in efficiency or a 50% increase in performance.

    A roadmap for advanced technology

    The bottom line is that doubling the transistor count means being able to do more computations with the same area and the same power.


    Dr. Jay M. Gambetta, IBM’s vice president in charge of the company’s overall quantum initiative, explains the expected quantum development roadmap. (Credit: Enrique Shore)

    The roadmap of this acceleration is impressive. Dr. Jay Gambetta, IBM’s vice president in charge of the company’s overall quantum initiative, showed us a table forecasting processing capabilities increasing from the current 5,000 gates to an estimated 100 million gates by 2029, and possibly one billion gates by 2033.

    A quantum gate is a basic quantum circuit operating on a small number of qubits. Quantum logic gates are the building blocks of quantum circuits, just as classical logic gates are the building blocks of conventional digital circuits.
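    To sketch how gates combine into a circuit, the toy NumPy model below applies two common gates, a Hadamard followed by a CNOT, to two qubits, producing an entangled Bell pair. This is only an illustration of the mathematics; it is not how a real quantum processor is programmed:

```python
import numpy as np

# Two-qubit state |00> represented as a 4-component complex vector.
state = np.zeros(4, dtype=complex)
state[0] = 1

# Hadamard gate on the first qubit (identity on the second).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
state = np.kron(H, I) @ state

# CNOT gate: flips the second qubit when the first qubit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = CNOT @ state

# The result is the Bell state (|00> + |11>)/sqrt(2): the qubits are
# entangled, so measuring one instantly fixes the other.
print(np.abs(state) ** 2)
```

    The printed probabilities are one half for the outcomes 00 and 11 and zero for 01 and 10: the two qubits always agree, which is the entanglement described earlier.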

    Quantum computing’s energy demands, however, are expected to diminish radically with new, more efficient machines. The old assumption that more capacity requires more power is being revised, and efficiency should improve greatly in the near future; otherwise this development would not be sustainable.

    A practical example of a current project made possible by quantum computing and AI is Prithvi, a groundbreaking geospatial AI foundation model for satellite data developed by IBM and NASA.

    The model supports tracking changes in land use, monitoring disasters and predicting crop yields worldwide. At 600 million parameters, its current version 2.0, introduced in December 2024, is already six times bigger than its predecessor, first released in August 2023.

    It has practical uses like analyzing the recent fires in California, the floods in Spain and the crops in Africa — just a few examples of how Prithvi can help understand complex current issues at a rate that was simply impossible before.

    The impossible isn’t just possible. It is happening now.


     

    Three questions to consider:

    1. How is quantum computing different from traditional computing?
    2. What is the benefit of shrinking the size of a transistor?
    3. If you had access to a supercomputer, what big problem would you want it to solve?



  • Here’s where AI, VR and AR are boosting learning in higher ed

    Here’s where AI, VR and AR are boosting learning in higher ed

    Australian universities and TAFEs are embracing and combining emerging technologies like artificial intelligence (AI), virtual reality (VR) and augmented reality (AR).

    These innovations are reshaping the further and higher education sectors, offering more engaging, accessible, data-based learning experiences for students, educators and institutions alike.

    As students and institutions seek value amidst economic and work-life challenges, these technologies are crucial in delivering sustainable and scalable skilling and workforce-development goals. Integrating AI, VR, and AR can provide more personalised and cost-effective learning pathways for students facing daily pressures, making education more accessible and financially viable.

    The transformative role of AI in personalised learning

    AI is becoming a game-changer in Australian education by enabling personalised learning and providing data-driven insights. AI-powered platforms can analyse the complex interplay of factors impacting student performance and customise immersive content delivery to improve persistence, resilience and success.

    This integrated approach can serve personalised springboard content that matches students’ strengths, promotes growth in areas of weakness, and builds both capability and confidence.

    In this way, AI is not just about student learning; it also directly benefits teachers and professional staff. It streamlines the development of educational materials, from video and interactive content to branched lessons and adaptive learning paths.

    A few Australian higher and vocational education institutions have already demonstrated this by exploring the affordances of AI-driven platforms to offer personalised learning programs tailored to students’ career goals and development needs.

    Researchers from the University of South Australia are demonstrating how AI can enhance students’ learning outcomes, equip teachers with advanced education tools and overhaul the education sector for good.

    At the University of Sydney, AI-driven learning platforms offer personalised learning experiences via the university’s generative AI platform, Cogniti, which shows that generative AI can be a powerful way to support teachers and their teaching rather than supplant them.

    Immersive learning through VR

    Virtual reality also continues to revolutionise Australian further and higher education, providing immersive learning environments that make complex subjects more accessible and engaging.

    From medical schools to engineering programs and advanced manufacturing, VR allows students to engage with practical scenarios that realistically present workplace problems, assess skills application and let them practise complex tasks.

    VR shows tremendous promise in scaling high-quality, safe, immersive learning-by-doing training at TAFE NSW.

    Its Ultimo campus utilises a high-tech, remarkably lifelike canine mannequin to provide aspiring veterinary nurses with invaluable hands-on training.

    Recently imported from the USA, this highly advanced model enables animal studies and veterinary nursing students to develop essential clinical skills, including intubation, CPR, bandaging and ear cleaning.

    By implementing VR as a training tool, TAFE NSW Ultimo plumbing students can learn, in a safe and protected environment, to recognise the risk of return electrical current travelling via copper pipes into a residence, which can cause serious, even fatal, electric shock.

    Additionally, its welding students were able to identify and solve potentially hazardous scenarios when preparing for welding work.

    AR brings practical training to life

    AR is another immersive technology revolutionising Australian education by deepening the interaction between students and their learning materials. AR overlays digital content onto the real world, making abstract concepts more tangible and understandable.

    AR is broadly applicable across diverse fields such as healthcare, technical trades, and construction, allowing students to practice and refine their skills in a controlled, simulated environment.

    At TAFE Queensland, welding students use AR to identify and solve potentially hazardous scenarios when preparing for welding work. 

    With a screen inside the helmet, students position their virtual welding torch, with sparks flying like in real life, against a plastic board and press the torch trigger to see the welds they have made.

    The screen flashes red when they are incorrect and gives them a score at the end. Using AR in welding has reduced raw material wastage by 68 per cent at a time of scarcity.

    TAFE Box Hill Institute’s Advanced Welder Training Centre is equipped with the latest augmented reality simulators, allowing students to use best-practice technology and quality systems in a hands-on environment.

    It was developed in collaboration with Weld Australia, which represents Australian welding professionals, and will help address the current shortage of qualified and skilled welders in Australia.

    Monash University’s Engineering Student Pilot Plant is designed to reflect real-world industrial environments and requirements.

    AR experiences are being developed in Vuforia Studio using 3D CAD models of the pilot plant, enabling visualisation of proposed equipment before installation.

    These AR interfaces will integrate with Internet of Things (IoT) devices, Digital Twin models and process simulations, creating an AR-based Human Machine Interface (HMI) that enhances on-site accessibility by providing remote, simultaneous interaction with the physical equipment and its Digital Twin.

    The future of Australian further and higher education

    The future of further and higher education in Australia will likely see these advanced digital technologies integrated further into the curriculum, offering new opportunities and skills for students to thrive in a competitive, tech-driven environment.

    Australia’s educational institutions have a rich history of effectively using educational technology to further learning and teaching.

    Assessing and leveraging rapidly evolving tools like AR and generative AI will ensure Australian institutions remain at the forefront of global education by providing students with the relevant and engaging learning experiences they need to succeed.

    Tony Maguire is regional director of ANZ at global learning technology company D2L.


  • 5 of the biggest education trends for 2025

    5 of the biggest education trends for 2025


    As we welcome a new year, educators and industry leaders are excited to discover the biggest education trends for 2025. The past few years have been characterized by fresh and innovative solutions for learning, as well as transformative, technology-forward approaches to education.  

    Each year, we like to look ahead and anticipate the biggest upcoming education trends. There are many topics education professionals can expect to be at the center of the conversation in 2025–from new perspectives on artificial intelligence for education to the emergence of nontraditional school models amid an increasingly competitive enrollment environment. 

    For 2025, schools and districts are focused on making learning more engaging for students, creating a more positive environment for educators, and transforming school culture to meet the diverse needs of the school community. As schools work to accomplish these goals, we expect to see an expansion of AI and other emerging technologies in the classroom, enhanced professional development and support for teachers, and more individualized learning opportunities for students. 

    Here are five of the biggest education trends for 2025: 

    1. Nontraditional school models 

      Everything from career opportunities, technology, and the world around us has changed significantly over the past decade, yet the traditional model of public schools in the U.S. has remained largely unchanged for generations. As this industrial-age school model persists, many students feel bored and disengaged with their learning.  

      When the COVID-19 pandemic caused school interruptions in 2020, many families decided it was time to pivot to new and nontraditional learning opportunities for their children. Since 2019, over 1 million students–the equivalent of one student from every class in the country–have left the conventional classroom to seek out different educational approaches and more innovative learning environments. The National Center for Education Statistics projects that public schools, including public charter schools, will lose an additional 2.4 million students by 2031.  

      Today’s students desire more individualized learning approaches, which empower them to use their creativity, explore their passions, and engage with their peers in more collaborative ways. In 2025, we will see a greater emergence of nontraditional school models that center student engagement, collaboration, and creativity, and prepare learners to graduate into a continually-evolving workforce.  

      Some of these emerging nontraditional education models include microschools, online and hybrid learning programs, and project-based or student-led schools, as well as long-established nontraditional school programs such as homeschooling, Montessori, and career and technical education schools. In 2025, we also anticipate that public schools will step up to meet the diverse needs of students through innovative approaches, mirroring some of the elements of these nontraditional school models in order to maintain enrollment, enhance engagement, and equip students with applicable career-ready skills. 

      2. Expanded use of AI in education 

        As we predicted last year, artificial intelligence (AI) has become prevalent in the educational space, and this emerging technology shows no sign of stopping its rapid growth as we make our way into 2025. This year, we expect the conversation around AI to shift, reflecting a more widespread acceptance of the technology as a beneficial tool to enhance education and productivity. 

        In 2025, schools will continue to integrate more AI into the curriculum, guiding students to use it appropriately to enhance their learning. Many schools and districts have already developed formal AI school policies and modified student codes of conduct to ensure safe, effective, and ethical use of AI tools in the classroom.  

        Furthermore, many educators are now taking the initiative to incorporate AI tools into their lesson plans to help students build familiarity with the technology. Introducing students to AI in a safe and controlled environment enables them to learn how to use it effectively and ethically. Foundational AI skills are already regarded as essential for college and many careers. 

        Because AI is a fairly new technology for everyone, including educators, we anticipate that more schools will implement AI professional development opportunities this year, enabling teachers to deliver more effective AI instruction. Some schools are also beginning to employ AI tools for administrative productivity, which will require training and guidance to ensure educators and staff can successfully integrate these tools into their work. 

        3. Targeted support for educators  

          Over the past five years, many districts have been focused on allocating Elementary and Secondary School Emergency Relief (ESSER) funding to implement new educational programs and tools, support student wellbeing, and overcome learning loss. Now that the final ESSER deadline has passed, 2025 will see schools and districts shift their attention to providing targeted support directly to educators.  

          With all of the new technology, refreshed learning spaces, and updated curriculum districts have recently introduced, professional development is essential to ensure effective implementation of these enhancements. In 2025, schools will incorporate new professional development programs that empower educators to foster engaged learners. By providing the tools and resources teachers need to be successful, schools can help educators improve their productivity and attain professional goals, while still keeping teacher wellbeing as a top priority. 

          Teachers are the primary influencers of the K-12 educational experience, so supporting educators is a holistic approach that benefits the entire school community. To address rising workloads, schools will implement new tools and strategies to support teacher efficacy and wellbeing. Some schools are even piloting automated and AI-powered technologies to take repetitive and administrative tasks off teachers’ plates, freeing up invaluable time for them to connect with students and focus on teaching.  

          Additionally, districts have begun to recognize the importance of a healthy work-life balance, as many teachers have left the profession over the past several years. In 2025, districts will continue to explore ways to cultivate a more positive job experience for teachers. Teachers want solutions for student behavioral issues, more attentive leadership teams, and more manageable workloads. Schools will work to improve these matters, while maintaining aspects of the job teachers value most, including school culture, opportunities for professional learning and certifications, and STEM and arts programs. 

          4. A focus on school and district culture 

            With a growing list of education options, students and their families are seeking out learning environments that not only provide high-quality curriculum and resources, but also align with their values and prioritize school-home communication. In this increasingly competitive enrollment environment, cultivating a positive culture and connected school community are the qualities that make schools stand out.  

            Funding and resources are directly related to the number of students at each school, so cultivating an inviting school culture is key. In 2025, schools and districts will take time to refine their school brand in order to attract and maintain students. School leaders will focus on creating more opportunities to engage with students and families, implementing new communications tools, initiatives, and events that bring the school community together. 

            In the past few years, some K-12 administrators have piloted mobile teaching stations to increase their visibility and daily impact throughout their school. We anticipate more school leaders will embrace this approach in 2025, enabling them to build stronger relationships with students and teachers. By working from mobile workstations, administrators can directly engage with students and staff, making face-to-face connections on a daily basis. Frequent positive interactions with school leadership help students, teachers, and families stay engaged with the school community, promoting a culture of connection and support. 

            5. Universal design for learning 

              Today’s students are making more choices about how and where they want to learn than ever before. Universal design for learning (UDL) promotes achievement among diverse student bodies by giving each student access to resources and environments that help them learn. Accessibility goes far beyond ADA compliance, and schools are recognizing this through the application of UDL across the learning experience. Understanding the diverse needs of students is crucial for creating learning experiences that are inclusive and supportive. 

              In 2025, UDL will be at the center of creating comfortable and engaging learning environments that accommodate all students’ needs. For instance, more schools are implementing sensory spaces, ensuring neurodiverse learners have a safe and comfortable space to self-regulate throughout the school day. These spaces don’t just serve neurodivergent students–all students benefit from having areas at school that are dedicated to supporting wellbeing. 

              As in previous years, accessibility and equity will continue to be prominent topics in 2025, but the conversation will pivot to focus on ways UDL can positively impact curriculum. UDL emphasizes providing students with multiple, flexible types of engagement, different ways of presenting information, and multiple ways to demonstrate their understanding in the classroom. This practice supports students who are neurodivergent and/or experience learning challenges, but also improves the learning experience for neurotypical students. 



  • Crafting technology-driven IEPs

    Crafting technology-driven IEPs


    Individualized Education Plans (IEPs) have been the foundation of special education for decades, and the process by which these documents are written has evolved over the years.

    As technology has advanced, so has the way documents are written. Before programs existed to streamline the process, creating IEPs was a daunting task of paper and pencil. And beyond the writing process itself, IEPs themselves are becoming technology-driven.

    Enhancing IEP goal progress with data-driven insights using technology: A variety of learning platforms can monitor a student’s performance in real time, tailoring instruction to individual needs and flagging areas for improvement. Data from these programs can be used to create students’ annual IEP goals. This study mentions that the ReadWorks program, used for progress monitoring IEP goals, has 1.2 million teachers and 17 million students using its resources, which provide content, curricular support, and digital tools. ReadWorks provides all of its resources free of charge, with both printed and digital versions available to teachers and students (Education Technology Nonprofit, 2021).

    Student engagement and involvement with technology-driven IEPs: Technology-driven IEPs can also empower students to take an active role in their education plan. According to this study, research shows that special education students benefit from educational technology, especially in concept teaching and in practice-feedback type instructional activities (Carter & Center, 2005; Hall, Hughes & Filbert, 2000; Hasselbring & Glaser, 2000). It is vital for students to take ownership in their learning. When students on an IEP reach a certain age, it is important for them to be the active lead in their plan. Digital tools that are used for technology-driven IEPs can provide students with visual representations of their progress, such as dashboards or graphs. When students are given a visual representation of their progress, their engagement and motivation increases.

    Technology-driven IEPs make learning fun: This study discusses technology-enhanced and game-based learning for children with special needs. Gamified programs, virtual reality (VR), and augmented reality (AR) change the learning experience from traditional to transformative. Gamified programs are intended to motivate students with rewards, personalized feedback, and competition with leaderboards and challenges to make learning feel like play. Virtual reality gives students an immersive experience that they would otherwise only be able to have outside of the classroom. It allows for deep engagement and experiential learning via virtual field trips and simulations, without the risk of visiting dangerous places or costly field trip fees that not all districts or students can afford. Augmented reality allows students to visualize abstract concepts such as anatomy or 3D shapes in context. All these technologies align with technology-driven IEPs by providing personalized, accessible, and measurable learning experiences that address diverse needs. These technologies can adapt to a student’s individual skill level, pace, and goals, supporting their IEP.

    Challenges with technology-driven IEPs: Although there are many benefits to technology-driven IEPs, it is important to address the potential challenges to ensure equity across school districts. Access to technology in underfunded districts can be limited without proper investment in infrastructure, devices, and network connectivity. Student privacy and data must also be properly protected: with the technologies behind technology-driven IEPs, school districts must take into consideration laws such as the Family Educational Rights and Privacy Act (FERPA).

    The integration of technology into the IEP process represents a shift from a traditional process to a transformative one. Technology-driven IEPs create more student-centered learning experiences by implementing digital tools, enhancing collaboration, and personalizing learning. These experiences enhance student engagement and motivation and allow students to take control of their own learning, making them leaders in their IEP process. However, as technology continues to evolve, it is important to address the equity gap that may arise in underfunded school districts.


  • Building and Sustaining an AI-informed Institution


    Title: Navigating Artificial Intelligence in Postsecondary Education: Building Capacity for the Road Ahead

    Source: Office of Educational Technology, U.S. Department of Education

    In response to the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the Department of Education’s new brief, Navigating Artificial Intelligence in Postsecondary Education, provides recommendations for leaders at higher education institutions. The brief is divided into two main parts: one offering policy recommendations and one reviewing the literature and research.

    The report outlines five recommendations:

    Develop clear policies for the use of AI in postsecondary settings. AI’s potential uses are vast, from admissions and enrollment to other decision-making processes, so it is important to ensure that AI is not reifying bias. Stakeholders should consider the potential utility of an AI Bill of Rights or the National Institute of Standards and Technology’s AI Risk Management Framework in shaping policies for their campuses. They should also consider affirmative consent and disclosure policies as they relate to AI, as well as embedding the characteristics that make AI trustworthy.

    Generate infrastructure that supports the use of AI in pedagogy, student support, and data tracking. Incentivizing cross-department collaboration and faculty involvement in the development of AI tools is key. It is also important to integrate social and behavioral science research into evaluation of AI.

    Continually assess AI tools. This includes testing for equity and accounting for any bias. AI should go through a continuous feedback loop, and institutions need a strategy for maintaining an appropriate balance of human supervision. Additionally, evaluations should be comprehensive and draw on diverse stakeholders.

    Collaborate with partners for the development and testing of AI across different educational uses. Leaders are tasked with finding and building relationships with partners. These partnerships should aim to ensure best practices and promote equitable AI.

    Programs should grow and develop alongside the job market’s increased demand for AI. Leaders must consider how to keep up with the evolving demand for AI, as well as how to integrate across all disciplines.

    The full report is available from the Office of Educational Technology.

    —Kara Seidel




  • Writing notes instead of typing pits scholars against each other


    Imagine you’re a student in high school or college. Class is about to start. You are faced with a notable dilemma: Should you whip out a notebook or a laptop to take notes?

    The answer is not so simple. A year ago, paper and pen seemed to be the winner when the journal Frontiers in Psychology published a Norwegian study that documented how different areas of the brain were communicating more frequently when students were writing by hand. When students were typing, the brain was not nearly so active. This extra brain activity, the neuroscientists wrote, is “beneficial for learning.” 

    The study ricocheted around the world. Almost 200 news stories promoted the idea that we remember things better when we write them down by hand instead of typing. It confirmed what many of us instinctively feel. That’s why I still take notes in a notebook even though I can hardly read my chicken scratch.

    Yet earlier this month, the same academic journal published a scathing rebuttal to the handwriting study. A pair of scientists in Spain and France pointed out that none of the Norwegian college students was asked to learn anything in the laboratory experiment. “Drawing conclusions on learning processes in children in a classroom from a lab study carried out on a group of university students that did not include any type of learning seems slippery at best,” the critics wrote.

    The Norwegian study asked 36 college students in their early 20s to write words from the game Pictionary using either a digital pen on a touchscreen or typing on a keyboard. The participants wore stretchy hair nets studded with electrodes to capture their brain activity. The scientists documented the differences between the two modes of writing. 

    Neither mode approximated real-life conditions. The students were instructed to write in cursive without lifting the stylus from the screen. And they were only allowed to type with their right index finger.

    The critics also questioned whether elevated brain activity is proof of better learning. Increased brain activity could equally be interpreted as a sign that handwriting is slower and more taxing than typing. We don’t know.

    I contacted Audrey van der Meer, one of the co-authors of the Norwegian study who runs a neuroscience lab at the Norwegian University of Science and Technology in Trondheim. She pointed out that her critics promote the use of keyboards in education, and so they may not be unbiased. But she admitted that her study didn’t test whether students learned anything. 

    Van der Meer is conducting a fresh experiment that involves actual learning with 140 teenagers. She had the high school students watch a recorded lecture. Half of them were randomly assigned to take notes by hand, using a digital pen and touchscreen, and the other half typed their notes. Afterward, they all took the same exam graded by teachers at the school. 

    So far, she’s noticed clear differences in note-taking styles. Those who typed their notes wrote significantly more words, often transcribing parts of the lecture verbatim. They didn’t make any drawings. Those who used a digital pen mainly wrote key words and short sentences and produced two drawings, on average. 

    According to van der Meer, students who use the keyboard are writing down everything the teacher says “because they can.” But, she said in an email, “the information appears to be coming in through the ears and, without any form of processing, going out through the fingertips.” She added that when taking notes by hand, “it is impossible to write down everything, so students have to process the incoming information, summarize it, and link it to knowledge they already have.” That helps the “new information to stick better, resulting in better retention.”

    Van der Meer said she could not yet share the exam results with me as she is still analyzing them. She explained that there are “many confounding variables” that make it difficult to tell if those who used handwritten notes performed better on the exam.

    Even the pro-typing scientists admit that handwriting is important. Previous research has shown that writing letters by hand, compared to typing them, helps young children learn their letters much better. A 2015 study found that adults were better able to recall words in a memory game when they wrote them down by hand first instead of typing them. And a 2010 book chapter documented positive associations between writing words and being able to read them. 

    While there’s fairly compelling evidence that handwriting can help children learn their letters and new words, there’s less proof that handwriting helps us absorb new information and ideas. That’s not to say the Norwegian neuroscientists are wrong. But we still need the proof.

    I’d also add that not all learning is the same. Learning to write is different from learning Spanish vocabulary. There may be times when typing is the ideal way to learn something and other times when handwriting is. Also, learning something involves far more than either typing or handwriting, and the method we use to take notes might ultimately be of small importance compared to how we study our notes afterwards. 

    In the meantime, where did I put my notebook?

    Contact staff writer Jill Barshay at 212-678-3595 or [email protected].

    This story about handwriting versus typing was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.



  • Is freedom of speech the same as freedom to lie?


    Meta will stop checking falsehoods. Does that mean more free speech or a free-for-all?

    “First, we’re going to get rid of fact-checkers,” Mark Zuckerberg, the founder of Meta, said in a video statement early this January. “Second, we’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse.”

    This statement marks another turn in the company’s policies in handling disinformation and hate speech on their widely used platforms Facebook, Instagram and Threads. 

    Meta built up its moderation capabilities and started its fact-checking program after Russia’s attempts to use Facebook to influence American voters in 2016 and after it was partially blamed by various human rights groups like Amnesty International for allowing the spread of hate speech leading to genocide in Myanmar. 

    Until now, according to Meta, about 15,000 people review content on the platform in 70 languages to check that it complies with the company’s community standards.

    Adding information, not deleting

    For other content, the company involves professional fact-checking organizations with journalists around the world. They independently identify and research viral posts that might contain false information. 

    Fact-checkers, like any other journalists, publish their findings in articles. They compare what is claimed in the post with statistics, research findings and expert commentary or they analyze if the media in the post are manipulated or AI generated. 

    But fact-checkers have a privilege that other journalists don’t – they can add information to the posts they find false or out of context on Meta platforms. It appears in the form of a warning label. The user can then read the full article by fact-checkers to see the reasons or close the warning and interact with the post.

    Fact-checkers can’t take any further action like removing or demoting content or accounts, according to Meta. That is up to the company. 

    However, Meta now likens the fact-checking program to censorship. Zuckerberg also argued for the end of the program, saying that the fact-checkers “have just been too politically biased and have destroyed more trust than they’ve created.”

    Can untrained people regulate the Web?

    For now, the fact-checking program will be discontinued in the United States. Meta plans to rely instead on regular users to evaluate content under a new program it calls “Community Notes.” The company promises to improve it over the course of the year before expanding it to other countries.

    In a way, Meta walking back its commitments to fight disinformation wasn’t a surprise, said Carlos Hernández-Echevarría, associate director of the Spanish fact-checking outlet Maldita and a deputy member of the European Fact-Checking Standards Network, the governance body that assesses and approves European fact-checking organizations before they can work with Meta.

    Zuckerberg had previously said that the company was unfairly blamed for societal ills and that he was done apologizing. But fact-checking partners weren’t warned ahead of the announcement of the plans to scrap the program, Hernández-Echevarría said.

    It bothers him that Meta connects fact-checking to censorship.

    “It’s actually very frustrating to see the Meta CEO talking about censorship when fact-checkers never had the ability and never wanted the ability to remove any content,” Hernández-Echevarría said. He argues that instead, fact-checkers contribute to speech by adding more information. 

    Are fact-checkers biased?

    Hernández-Echevarría also pushes back against the accusation that fact-checkers are biased. He said that mistakes do occur, but the organizations and people doing the work are carefully vetted, and the criteria can be seen in the network’s Code of Standards.

    For example, fact-checkers must publish their methodology for choosing and evaluating information. Fact-checkers also can’t endorse any political parties or have any agreements with them. They must also disclose their ownership and publicly share information about their employees and funding.

    Meta’s own data about Facebook, which they disclose to EU institutions, also shows that erroneous decisions to demote posts based on fact-checking labels occur much less often than when posts are demoted for other reasons — nudity, bullying, hate speech and violence, for example. 

    In the period from April to September last year, Meta received 172,550 complaints about the demotion of posts with fact-checking labels and, after having another look, reversed it for 5,440 posts — a little over 3%. 

    However, in all other categories combined, the demotion had to be reversed for 87% of those posts.
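    The reversal rates above can be sanity-checked with a few lines of arithmetic. A minimal sketch in Python, using only the figures reported in the article (complaint and reversal counts from Meta’s EU disclosures for April–September):

    ```python
    # Figures reported in the article, from Meta's EU transparency disclosures
    # (April-September): appeals against demotions tied to fact-checking labels,
    # and how many of those demotions were overturned after re-review.
    complaints = 172_550
    reversed_posts = 5_440

    # Share of fact-check demotions that were reversed on appeal.
    reversal_rate = reversed_posts / complaints
    print(f"Reversal rate for fact-check demotions: {reversal_rate:.1%}")  # → 3.2%
    ```

    The result, roughly 3.2%, matches the article’s “a little over 3%” and underscores the contrast with the 87% reversal rate Meta reported across all other demotion categories combined.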

    The sharing of unverified information

    Research shows that a perception of unequal treatment of different political groups may form because people on the political right publish more unreliable information.

    A paper published in the journal Nature says that conservative users indeed face penalties more often, but they also share more low-quality news. Researchers therefore argued that even if the policies contain no bias, there can be an asymmetry in how they are enforced on platforms.

    Meta is also making other changes. On 7 January, the company published a revised version of its hateful conduct policies. The platform now allows comparing women to household objects and “insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality”. The revised policies also now permit “allegations of mental illness or abnormality when based on gender or sexual orientation”.

    LGBTQ+ advocacy group GLAAD called these changes alarming and extreme and said they will result in platforms becoming “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation”. 

    Journalists also report that the changes divided the company’s employees. The New York Times reported that as some upset employees posted on the internal message board, human resources workers quickly removed the posts, saying they broke a company policy on community engagement.

    Political pressure

    In a statement published on her social media channels, Angie Drobnic Holan, the director of the International Fact-Checking Network, which represents fact-checkers in the United States, linked Meta’s decision to political pressure.

    “It’s unfortunate that this decision comes in the wake of extreme political pressure from a new administration and its supporters,” Holan said. “Fact-checkers have not been biased in their work. That attack line comes from those who feel they should be able to exaggerate and lie without rebuttal or contradiction.”

    In his book “Save America,” published in August 2024, Donald Trump, whose term as U.S. president begins today, accused Zuckerberg of plotting against him. “We are watching him closely, and if he does anything illegal this time he will spend the rest of his life in prison,” he wrote.

    Now, with the changes Zuckerberg announced, Trump is praising Meta, saying the company has come a long way. When asked during a press conference on 7 January whether he thought Zuckerberg was responding to his threats, Trump replied, “Probably.”

    After Meta’s announcement, the journal Nature published a review of research, with comments from experts, on the effectiveness of fact-checking. For example, a 2019 study analyzing 30 research papers covering 20,000 participants found an influence on beliefs, but the effects were weakened by participants’ preexisting beliefs, ideology, and knowledge.

    Sander van der Linden, a social psychologist at the University of Cambridge, told Nature that ideally, people wouldn’t form misperceptions in the first place, but “if we have to work with the fact that people are already exposed, then reducing it is almost as good as it’s going to get.”

    Hernández-Echevarría said that although the loss of Meta’s funding will be a hard hit to some organizations in the fact-checking community, it won’t end the movement. He said, “They are going to be here, fighting disinformation. No matter what, they will find a way to do it. They will find support. They will do it because their central mission is to fight disinformation.”


    Questions to consider:

    • What is now allowed under Meta’s new rules for posts that wasn’t previously?

    • How is fact-checking not the same as censorship?

    • When you read social media posts, do you care if the poster is telling the truth?


     
