Tag: WakeUp

  • Later Wake-Up Call for Inside Higher Ed’s Daily News Update

    Loyal Inside Higher Ed readers who wake up to our daily newsletter will soon have an easier time finding each day’s edition in their crowded inboxes. 

    Starting Tuesday, Sept. 2, the Daily News Update will arrive between 5:30 and 6:00 a.m. Eastern, several hours later than the current 3:15 a.m. This may upset the morning routines of the handful of souls on the East Coast who rise before the sun, but for most readers, we hope this change means our newsletter is there at the top of your inbox when you log in, ready to inform your day.  

    Thank you for waking up with Inside Higher Ed.


  • Data, privacy, and cybersecurity in schools: A 2025 wake-up call

    In 2025, schools are sitting on more data than ever before. Student records, attendance, health information, behavioral logs, and digital footprints generated by edtech tools have turned K-12 institutions into data-rich environments. As artificial intelligence becomes a central part of the learning experience, these data streams are being processed in increasingly complex ways. But with this complexity comes a critical question: Are schools doing enough to protect that data?

    The answer, in many cases, is no.

    The rise of shadow AI

    According to CoSN’s May 2025 State of EdTech District Leadership report, 43 percent of districts lack formal policies or guidance for AI use, even as 80 percent have generative AI initiatives underway. That policy gap is a major concern. At the same time, Common Sense Media’s Teens, Trust and Technology in the Age of AI finds that many teens have been misled by fake content and struggle to discern truth from misinformation, underscoring both how broadly generative AI has been adopted and how real its risks are.

    This lack of visibility and control has led to the rise of what many experts call “shadow AI”: unapproved apps and browser extensions that process student inputs, store them indefinitely, or reuse them to train commercial models. These tools are often free, widely adopted, and nearly invisible to IT teams. Shadow AI expands the district’s digital footprint in ways that often escape policy enforcement, opening the door to data leakage and compliance violations. CoSN’s 2025 report specifically notes that “free tools that are downloaded in an ad hoc manner put district data at risk.”
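
    As a concrete illustration, a district could surface this kind of unsanctioned traffic by comparing web-filter or DNS logs against its approved-tool list. The Python sketch below is a minimal, hypothetical example: the domain lists, the log file name, and the log’s column layout are assumptions, not references to any specific product or vendor export.

    # Minimal sketch: flag "shadow AI" traffic by comparing proxy/DNS logs
    # against a district allowlist. All domains and the CSV layout below
    # (columns: timestamp,user,domain) are hypothetical placeholders.
    import csv
    from collections import Counter

    # Tools the district has formally vetted and approved (assumed).
    APPROVED_DOMAINS = {"vetted-ai-tutor.example.com"}

    # Known generative-AI endpoints to watch for (illustrative subset).
    KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

    def flag_shadow_ai(log_path: str) -> Counter:
        """Count requests to known AI domains that are not on the allowlist."""
        hits = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                domain = row["domain"].strip().lower()
                if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_DOMAINS:
                    hits[domain] += 1
        return hits

    if __name__ == "__main__":
        for domain, count in flag_shadow_ai("proxy_log.csv").most_common():
            print(f"{domain}: {count} unapproved requests")

    A report like this blocks nothing by itself, but it gives IT teams the visibility that, as CoSN notes, ad hoc downloads otherwise escape.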

    Data protection: The first pillar under pressure

    The U.S. Department of Education’s AI Toolkit for Schools urges districts to treat student data with the same care as medical or financial records. However, many AI tools used in classrooms today are not inherently FERPA-compliant and do not always disclose where or how student data is stored. Teachers experimenting with AI-generated lesson plans or feedback may unknowingly input student work into platforms that retain or share that data. In the absence of vendor transparency, there is no way to verify how long data is stored, whether it is shared with third parties, or how it might be reused. Under FERPA, third-party vendors that handle student data on behalf of an institution must meet the same obligations as the institution itself, including ensuring the data is not used for unintended purposes or retained for AI training.

    Some tools, marketed as “free classroom assistants,” require login credentials tied to student emails or learning platforms. This creates additional risk if authentication mechanisms are not protected or monitored. Even widely used generative tools may include language in their privacy policies allowing them to use uploaded content for system training or performance optimization.


    Data processing and the consent gap

    Generative AI models are trained on large datasets, and many free tools continue learning from user prompts. If a student pastes an essay or a teacher includes student identifiers in a prompt, that information could enter a commercial model’s training loop. This creates a scenario where data is processed without explicit consent, potentially in violation of COPPA (the Children’s Online Privacy Protection Act) and FERPA. While the FTC’s December 2023 update to the COPPA Rule did not codify school consent provisions, existing guidance still allows schools to consent to technology use on behalf of parents in educational contexts. The onus remains on schools to understand and manage these consent implications, however, especially as the rule’s new amendments take effect on June 21, 2025, strengthening protections and requiring separate parental consent before children’s data can be disclosed to third parties for targeted advertising.

    Moreover, many educators and students are unaware of what constitutes “personally identifiable information” (PII) in these contexts. A name combined with a school ID number, disability status, or even a writing sample could easily identify a student, especially in small districts. Without proper training, well-intentioned AI use can cross legal lines unknowingly.
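
    To make that concrete, a minimal redaction pass could strip the most obvious identifiers before any text leaves the district. The Python sketch below is a hypothetical illustration, not a complete solution: the regex rules and placeholder tokens are assumptions, and pattern matching alone will miss context-dependent PII such as a writing sample that identifies its author.

    # Minimal sketch: scrub obvious student identifiers from text before it
    # is pasted into an external AI tool. Rules are illustrative assumptions;
    # real PII detection needs roster-aware matching and human review.
    import re

    REDACTION_RULES = [
        (re.compile(r"\b\d{6,9}\b"), "[STUDENT_ID]"),             # bare ID numbers
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
        (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US phone numbers
    ]

    def redact(text: str, roster_names: set[str]) -> str:
        """Replace pattern-based PII, then any known student names."""
        for pattern, placeholder in REDACTION_RULES:
            text = pattern.sub(placeholder, text)
        for name in roster_names:  # names would come from the SIS roster
            text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
        return text

    print(redact("Jordan Lee (ID 4051823) wrote this essay.", {"Jordan Lee"}))
    # prints: [NAME] (ID [STUDENT_ID]) wrote this essay.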

    Cybersecurity risks multiply

    AI tools have also increased the attack surface of K-12 networks. According to ThreatDown’s 2024 State of Ransomware in Education report, ransomware attacks on K-12 schools increased by 92 percent between 2022 and 2023, with 98 total attacks in 2023. This trend is projected to continue as cybercriminals use AI to create more targeted phishing campaigns and detect system vulnerabilities faster. AI-assisted attacks can mimic human language and tone, making them harder to detect. Some attackers now use large language models to craft personalized emails that appear to come from school administrators.

    Many schools lack endpoint protection for student devices, and third-party integrations often bypass internal firewalls. Free AI browser extensions may collect keystrokes or enable unauthorized access to browser sessions. The more tools that are introduced without IT oversight, the harder it becomes to isolate and contain incidents when they occur. CoSN’s 2025 report indicates that 60 percent of edtech leaders are “very concerned about AI-enabled cyberattacks,” yet 61 percent still rely on general funds for cybersecurity efforts, not dedicated funding.

    Building a responsible framework

    To mitigate these risks, school leaders need to:

    • Audit tool usage with platforms like Lightspeed Digital Insight, which is vetted by 1EdTech for data privacy, to identify AI tools being accessed without approval. Districts should maintain a living inventory of all digital tools; a minimal inventory sketch follows this list.
    • Develop and publish AI use policies that clarify acceptable practices, define data handling expectations, and outline consequences for misuse. Policies should distinguish between tools approved for instructional use and those requiring further evaluation.
    • Train educators and students to understand how AI tools collect and process data, how to interpret AI outputs critically, and how to avoid inputting sensitive information. AI literacy should be embedded in digital citizenship curricula, with resources available from organizations like Common Sense Media and aiEDU.
    • Vet all third-party apps through standards like the 1EdTech TrustEd Apps program. Contracts should specify data deletion timelines and limit secondary data use. The TrustEd Apps program has vetted over 12,000 products, providing a valuable resource for districts.
    • Simulate phishing attacks and test breach response protocols regularly. Cybersecurity training should be required for staff, and recovery plans must be reviewed annually.
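
    As referenced in the first item above, a living inventory can be as simple as one structured record per tool that flags missing vetting or contract terms. The Python sketch below is a hypothetical illustration; the field names are assumptions, not the schema of any particular product or program.

    # Minimal sketch: a living inventory record per digital tool, flagging
    # entries that lack vetting or contract protections. Field names are
    # assumptions, not any vendor's or program's actual schema.
    from dataclasses import dataclass

    @dataclass
    class ToolRecord:
        name: str
        vetted: bool                    # e.g., passed a TrustEd Apps review
        data_deletion_days: int | None  # contractual deletion timeline, if any
        secondary_use_barred: bool      # contract bars reuse, e.g., AI training

        def needs_review(self) -> bool:
            return not (self.vetted
                        and self.data_deletion_days is not None
                        and self.secondary_use_barred)

    inventory = [
        ToolRecord("Vetted LMS", True, 90, True),
        ToolRecord("Free AI helper extension", False, None, False),
    ]
    for tool in inventory:
        if tool.needs_review():
            print(f"REVIEW NEEDED: {tool.name}")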

    Trust starts with transparency

    In the rush to embrace AI, schools must not lose sight of their responsibility to protect students’ data and privacy. Transparency with parents, clarity for educators, and secure digital infrastructure are not optional. They are the baseline for trust in the age of algorithmic learning.

    AI can support personalized learning, but only if we put safety and privacy first. The time to act is now. Districts that move early to build policies, offer training, and coordinate oversight will be better prepared to lead AI adoption with confidence and care.



  • A Viral Wake-Up Call—Or CCP Propaganda?

    In a clip that’s rapidly gone viral among both left-leaning critics of neoliberalism and right-wing populists, a young Chinese TikTok influencer delivers a searing indictment of American economic decline. Fluent in English and confident in tone, the speaker lays bare what many struggling Americans already feel: that they’ve been conned by their own elites.

    “They robbed you blind and you thank them for it. That’s a tragedy. That’s a scam,” the young man declares, addressing the American people directly.

    The video, played and discussed on Judging Freedom with Judge Andrew Napolitano and Professor John Mearsheimer, has sparked praise—and suspicion. While the message resonates with a growing number of Americans disillusioned by the bipartisan political establishment, some are asking: Who is behind this message?

    A Sharp Critique of American Oligarchy

    In his 90-second monologue, the influencer claims U.S. oligarchs offshored manufacturing to China for profit—not diplomacy—gutting the middle class, crashing the working class, and leaving Americans with stagnating wages, unaffordable healthcare, mass addiction, and what he calls “flag-waving poverty made in China.” Meanwhile, he says, China reinvested its profits into its people, raising living standards and building infrastructure.

    “What did your oligarchs do? They bought yachts, private jets, and mansions… You get stagnated wages, crippling healthcare costs, cheap dopamine, debt, and flag-waving poverty made in China.”

    He ends with a provocative call: “You don’t need another tariff. You need to wake up… You need a revolution.”

    It’s a blistering populist critique—and one that finds unexpected agreement from Mearsheimer, who said on the show, “I basically agree with him. I think he’s correct.”

    A Message That Cuts Across Party Lines

    The critique echoes themes found in Donald Trump’s early campaign rhetoric, as well as long-standing leftist arguments about neoliberal betrayal, corporate offshoring, and elite impunity. It’s the kind of message that unites the American underclass in its many forms—service workers, laid-off factory employees, disillusioned veterans, and student debtors alike.

    Mearsheimer went on to argue that the U.S. national security establishment itself was compromised—that its consultants and former officials had deep financial ties to China, making them unwilling to confront the geopolitical risks of China’s rise. According to him, elites were more invested in their own gain than in the national interest.

    But that raises an even more complicated question.

    Is This an Authentic Voice—or a CCP Production?

    The most provocative—and potentially overlooked—aspect of this story is the medium itself: TikTok, which is owned by ByteDance, a company under heavy scrutiny for its ties to the Chinese Communist Party (CCP). Could this slick, emotionally resonant video be part of a broader soft-power campaign?

    The Chinese government has invested heavily in media operations that shape global narratives. While the content of the message may be factually accurate or emotionally true for many Americans, it’s not hard to imagine the CCP welcoming—if not engineering—videos that sow further division and distrust within the United States.

    The video’s flawless production, powerful rhetoric, and clever framing—presenting China as the responsible partner and the U.S. as self-destructive—align closely with Beijing’s global messaging. Add to this the timing, with U.S.-China tensions running high over tariffs, Taiwan, and global power shifts, and the question becomes unavoidable:

    Is this sincere grassroots criticism… or a polished psychological operation?

    The answer may be both. It’s entirely possible that the young man believes everything he’s saying. But it’s also likely that content like this is algorithmically favored—or even quietly encouraged—by a platform closely tied to a government with every incentive to highlight American decline.

    Weaponized Truth?

    This is not a new tactic. During the Cold War, both the U.S. and the USSR employed truth-tellers and defectors to criticize their adversaries. But in today’s digital landscape, the boundaries between propaganda, whistleblowing, and legitimate dissent are more porous than ever.

    The Higher Education Inquirer has reported extensively on how American elites—across both political parties—have betrayed working people, including within the halls of higher education. That doesn’t mean we should ignore where a message comes from, or what strategic purpose it might serve.

    The danger is not just foreign interference. The greater danger may be that such foreign-origin messages ring so true for so many Americans.

    A Closing Thought: Listen Carefully, Then Ask Why

    The influencer says:

    “You let the oligarchs feed your lies while they made you fat, poor, and addicted… I don’t think you need another tariff. You need to wake up.”

    He’s not wrong to say Americans have been exploited. But if the message is being boosted by a rival authoritarian state, it’s worth asking why.

    America’s problems are real. Its discontent is justified. But as in all revolutions, the question is not only what we’re overthrowing—but what might take its place.

    Sources:

    Judging Freedom – Judge Andrew Napolitano and Professor John Mearsheimer

    TikTok (ByteDance) ownership and CCP ties – Reuters, The New York Times, Wall Street Journal

    The Higher Education Inquirer archives on student debt, adjunct labor, and corporate-academic complicity

    Pew Research Center – Views of China, U.S. Public Opinion

    Congressional hearings on TikTok and national security, 2023–2024
