Category: digital literacy

  • Teaching students to use AI: from digital competence to a learning outcome

    by Concepción González García and Nina Pallarés Cerdà

    Debates about generative AI in higher education often start from the same assumption: students need a certain level of digital competence before they can use AI productively. Those who already know how to search, filter and evaluate online information are seen as the ones most likely to benefit from tools such as ChatGPT, while others risk being left further behind.

    Recent studies reinforce this view. Students with stronger digital skills in areas like problem‑solving and digital ethics tend to use generative AI more frequently (Caner‑Yıldırım, 2025). In parallel, work using frameworks such as DigComp has mostly focused on measuring gaps in students’ digital skills – often showing that perceived “digital natives” are less uniformly proficient than we might think (Lucas et al, 2022). What we know much less about is the reverse relationship: can carefully designed uses of AI actually develop students’ digital competences – and for whom?

    In a recent article, we addressed this question empirically by analysing the impact of a generative AI intervention on university students’ digital competences (García & Pallarés, 2026). Students’ skills were assessed using the European DigComp 2.2 framework (Vuorikari et al, 2022).

    Moving beyond static measures of digital competence

    Research on students’ digital competences in higher education has expanded rapidly over the past decade. Yet much of this work still treats digital competence as a stable attribute that students bring with them into university, rather than as a dynamic and educable capability that can be shaped through instructional design. The consequence is a field dominated by one-off assessments, surveys and diagnostic tools that map students’ existing skills but tell us little about how those skills develop.

    This predominant focus on measurement rather than development has produced a conceptual blind spot: we know far more about how digital competences predict students’ use of emerging technologies than about how educational uses of these technologies might enhance those competences in the first place.

    Recent studies reinforce this asymmetry. Students with higher levels of digital competence are more likely to engage with generative AI tools and to display positive attitudes towards their use (Moravec et al, 2024; Saklaki & Gardikiotis, 2024). In this ‘competence-first’ model, digital competence appears as a precondition for productive engagement with AI. Yet this framing obscures a crucial pedagogical question: might AI, when intentionally embedded in learning activities, actually support the growth of the very competences it is presumed to require?

    A second limitation compounds this problem: the absence of a standardised framework for analysing and comparing the effects of AI-based interventions on digital competence development. Although DigComp is widely used for diagnostic purposes, few studies employ it systematically to evaluate learning gains or to map changes across specific competence areas. As a result, evidence from different interventions remains fragmented, making it difficult to identify which aspects of digital competence are most responsive to AI-mediated learning.

    There is, nevertheless, emerging evidence that AI can do more than simply ‘consume’ digital competence. Studies by Dalgıç et al (2024) and Naamati-Schneider & Alt (2024) suggest that integrating tools such as ChatGPT into structured learning tasks can stimulate information search, analytical reasoning and critical evaluation—provided that students are guided to question and verify AI outputs rather than accept them uncritically. Yet these contributions remain exploratory. We still lack experimental or quasi-experimental evidence that links AI-based instructional designs to measurable improvements in specific DigComp areas, and we know little about whether such benefits accrue equally to all students or disproportionately to those who already possess stronger digital skills.

    This gap matters. If digital competences are conceived as malleable rather than fixed, then AI is not merely a technology that demands certain skills but a pedagogical tool through which those skills can be cultivated. This reframing shifts the centre of the debate: away from asking whether students are ready for AI, and towards asking whether our teaching practices are ready to use AI in ways that promote competence development and reduce inequalities in learning.

    Our study: teaching students to work with AI, not around it

    We designed a randomised controlled trial with 169 undergraduate students enrolled in a Microeconomics course. Students were allocated by class group to either a treatment or a control condition. All students followed the same curriculum and completed the same online quizzes through the institutional virtual campus.

    The crucial difference lay in how generative AI was integrated:

    • In the treatment condition, students received an initial workshop on using large language models strategically. They practised:
      – contextualising questions
      – breaking problems into steps
      – iteratively refining prompts
      – checking their own solutions before turning to the AI.
    • Throughout the course, their online self-assessments included adaptive feedback: instead of simply marking answers as right or wrong, the system offered hints, step-by-step prompts and suggestions on how to use AI tools as a thinking partner.
    • In the control condition, students completed the same quizzes with standard right/wrong feedback, and no training or guidance on AI.

    Importantly, the intervention did not encourage students to outsource solutions to AI. Rather, it framed AI as an interactive study partner to support self-explanation, comparison of strategies and self-regulation in problem solving.

    We administered pre- and post-course questionnaires aligned with DigComp 2.2, focusing on five competences: information and data literacy, communication and collaboration, safety, and two aspects of problem solving (functional use of digital tools and metacognitive self-regulation). Using a difference-in-differences model with individual fixed effects, we estimated how the probability of reporting the highest level of each competence changed over time for the treatment group relative to the control group.
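    The estimation logic can be illustrated with a small sketch. The numbers below are invented for illustration and are not the study's data: in a balanced two-period panel, the difference-in-differences estimate reduces to comparing the pre-to-post change in the treatment group with the same change in the control group.

```python
# Minimal difference-in-differences sketch with made-up data.
# Each record: (group, period, outcome), where outcome = 1 if the student
# reports the highest DigComp level and 0 otherwise.

def did_estimate(records):
    """Return the difference-in-differences estimate of the treatment effect."""
    def mean(group, period):
        vals = [y for g, p, y in records if g == group and p == period]
        return sum(vals) / len(vals)

    treat_change = mean("treatment", "post") - mean("treatment", "pre")
    control_change = mean("control", "post") - mean("control", "pre")
    return treat_change - control_change

# Hypothetical data: treatment share rises 0.2 -> 0.5, control 0.2 -> 0.3,
# so the estimate is (0.5 - 0.2) - (0.3 - 0.2) = 0.2.
records = (
    [("treatment", "pre", 1)] * 2 + [("treatment", "pre", 0)] * 8
    + [("treatment", "post", 1)] * 5 + [("treatment", "post", 0)] * 5
    + [("control", "pre", 1)] * 2 + [("control", "pre", 0)] * 8
    + [("control", "post", 1)] * 3 + [("control", "post", 0)] * 7
)
print(round(did_estimate(records), 2))  # 0.2
```

    Because the outcome is a 0/1 indicator for reporting the highest competence level, the estimate reads directly as a change in probability (0.2, roughly 20 percentage points in this invented example).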

    What changed when AI was taught and used in this way?

    At the overall sample level, we found statistically significant improvements in three areas:

    • Information and data literacy – students in the AI-training condition were around 15 percentage points more likely to report the highest level of competence in identifying information needs and carrying out effective digital searches.
    • Problem solving – functional dimension – the probability of reporting the top level in using digital tools (including AI) to solve tasks increased by about 24 percentage points.
    • Problem solving – metacognitive dimension – a similar 24-point gain emerged for recognising what aspects of one’s digital competences need to be updated or improved.

    In other words, the AI-integrated teaching design was associated not only with better use of digital tools, but also with stronger awareness of digital strengths and weaknesses – a key ingredient of autonomous learning. Communication and safety competences also showed positive but smaller and more uncertain effects. Here, the pattern becomes clearer when we look at who benefited most.

    A compensatory effect: AI as a potential leveller, not just an amplifier

    When we distinguished students by their initial level of digital competence, a pattern emerged. For those starting below the median, the intervention produced large and significant gains in all five competences, with improvements between 18 and 38 percentage points depending on the area. For students starting above the median, effects were smaller and, in some cases, non-significant.

    This suggests a compensatory effect: students who began the course with weaker digital competences benefited the most from the AI-based teaching design. Rather than widening the digital gap, guided use of AI acted as a levelling mechanism, bringing lower-competence students closer to their more digitally confident peers.

    Conceptually, this challenges an implicit assumption in much of the literature – namely, that generative AI will primarily enhance the learning of already advantaged students, because they are the ones with the skills and confidence to exploit it. Our findings show that, when AI is embedded within intentional pedagogy, explicit training and structured feedback, the opposite can happen: those who started with fewer resources can gain the most.

    From ‘allow or ban’ to ‘how do we teach with AI?’

    For higher education policy and practice, the implications are twofold.

    First, we need to stop thinking of digital competence purely as a prerequisite for using AI. Under the right design conditions, AI can be a pedagogical resource to build those competences, especially in information literacy, problem solving and metacognitive self-regulation. That means integrating AI into curricula not as an add-on, but as part of how we teach students to plan, monitor and evaluate their learning.

    Second, our results suggest that universities concerned with equity and digital inclusion should focus less on whether students have access to AI tools (many already do) and more on who receives support to learn how to use them well. Providing structured opportunities to practise prompting, to critique AI outputs and to reflect on one’s own digital skills may be particularly valuable for students who enter university with lower levels of digital confidence.

    This does not resolve all the ethical and practical concerns around generative AI – far from it. But it shifts the conversation. Instead of treating AI as an external threat to academic integrity that must be tightly controlled, we can start to ask:

    • How can we design tasks where the added value lies in asking good questions, justifying decisions and evaluating evidence, rather than in producing a single ‘correct’ answer?
    • How can we support students to see AI not as a shortcut to avoid thinking, but as a tool to think better and know themselves better as learners?
    • Under what conditions does AI genuinely help to close digital competence gaps, and when might it risk opening new ones?

    Answering these questions will require further longitudinal and multi-institutional research, including replication studies and objective performance measures alongside self-reports. Yet the evidence we present offers a cautiously optimistic message: teaching students how to use AI can be part of a strategy to strengthen digital competences and reduce inequalities in higher education, rather than merely another driver of stratification.

    Concepción González García is Assistant Professor of Economics at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain, and holds a PhD in Economics from the University of Alicante. Her research interests include macroeconomics, particularly fiscal policy, and education.

    Nina Pallarés is Assistant Professor of Economics and Academic Coordinator of the Master’s in Management of Sports Entities at the Faculty of Economics and Business, Catholic University of Murcia (UCAM), Spain. Her research focuses on applied econometrics, with particular emphasis on health, labour, education, and family economics.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

  • Why critical data literacy belongs in every K–12 classroom

    An unexpected group of presenters–11th graders from Whitney M. Young Magnet High School in Chicago–made a splash at this year’s ACM Conference on Fairness, Accountability, and Transparency (FAccT). These students captivated seasoned researchers and professionals with their insights on how school environments shape students’ views of AI. “I wanted our project to serve as a window into the eyes of high school students,” said Autumn Moon, one of the student researchers.

    What enabled these students to contribute meaningfully to a conference dominated by PhDs and industry veterans was their critical data literacy–the ability to understand, question, and evaluate the ethics of complex systems like AI using data. They developed these skills through their school’s Data is Power program.

    Launched last year, Data is Power is a collaboration among K-12 educators, AI ethics researchers, and the Young Data Scientists League. The program includes four pilot modules that are aligned to K-12 standards and cover underexplored but essential topics in AI ethics, including labor and environmental impacts. The goal is to teach AI ethics by focusing on community-relevant topics chosen by our educators with input from students, all while fostering critical data literacy. For example, Autumn’s class in Chicago used AI ethics as a lens to help students distinguish between evidence-based research and AI propaganda. Students in Phoenix explored how conversational AI affects different neighborhoods in their city.

    Why does the Data is Power program focus on critical data literacy? In my former role leading a diverse AI team at Amazon, I saw that technical skills alone weren’t enough. We needed people who could navigate cultural nuance, question assumptions, and collaborate across disciplines. Some of the most technically proficient candidates struggled to apply their knowledge to real-world problems. In contrast, team members trained in critical data literacy–those who understood both the math and the societal context of the models–were better equipped to build responsible, practical tools. They also knew when not to build something.

    As AI becomes more embedded in our lives, and many students feel anxious about AI supplanting their job prospects, critical data literacy is a skill that is not just future-proof–it is future-necessary. Students (and all of us) need the ability to grapple with and think critically about AI and data in their lives and careers, no matter what they choose to pursue. As Milton Johnson, a physics and engineering teacher at Bioscience High School in Phoenix, told me: “AI is going to be one of those things where, as a society, we have a responsibility to make sure everyone has access in multiple ways.”

    Critical data literacy is as much about the humanities as it is about STEM. “AI is not just for computer scientists,” said Karren Boatner, who taught Autumn in her English literature class at Whitney M. Young Magnet High School. For Karren, who hadn’t considered herself a “math person” previously, one of the most surprising parts of the program was how much she and her students enjoyed a game-based module that used middle school math to explain how AI “learns.” Connecting math and literature to culturally relevant, real-world issues helps students see both subjects in a new light.

    As AI continues to reshape our world, schools must rethink how to teach about it. Critical data literacy helps students see the relevance of what they’re learning, empowering them to ask better questions and make more informed decisions. It also helps educators connect classroom content to students’ lived experiences.

    If education leaders want to prepare students for the future–not just as workers, but as informed citizens–they must invest in critical data literacy now. As Angela Nguyen, one of our undergraduate scholars from Stanford, said in her Data is Power talk: “Data is power–especially youth and data. All of us, whether qualitative or quantitative, can be great collectors of meaningful data that helps educate our own communities.”

    Author: eSchool Media Contributors

  • Before you click on that incredible deal…

    Scammers are everywhere on the internet, masquerading as legitimate sources to obtain your personal information. Many social media users or website creators pose as government agencies or other authorities to offer you things that seem too good to be true, or they use scare tactics, such as fake warnings about late fines or missed court dates, to prompt online users into sharing personal information.

    In an era of misinformation, how do we know when a website is real? 

    One way is to research a website’s domain. A domain name is the core of a website address, ending in .com, .net or another common suffix. It’s essentially just the base website name without the “https://” and “www.”
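    As a rough illustration (our own sketch, not something from the article), Python's standard library can strip an address down to that base name:

```python
# Sketch: pull the bare domain out of a full web address using only the
# standard library. Dropping a leading "www." is our own convention here.
from urllib.parse import urlparse

def bare_domain(url):
    """Return the domain name of `url`, without scheme or leading 'www.'."""
    # urlparse only fills netloc when a scheme (or '//') is present,
    # so fall back to prefixing '//' for scheme-less input.
    host = urlparse(url).netloc or urlparse("//" + url).netloc
    return host[4:] if host.startswith("www.") else host

print(bare_domain("https://www.news-decoder.com/about"))  # news-decoder.com
```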

    “Measuring a website’s credibility might take time,” said Jordan Lyle, a senior reporter for Snopes.com. “Young journalists should know their stuff when it comes to domains and redirects.” 

    Snopes.com is one of the internet’s oldest fact-checking websites, and Lyle has more than 25 years of experience in managing websites and knows how to determine whether a site is legit.

    Investigating internet sites

    Alex Kasprak, a former investigative journalist at Snopes.com, has conducted numerous investigations using information gleaned from Domain Name System (DNS) registers. DNS registers contain information about a particular website, including its URL and IP address, a unique number assigned to every device connected to the internet.

    With the information he found, Kasprak has been able to uncover unreported connections between news websites and their funders and between scammers and their beneficiaries. 

    “DNS tools are a great first step into any investigation that involves the identity of people behind websites or possible undisclosed connections between them,” Kasprak said.

    Drawing on the expertise of these two investigative reporters, News Decoder has compiled the toolkit below to help journalists perform a credible and comprehensive examination of a website before publishing.

    Are there red flags?

    Scam websites have certain red flags. They might lack legal documentation, for example, including terms of service and privacy policies. 

    Another sign is sloppiness and mistakes. Try skimming through various pages on the site to look for typos, glaringly incorrect information, vague contact information, skewed formatting and other things that seem unprofessional. 

    Lyle said that a website that promotes a specific giveaway might lack any biographical or contact information about the people promoting the product or offer.

    “Sometimes, scammers will include a mailing address that, upon searching for it, turns out to be a fulfillment center or a business that allows LLCs to anonymously register with that business’ physical office as a virtual address, shielding the scam’s operators from being identified,” Lyle said. 

    Conduct a website domain search.

    Kasprak said that the Internet Corporation for Assigned Names and Numbers (ICANN) operates as a phonebook for the internet.

    “In this analogy, the phone numbers are Internet Protocol (IP) addresses  — a string of numbers formatted like 0.0.0.0 — and the ‘names’ are the actual domain names [e.g. news-decoder.com] to which those IP addresses are associated,” Kasprak said. “Like a human with a phone, domain names can change IP addresses several times.”

    The first step for tracing the origins of a website involves what’s known as a “WHOIS” search — a specific type of domain search listing information about the creation of a domain. 

    WHOIS is a public database that lists contact numbers, names or organisations associated with a given IP address or domain name. Many people these days use services that allow them to register a website anonymously, which limits the value of the results. Older records, or those from some non-Western nations, often include actual names or corporate contacts, explained Kasprak.

    A WHOIS search, which can be conducted at godaddy.com/whois, queries the public WHOIS database. 

    Lyle said he often looks at the date a person officially purchased and registered a domain name. “For example, in the case of researching potential scams, if a domain name was recently registered, that’s a red flag indicating the website might be untrustworthy and could support the suspicion that the site is a scam,” he said.

    Look at the site history.

    Another great tool to pair with “WHOIS” searches is the Internet Archive’s Wayback Machine. When performing a “WHOIS” search on godaddy.com/whois, check to see when the domain was created. That year should roughly match the Wayback Machine’s earliest snapshots of the site, which can also reveal whether the domain previously hosted a completely different website under other owners.

    “Also, know that the domain information listed in a WHOIS search might be the most recent data, but not the original data,” Lyle said. “Check the Wayback Machine to see if the website existed long ago in another form.”

    Scammers might also create fake domains to pretend to be a legitimate business, adjusting the URL link slightly to trick users. A fake Home Depot ad on Facebook, for example, didn’t lead to homedepot.com when clicked through, but instead to “h0medepott.com”; an “o” was changed to a zero and a second “t” was added to the end of the URL. 
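    This kind of look-alike spotting can be partly automated. The sketch below is our own illustrative heuristic, not a tool the reporters mention: it flags a domain whose edit distance from a trusted brand domain is small but non-zero, which catches tricks like swapping an “o” for a zero or appending a letter.

```python
# Sketch: flag look-alike domains by edit distance to a known brand domain.
# The distance threshold is an illustrative assumption, not a standard.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like(domain, brand, max_dist=2):
    """True if `domain` is suspiciously close to, but not equal to, `brand`."""
    d = edit_distance(domain, brand)
    return 0 < d <= max_dist

print(looks_like("h0medepott.com", "homedepot.com"))  # True
print(looks_like("example.com", "homedepot.com"))     # False
```

    A real defence would also normalise homoglyphs and consult registration dates, but even this simple distance check catches the h0medepott.com pattern, which sits just two edits away from the genuine domain.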

    “Scammers have created fake domains almost matching the genuine business domain for banks, as well as for USPS, for example,” Lyle said. “Sometimes, scammers won’t even bother to create similar domain names and instead simply rely on people not looking at the URL.” 

    Some scammers go so far as to copy the web design of a company — logo and all — to trick consumers. These types of scam websites often offer giveaways that seem too good to be true, such as free money, super inexpensive offers for goods or services or non-existent programs for student loan forgiveness.

    “Of course, the biggest red flag would be an offer that seems too good to be true,” Lyle said. “If an offer seems too good to be true, it probably is. And I will go a step further: In 2025, if an offer seems too good to be true, it is. Avoid it.”

    For journalists, all of this should become standard practice when using information from the internet in news stories.

    “Basically, you want to make sure you did everything you could with your research before publishing your article,” Lyle said. “And that you attempted to go above and beyond expectations other publishers might have for their articles’ comprehensive credibility.”

    Questions to consider:

    1. What are some common red flags that a website might be fake or trying to scam you?

    2. What is a DNS register and how is it useful to identify a potential scam?

    3. If a friend sent you an unknown link, what steps would you take before clicking? How would you explain your choice to click or not?

  • Weaving digital citizenship into edtech innovation

    What happens when over 100 passionate educators converge in Chicago to celebrate two decades of educational innovation? A few weeks ago, I had the thrilling opportunity to immerse myself in the 20th anniversary of the Discovery Educator Network (the DEN), a week-long journey that reignited my passion for transforming classrooms.

    From sunrise to past sunset, my days at Loyola University were a whirlwind of learning, laughter, and relentless exploration. Living the dorm life, forging new connections, and rekindling old friendships, we collectively dove deep into the future of learning, creating experiences that went far beyond the typical professional development.

    As an inaugural member of the DEN, the professional learning community supported by Discovery Education, I was incredibly excited to return 20 years after its founding to guide a small group of educators through the bountiful innovations of the DEN Summer Institute (DENSI). Think scavenger hunts, enlightening workshops, and collaborative creations–every moment was packed with cutting-edge ideas and practical strategies for weaving technology seamlessly into our teaching, ensuring our students are truly future-ready.

    During my time at DENSI, I learned a lot of new tips and tricks that I will pass on to the educators I collaborate with. From AI’s potential to the various new ways to work together online, participants in this unique event learned a number of ways to weave digital citizenship into edtech innovation. I’ve narrowed them down to five core concepts, each a powerful step toward building future-ready classrooms and fostering truly responsible digital citizens.

    Use of artificial intelligence

    Technology integration: When modeling responsible AI use, key technology tools could include generative platforms like Gemini, NotebookLM, Magic School AI, and Brisk, acting as ‘thought partners’ for brainstorming, summarizing, and drafting. Integration also covers AI grammar/spell-checkers, data visualization tools, and feedback tools for refining writing, presenting information, and self-assessment, enhancing digital content interaction and production.

    Learning & application: Teaching students to ethically use AI is key. This involves modeling critical evaluation of AI content for bias and inaccuracies. For instance, providing students with an AI summary of a historical event to fact-check with credible sources. Students learn to apply AI as a thought partner, boosting creativity and collaboration, not replacing their own thinking. Fact-checking and integrating their unique voices are essential. An English class could use AI to brainstorm plot ideas, but students develop characters and write the narrative. Application includes using AI for writing refinement and data exploration, fostering understanding of AI’s academic capabilities and limitations.

    Connection to digital citizenship: This example predominantly connects to digital citizenship. Teaching responsible AI use promotes intellectual honesty and information literacy. Students can grasp ethical considerations like plagiarism and proper attribution. The “red, yellow, green” stoplight method provides a framework for AI use, teaching students when to use AI as a collaborator, editor, or thought partner–or not at all. This approach cultivates critical thinking and empowers students to navigate the digital landscape with integrity, preparing them as responsible digital citizens who understand AI’s implications.

    Digital communication

    Technology integration: Creating digital communication norms should focus on clarity with visuals like infographics, screenshots, and video clips. Canva is a key tool for a visual “Digital Communication Agreement” defining online interaction expectations. Include student voice through pictures and graphics that illustrate expected behaviors, and potentially through collaborative presentation and polling tools that involve students in norm-setting.

    Learning & application: Establishing clear online interaction norms is the focus of digital communication. Applying clear principles teaches the importance of visuals and setting communication goals. Creating a visual “Digital Communication Agreement” with Canva is a practical application where students define respectful online language and netiquette. An elementary class might design a virtual classroom rules poster, showing chat emojis and explaining “think before you post.” Using screenshots and “SMART goals” for online discussions reinforces learning, teaching constructive feedback and respectful debate. In a middle school science discussion board, the teacher could model a respectful response like “I understand your point, but I’m wondering if…” This helps students apply effective digital communication principles.

    Connection to digital citizenship: This example fosters respectful communication, empathy, and understanding of online social norms. By creating and adhering to a “Digital Communication Agreement,” students develop responsibility for online interactions. Emphasizing respectful language and netiquette cultivates empathy and awareness of their words’ impact. This prepares them as considerate digital citizens, contributing positively to inclusive online communities.

    Content curation

    Technology integration: For understanding digital footprints, one primary tool is Google Drive, used as a digital folder to curate students’ content. The “Tech Toolbox” concept implies interaction with various digital platforms where an online presence exists. Using many tools to curate content shows students how they leave traces across a range of technologies, which together form their collective digital footprint.

    Learning & application: This centers on educating students about their online presence’s permanence and nature. Teaching them to curate digital content in a structured way, like using a Google Drive folder, is key. A student could create a “Digital Portfolio” in Google Drive with online projects, proud social media posts, and reflections on their public identity. By collecting and reviewing online artifacts, students visualize their current “digital footprint.” The classroom “listening tour” encourages critical self-reflection, prompting students to think about why they share online and how to be intentional about their online identity. This might involve students reviewing anonymized social media profiles, discussing the impression given to future employers.

    Connection to digital citizenship: This example cultivates awareness of online permanence, privacy, responsible self-presentation, and reputation management. Understanding lasting digital traces empowers students to make informed decisions. The reflection process encourages the consideration of their footprint’s impact, fostering ownership and accountability for online behavior. This helps them become mindful, capable digital citizens.

    Promoting media literacy

    Technology integration: One way to promote media literacy is by using “Paperslides” for engaging content creation, leveraging cameras and simple video recording. This concept gained popularity at the beginning of the DEN through Dr. Lodge McCammon. Dr. Lodge’s popular 1-Take Paperslide Video strategy (“hit record, present your material, then hit stop, and your product is done”) is a style of video creation that anyone can start using tomorrow. Integration uses real-life examples (likely digital media) to share a variety of topics for any audience. Additionally, applying “Pay Full Attention” in a digital context implies using online viewing platforms and communication tools to model digital eye contact and verbal cues.

    Learning & application: Integrating critical media consumption with engaging content creation is the focus. Students learn to leverage “Paperslides” or another video creation method to explain topics or present research, moving beyond passive consumption. For a history project, students could create “Paperslides” explaining World War II causes, sourcing information and depicting events. Learning involves using real-life examples to discern credible online sources, understanding misinformation and bias. A lesson might show a satirical news article, guiding students to verify sources and claims through their storyboard portion. Applying “Pay Full Attention” teaches active, critical viewing, minimizing distractions. During a class viewing of an educational video, students could pause to discuss presenter credentials or unsupported claims, mimicking active listening. This fosters practical media literacy in creating and consuming digital content.

    Connection to digital citizenship: This example enhances media literacy, critical online information evaluation, and understanding persuasive techniques. Learning to create and critically consume content makes students informed, responsible digital participants. They identify and question sources, essential for navigating a digital information-saturated world. This empowers them as discerning digital citizens, contributing thoughtfully to online content.

    Collaborative problem-solving

    Technology integration: For practicing digital empathy and support, the key tools are collaborative online documents such as Google Docs and Google Slides. Integration extends to online discussion forums (Google Classroom, Flip) for empathetic dialogue, and to project management tools (Trello, Asana) for transparent organization.

    Learning & application: This focuses on developing effective collaborative skills and empathetic communication in digital spaces. Students learn to work together on shared documents, applying a “Co-Teacher or Model Lessons” approach where they “co-teach” each other new tools or concepts. In a group science experiment, students might use a shared Google Doc to plan methodology, with one “co-teaching” data table insertion from Google Sheets. They practice constructive feedback and model active listening in digital settings, using chat for clarification or emojis for feelings. The “red, yellow, green” policy provides a clear framework for online group work, teaching when to seek help, proceed cautiously, or move forward confidently. For a research project, “red” means needing a group huddle, “yellow” is proceeding with caution, and “green” is ready for review.

    Connection to digital citizenship: This example is central to digital citizenship, developing empathy, respectful collaboration, and responsible problem-solving in digital environments. Structured online group work teaches how to navigate disagreements and offers supportive feedback. Emphasis on active listening and empathetic responses helps internalize civility, preparing students as considerate digital citizens contributing positively to online communities.

    These examples offer a powerful roadmap for cultivating essential digital citizenship skills and preparing all learners to be future-ready. Thoughtfully applying these or similar approaches, or even grab-and-go resources from programs such as Discovery Education’s Digital Citizenship Initiative, can lay the foundation for a strong academic and empathetic school year, empowering educators and students alike to navigate the digital world with confidence, integrity, and a deep understanding of their role as responsible digital citizens.

    In addition, this event reminded me of the power of professional learning communities. Every educator needs and deserves a supportive community that will share ideas, push their thinking, and support their professional development. One of my long-standing communities is the Discovery Educator Network (which is currently accepting applications for membership).


  • The Collaborative AI Classroom: Teaching Students to Work With, Not Against, AI Tools – Faculty Focus



  • Artificial Intelligence and Critical Thinking in Higher Education: Fostering a Transformative Learning Experience for Students – Faculty Focus



  • 5 Steps to Update Assignments to Foster Critical Thinking and Authentic Learning in an AI Age – Faculty Focus


  • Earning Our AI Literacy License – Faculty Focus
