Tag: content

  • Conversation and Coursework: Strategies to Engage Undergraduate Students with Course Content – Faculty Focus

    Source link

  • Helping students evaluate AI-generated content

    Finding accurate information has long been a cornerstone skill of librarianship and classroom research instruction. When cleaning up some materials on a backup drive, I came across an article I wrote for the September/October 1997 issue of Book Report, a journal directed at secondary school librarians. A generation ago, “asking the librarian” was a typical and often necessary part of a student’s research process. The digital tide has swept in new tools, habits, and expectations. Today’s students rarely line up at the reference desk. Instead, they consult their phones, generative AI bots, and smart search engines that promise answers in seconds. However, educators still need to teach students to be critical consumers of information, whether it is produced by humans or generated by AI tools.

    Teachers haven’t stopped assigning projects on wolves, genetic engineering, drug abuse, or the Harlem Renaissance, but the way students approach those assignments has changed dramatically. They no longer just “surf the web.” Now, they engage with systems that summarize, synthesize, and even generate research responses in real time.

    In 1997, a keyword search might yield a quirky mix of werewolves, punk bands, and obscure town names alongside academic content. Today, a student may receive a paragraph-long summary, complete with citations, created by a generative AI tool trained on billions of documents. To an eighth grader, if the answer looks polished and is labeled “AI-generated,” it must be true. Students must be taught how AI can hallucinate or simply be wrong at times.

    This presents new challenges, and opportunities, for K-12 educators and librarians in helping students evaluate the validity, purpose, and ethics of the information they encounter. The stakes are higher. The tools are smarter. The educator’s role is more important than ever.

    Teaching the new core four

    To help students become critical consumers of information, educators must still emphasize four essential evaluative criteria, but these must now be framed in the context of AI-generated content and advanced search systems.

    1. The purpose of the information (and the algorithm behind it)

    Students must learn to question not just why a source was created, but why it was shown to them. Is the site, snippet, or AI summary trying to inform, sell, persuade, or entertain? Was it prioritized by an algorithm tuned for clicks or accuracy?

    A modern extension of this conversation includes:

    • Was the response written or summarized by a generative AI tool?
    • Was the site boosted due to paid promotion or engagement metrics?
    • Does the tool used (e.g., ChatGPT, Claude, Perplexity, or Google’s Gemini) cite sources, and can those be verified?

    Understanding both the purpose of the content and the function of the tool retrieving it is now a dual responsibility.

    2. The credibility of the author (and the credibility of the model)

    Students still need to ask: Who created this content? Are they an expert? Do they cite reliable sources? They must also ask:

    • Is this original content or AI-generated text?
    • If it’s from an AI, what sources was it trained on?
    • What biases may be embedded in the model itself?

    Today’s research often begins with a chatbot that cannot cite its sources or verify the truth of its outputs. That makes teaching students to trace information to original sources even more essential.

    3. The currency of the information (and its training data)

    Students still need to check when something was written or last updated. However, in the AI era, students must understand the cutoff dates of training datasets and whether search tools are connected to real-time information. For example:

    • ChatGPT’s free version (as of early 2025) may only contain information up to mid-2023.
    • A deep search tool might include academic preprints from 2024, but not peer-reviewed journal articles published yesterday.
    • Most tools do not include historical material that survives only in manuscript form; even when such material has been scanned into a digital format, it may not yet be fully usable as data.

    This time gap matters, especially for fast-changing topics like public health, technology, or current events.

    4. The wording and framing of results

    The title of a website or academic article still matters, but now we must attend to the framing of AI summaries and search result snippets. Are search terms being refined, biased, or manipulated by algorithms to match popular phrasing? Is an AI paraphrasing a source in a way that distorts its meaning? Students must be taught to:

    • Compare summaries to full texts
    • Use advanced search features to control for relevance
    • Recognize tone, bias, and framing in both AI-generated and human-authored materials

    Beyond the internet: Print, databases, and librarians still matter

    It is more tempting than ever to rely solely on the internet, or now, on an AI chatbot, for answers. Just as in 1997, the best sources are not always the fastest or easiest to use.

    Finding the capital of India on ChatGPT may feel efficient, but cross-checking it in an almanac or reliable encyclopedia reinforces source triangulation. Similarly, viewing a photo of the first atomic bomb on a curated database like the National Archives provides more reliable context than pulling it from a random search result. With deepfake photographs proliferating across the internet, using a reputable image database is essential, and students must be taught how and where to find such resources.

    Additionally, teachers can encourage students to seek balance by using:

    • Print sources
    • Subscription-based academic databases
    • Digital repositories curated by librarians
    • Expert-verified AI research assistants like Elicit or Consensus

    One effective strategy is the continued use of research pathfinders that list sources across multiple formats: books, journals, curated websites, and trusted AI tools. Encouraging assignments that require diverse sources and source types helps to build research resilience.

    Internet-only assignments: Still a trap

    Then as now, it’s unwise to require students to use only specific sources, or only generative AI, for research. A well-rounded approach promotes information gathering from all potentially useful and reliable sources, as well as information fluency.

    Students must be taught to move beyond the first AI response or web result so that they build essential skills in:

    • Deep reading
    • Source evaluation
    • Contextual comparison
    • Critical synthesis

    Teachers should avoid giving assignments that limit students to a single source type, especially AI. Instead, they should prompt students to explain why they selected a particular source, how they verified its claims, and what alternative viewpoints they encountered.

    Ethical AI use and academic integrity

    Generative AI tools introduce powerful possibilities, but also a new frontier of plagiarism and uncritical thinking. If a student submits a summary produced by ChatGPT without review or citation, have they truly learned anything? Do they even understand the content?

    To combat this, schools must:

    • Update academic integrity policies to address the use of generative AI, including clear direction to students on when they may and may not use such tools
    • Teach citation standards for AI-generated content
    • Encourage original analysis and synthesis, not just copying and pasting answers

    A responsible prompt might be: “Use a generative AI tool to locate sources, but summarize their arguments in your own words, and cite them directly.”

    In closing: The librarian’s role is more critical than ever

    Today’s information landscape is more complex and powerful than ever, but more prone to automation errors, biases, and superficiality. Students need more than access; they need guidance. That is where the school librarian, media specialist, and digitally literate teacher must collaborate to ensure students are fully prepared for our data-rich world.

    While the tools have evolved, from card catalogs to Google searches to AI copilots, the fundamental need remains to teach students to ask good questions, evaluate what they find, and think deeply about what they believe. Some things haven’t changed: just as in 1997, the best advice to conclude a lesson on research remains, “And if you need help, ask a librarian.”

    Steven M. Baule, Ed.D., Ph.D.

    Source link

  • Director of Content and Product Strategy at UM

    For my newest “Featured Gig” installment, I want to highlight the search for a director of content and product strategy at the Center for Academic Innovation at the University of Michigan. Sarah Dysart, chief learning officer at CAI, agreed to answer my questions about the role.

    If you have a job at the intersection of learning, organizational change and technology that you are recruiting for, please get in touch!

    Q: What is the university’s mandate behind this role? How does it help align with and advance the university’s strategic priorities?

    A: The University of Michigan has long staked its reputation on research excellence and public purpose. Now we’re doubling down on scale, access and impact—transforming how learning reaches people across every stage of life, across the globe. Life-changing education is one of four core impact areas within the University of Michigan’s Vision 2034, and the person in the director of content and product strategy role will support this strategic work.

    As Michigan accelerates its investment in digital learning, this person leads the charge: shaping and guiding a dynamic portfolio of educational products—online courses, certificates, degree programs, short-form learning experiences and beyond—that don’t merely mirror the classroom, but reimagine what learning can be. This role calls for both vision and precision, bringing together academic imagination, bold experimentation and the ability to turn ideas into action. The director will steer faculty ideas and institutional goals into cohesive, high-impact offerings that reflect the university’s boldest ambitions for learning at scale.

    Q: Where does the role sit within the university structure? How will the person in this role engage with other units and leaders across campus?

    A: This director role sits within the Center for Academic Innovation, operating at the intersection of ideas and implementation. The individual will collaborate closely with experts in learning design, media production, marketing, operations and research. But the real action is in the connections across campus.

    Michigan’s schools and colleges host a vast breadth and depth of faculty expertise, and this role thrives on cross-campus collaboration—partnering with academic unit leaders, faculty and staff to co-create offerings that extend U-M’s mission far beyond Ann Arbor. Drawing on insights about learner demand and market opportunity, the director will guide faculty in selecting content areas and product types with the greatest potential, translating an idea sketched on a whiteboard into a course reaching learners across the globe.

    Q: What would success look like in one year? Three years? Beyond?

    A: In one year, the new director has helped identify and launch a diverse set of online learning offerings that reflect Michigan’s distinctive strengths. Relationships are strong, internal workflows are humming and early results show promising reach and impact.

    In three years, the content portfolio resembles a greatest hits playlist for lifelong learners—diverse, well-balanced and deeply mission-aligned. It’s something learners want to come back and engage with, time and time again. Offerings address workforce needs, social challenges and global opportunity. Faculty are eager to collaborate. Partners are eager to invest.

    Beyond that, success means transformation. The University of Michigan is recognized not just for what it teaches, but for how it reimagines teaching. Our educational offerings reach far beyond campus, connecting with learners across industries, geographies and life stages. This individual has played a key part in turning a world-class university into a truly global learning institution.

    Q: What kinds of future roles would someone who took this position be prepared for?

    A: We’re looking for someone who wants to shape what’s next—not just for learners, but for institutions. The director of content and product strategy will develop a rare blend of skills: the ability to lead across academic and operational contexts, to translate vision into scalable experiences, and to steward innovation with both purpose and precision.

    From here, a person might go on to lead teaching and learning strategy at an institutional level, head up a center for innovation or lifelong learning, or take on an executive role at an organization working to expand access to education globally. Alternatively, one might pivot toward product leadership in mission-driven companies or foundations, applying their experience to broader systems change.

    This role builds expertise and a portfolio not just of educational content, but of influence, insight and lasting impact.

    Source link

  • Meta’s content moderation changes closely align with FIRE recommendations

    On Tuesday, Meta* CEO Mark Zuckerberg and Chief Global Affairs Officer Joel Kaplan announced sweeping changes to the content moderation policies at Meta (the owner of Facebook, Instagram, and Threads) with the stated intention of improving free speech and reducing “censorship” on its platforms. The changes simplify policies, replace top-down fact-checking with a Community Notes-style system, reduce opportunities for false positives in automatic content flagging, and allow for greater user control of content feeds. All these changes mirror recommendations FIRE made in its May 2024 Report on Social Media.

    Given that Meta’s platforms boast billions of users, the changes, if implemented, will have major positive implications for free expression online.

    FIRE’s Social Media Report

    In our report, we promoted three principles to improve the state of free expression on social media:

    1. The law should require transparency whenever the government involves itself in social media moderation decisions.
    2. Content moderation policies should be transparent to users, who should be able to appeal moderation decisions that affect them.
    3. Content moderation decisions should be unbiased and should consistently apply the criteria that a platform’s terms of service establish.

    Principle 1 is the only one where FIRE believes government intervention is appropriate and constitutional (and we created a model bill to that effect). Principles 2 and 3 we hoped would enjoy voluntary adoption by social media platforms that wanted to promote freedom of expression. 

    While we don’t know whether these principles influenced Meta’s decision, we’re pleased the promised changes align very well with FIRE’s proposals for how a social media platform committed to free expression could put that commitment into practice.

    Meta’s changes to content moderation structures

    With a candid admission that it believes 10-20% of its millions of daily content removals are mistakes, Meta announced it is taking several actions to expand freedom of expression on the platform. The first is a simplification and scaling back of its rules on the boundaries of discourse. According to Zuckerberg and Kaplan:

    [Meta is] getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms. These policy changes may take a few weeks to be fully implemented. 

    While this is promising in and of itself, it will be enhanced by a broad change to the automated systems for content moderation. Meta is restricting its automated flagging to only the most severe policy violations. For lesser policy violations, a user will have to manually report a post for review and possible removal. Additionally, any removal will require the agreement of multiple human reviewers.

    This is consistent with our argument that AI-driven and other automated flagging systems will invariably have issues with false positives, making human review critical. Beyond removals, Meta is increasing the confidence threshold required for deboosting a post suspected of violating policy.

    Who fact-checks the fact checkers?

    Replacing top-down fact-checking with a bottom-up approach based on X’s Community Notes feature may be just about the biggest change announced by Meta. As FIRE noted in the Social Media Report: 

    Mark Zuckerberg famously said he didn’t want Facebook to be the “arbiter of truth.” But, in effect, through choosing a third-party fact checker, Facebook becomes the arbiter of the arbiter of truth. Given that users do not trust social media platforms, this is unlikely to engender trust in the accuracy of fact checks.

    Zuckerberg similarly said in the announcement that Meta’s “fact checkers have just been too politically biased, and have destroyed more trust than they’ve created.”

    Our Social Media Report argued that the Community Notes feature is preferable to top-down fact-checking, because a community of diverse perspectives will likely be “less vulnerable to bias and easier for users to trust than top-down solutions that may reflect the biases of a much smaller number of stakeholders.” Additionally, we argued labeling is more supportive of free expression, being a “more speech” alternative to removal and deboosting.

    We are eager to see the results of this shift. At a minimum, experimentation and innovation in content moderation practices provides critical experience and data to guide future decisions and help platforms improve reliability, fairness, and responsiveness to users.

    User trust and the appearance of bias

    An overall theme in Zuckerberg and Kaplan’s remarks is that biased decision-making has eroded user trust in content moderation at Meta, and these policy changes are aimed at regaining users’ trust. As FIRE argued in our Social Media Report:

    In the case of moderating political speech, any platform that seeks to promote free expression should develop narrow, well-defined, and consistently enforceable rules to minimize the kind of subjectivity that leads to arbitrary and unfair enforcement practices that reduce users’ confidence both in platforms and in the state of free expression online.

    We also argued that perception of bias and flexibility in rules encourages powerful entities like government actors to “work the refs,” including through informal pressure, known as “jawboning.”

    Additionally, when perceived bias drives users to small, ideologically homogeneous alternative platforms, the result can damage broader discourse:

    If users believe their “side” is censored unfairly, many will leave that platform for one where they believe they’ll have more of a fair shake. Because the exodus is ideological in nature, it will drive banned users to new platforms where they are exposed to fewer competing ideas, leading to “group polarization,” the well-documented phenomenon that like-minded groups become more extreme over time. Structures on all social media platforms contribute to polarization, but the homogeneity of alternative platforms turbocharges it.

    These are real problems, and it is not clear whether Meta’s plans will succeed in addressing them, but it is welcome to see them recognized.

    International threats to speech

    Our Social Media Report expressed concern that the Digital Services Act — the broad EU regulation mandating censorship on social media far beyond what U.S. constitutional law allows — would become a least common denominator approach for social media companies, even in the United States. Mark Zuckerberg appears to have announced his intention to do no such thing, stating he planned to work with President Trump to push back on “governments around the world” that are “pushing [companies] to censor more.”

    While we are pleased at the implication that Meta’s platforms will seemingly not change their free expression policies in America at the behest of the EU, the invocation of a social media company working with any government, including the United States government, rings alarm bells for any civil libertarian. We will watch this development closely for that reason. 

    FIRE has often said — and it often bears repeating — the greatest threat to freedom of expression will always come from the government, and as Zuckerberg himself notes, the government has in years past pushed Meta to remove content.

    When the rubber meets the road

    Meta’s commitment to promote freedom of expression on its platforms offers plenty of reasons for cautious optimism. 

    But we do want to emphasize caution. There is, with free expression, often a large gap between stated intentions and what happens when theory meets practice. As a civil liberties watchdog, our duty is to measure promise against performance.

    Take, for example, our measured praise for Elon Musk’s stated commitment to free expression, followed by our frequent criticism when he failed to live up to that commitment. And that criticism hasn’t kept us from giving X credit when it is due, such as when it adopted Community Notes.

    Similarly, FIRE stands ready to help Meta live up to its stated commitments to free expression. You can be sure that we will watch closely and hold them accountable.

    * Meta has donated to FIRE.

    Source link