Tag: Evaluate

  • Helping students evaluate AI-generated content

    Finding accurate information has long been a cornerstone skill of librarianship and classroom research instruction. When cleaning up some materials on a backup drive, I came across an article I wrote for the September/October 1997 issue of Book Report, a journal directed to secondary school librarians. A generation ago, “asking the librarian” was a typical and often necessary part of a student’s research process. The digital tide has swept in new tools, habits, and expectations. Today’s students rarely line up at the reference desk. Instead, they consult their phones, generative AI bots, and smart search engines that promise answers in seconds. Educators, however, still need to teach students to be critical consumers of information, whether it is produced by humans or generated by AI tools.

    Teachers haven’t stopped assigning projects on wolves, genetic engineering, drug abuse, or the Harlem Renaissance, but the way students approach those assignments has changed dramatically. They no longer just “surf the web.” Now, they engage with systems that summarize, synthesize, and even generate research responses in real time.

    In 1997, a keyword search might yield a quirky mix of werewolves, punk bands, and obscure town names alongside academic content. Today, a student may receive a paragraph-long summary, complete with citations, created by a generative AI tool trained on billions of documents. To an eighth grader, a polished answer labeled “AI-generated” can look unquestionably true. Students must be taught that AI can hallucinate or simply be wrong at times.

    This presents new challenges, and new opportunities, for K-12 educators and librarians as they help students evaluate the validity, purpose, and ethics of the information they encounter. The stakes are higher. The tools are smarter. The educator’s role is more important than ever.

    Teaching the new core four

    To help students become critical consumers of information, educators must still emphasize four essential evaluative criteria, but these must now be framed in the context of AI-generated content and advanced search systems.

    1. The purpose of the information (and the algorithm behind it)

    Students must learn to question not just why a source was created, but why it was shown to them. Is the site, snippet, or AI summary trying to inform, sell, persuade, or entertain? Was it prioritized by an algorithm tuned for clicks or accuracy?

    A modern extension of this conversation includes:

    • Was the response written or summarized by a generative AI tool?
    • Was the site boosted due to paid promotion or engagement metrics?
    • Does the tool used (e.g., ChatGPT, Claude, Perplexity, or Google’s Gemini) cite sources, and can those be verified?

    Understanding both the purpose of the content and the function of the tool retrieving it is now a dual responsibility.

    2. The credibility of the author (and the credibility of the model)

    Students still need to ask: Who created this content? Are they an expert? Do they cite reliable sources? They must also ask:

    • Is this original content or AI-generated text?
    • If it’s from an AI, what sources was it trained on?
    • What biases may be embedded in the model itself?

    Today’s research often begins with a chatbot that cannot cite its sources or verify the truth of its outputs. That makes teaching students to trace information to original sources even more essential.

    3. The currency of the information (and its training data)

    Students still need to check when something was written or last updated. However, in the AI era, students must understand the cutoff dates of training datasets and whether search tools are connected to real-time information. For example:

    • ChatGPT’s free version (as of early 2025) may only contain information up to mid-2023.
    • A deep search tool might include academic preprints from 2024, but not peer-reviewed journal articles published yesterday.
    • Most tools do not include digitized historical materials that remain in manuscript form: the scans may be available in a digital format, but their content is not yet fully usable as data.

    This time gap matters, especially for fast-changing topics like public health, technology, or current events.

    4. The wording and framing of results

    The title of a website or academic article still matters, but now we must attend to the framing of AI summaries and search result snippets. Are search terms being refined, biased, or manipulated by algorithms to match popular phrasing? Is an AI paraphrasing a source in a way that distorts its meaning? Students must be taught to:

    • Compare summaries to full texts
    • Use advanced search features to control for relevance
    • Recognize tone, bias, and framing in both AI-generated and human-authored materials

    Beyond the internet: Print, databases, and librarians still matter

    It is more tempting than ever to rely solely on the internet, or now on an AI chatbot, for answers. Yet just as in 1997, the best sources are not always the fastest or easiest to use.

    Finding the capital of India on ChatGPT may feel efficient, but cross-checking it in an almanac or reliable encyclopedia reinforces source triangulation. Similarly, viewing a photo of the first atomic bomb on a curated database like the National Archives provides more reliable context than pulling it from a random search result. With deepfake photographs proliferating across the internet, using a reputable image database is essential, and students must be taught how and where to find such resources.

    Additionally, teachers can encourage students to seek balance by using:

    • Print sources
    • Subscription-based academic databases
    • Digital repositories curated by librarians
    • Expert-verified AI research assistants like Elicit or Consensus

    One effective strategy is the continued use of research pathfinders that list sources across multiple formats: books, journals, curated websites, and trusted AI tools. Encouraging assignments that require diverse sources and source types helps to build research resilience.

    Internet-only assignments: Still a trap

    Then as now, it’s unwise to require students to use only specific sources, or only generative AI, for research. A well-rounded approach promotes both information fluency and information gathering from all potentially useful and reliable sources.

    Students must be taught to move beyond the first AI response or web result, so they build the essential skills in:

    • Deep reading
    • Source evaluation
    • Contextual comparison
    • Critical synthesis

    Teachers should avoid giving assignments that limit students to a single source type, especially AI. Instead, they should prompt students to explain why they selected a particular source, how they verified its claims, and what alternative viewpoints they encountered.

    Ethical AI use and academic integrity

    Generative AI tools introduce powerful possibilities, but also a new frontier of plagiarism and uncritical thinking. If a student submits a summary produced by ChatGPT without review or citation, have they truly learned anything? Do they even understand the content?

    To combat this, schools must:

    • Update academic integrity policies to address the use of generative AI, including clear direction to students on when such tools may and may not be used
    • Teach citation standards for AI-generated content
    • Encourage original analysis and synthesis, not just copying and pasting answers

    A responsible prompt might be: “Use a generative AI tool to locate sources, but summarize their arguments in your own words, and cite them directly.”

    In closing: The librarian’s role is more critical than ever

    Today’s information landscape is more complex and powerful than ever, but it is also more prone to automation errors, biases, and superficiality. Students need more than access; they need guidance. That is where the school librarian, media specialist, and digitally literate teacher must collaborate to ensure students are fully prepared for our data-rich world.

    While the tools have evolved, from card catalogs to Google searches to AI copilots, the fundamental need remains to teach students to ask good questions, evaluate what they find, and think deeply about what they believe. Some things haven’t changed: just as in 1997, the best advice to conclude a lesson on research remains, “And if you need help, ask a librarian.”

    Steven M. Baule, Ed.D., Ph.D.


  • Policy Proposals Lack Clarity About How to Evaluate Graduates’ Additional Degrees

    Title: Accounting for Additional Credentials in Postsecondary Earnings Data

    Authors: Jason Delisle, Jason Cohn, and Bryan Cook

    Source: The Urban Institute

    As policymakers across both parties consider how to evaluate postsecondary outcomes and earnings data, the authors of a new brief from the Urban Institute pose a major question: How should students who earn multiple credentials be included in data collection for the college that awarded their first degree?

    For example, should the earnings of a master’s degree recipient be included in the data for the institution where they earned their bachelor’s degree? Additionally, students who finish an associate degree at a community college are likely to earn higher wages once they complete a bachelor’s degree at another institution. Thus, multiple perspectives need to be considered to help both policymakers and institutions understand, interpret, and decide how to treat additional degrees in earnings data.

    Additional key findings include:

    Earnings Data and Accountability Policies

    Many legislative proposals would expand the use of earnings data to provide further accountability and federal aid restrictions. For example, the House Republicans’ College Cost Reduction Act, proposed in 2024, would put institutions at risk of losing funding if they have low student loan repayment rates. The brief’s authors state that the bill does not indicate whether students who earn additional credentials should be counted in the cohort of the institution where they completed their first credential.

    The recently implemented gainful employment rule from the Biden administration is explicit in its inclusion of those who earn additional credentials. Under the rule, students who earn an additional degree are included in the calculations for both the program that awarded their most recent degree and the program that awarded their first credential.

    How Much Do Additional Credentials Affect Earnings Data?

    Determining how much additional credentials affect wages and earnings for different programs is difficult. The first earnings measurement—the first year after students leave school—is usually too early to include additional income information from a second credential.

    Although the entire data picture is lacking, a contrast between first- and fifth-year earnings suggests that the number of students earning additional degrees may be very high for some programs. As an example, students who earn associate degrees in liberal arts and general studies often have some of their quickest increases in earnings during these first five years. A potential explanation is that these students are then completing a bachelor’s degree program at a four-year institution.

    Policy Implications: How Should Earnings Data Approach Subsequent Credentials?

    In general, it seems that many policymakers have not focused on this complicated question of students who earn additional degrees. However, policy and data professionals may benefit from excluding students who earn additional credentials in order to more closely measure programs’ return on investment. This can be especially helpful when examining the costs of bachelor’s programs against their subsequent earnings benchmarks, since it excludes the additional earnings premium generated by master’s programs.

    Additionally, excluding students who earn additional credentials may be particularly valuable to students in making consumer and financial aid decisions if the payoff from a degree is extremely different depending on whether students pursue an additional credential.

    However, some programs are intended to prepare students for an additional degree, and excluding data for students who earn another degree would mean excluding most graduates and paint a misleading picture.

    To read the full report from the Urban Institute, click here.

    —Austin Freeman


