Tag: content

  • Top Hat Unveils AI-Powered Content Enhancer to Fuel Title II Accessibility Compliance


    New capabilities in Top Hat Ace enable educators to quickly and easily transform static course materials into accessible, interactive content.

    TORONTO – October 28, 2025 – Top Hat, the leader in student engagement solutions for higher education, today announced the launch of a powerful new accessibility tool in its AI-powered assistant, Ace. Ace Content Enhancer gives faculty the ability to upload existing course materials into Top Hat and receive actionable guidance to meet WCAG 2.1 AA accessibility standards with minimal effort.

    Following the U.S. Department of Justice’s 2024 Title II ruling, public colleges and universities must ensure all digital content meets WCAG 2.1 AA standards as early as April 2026, depending on institution size. But for most professors, the path to compliance is anything but clear. The rules are highly technical, and without dedicated time or training, it can be challenging to ensure materials are fully compliant. Ace Content Enhancer removes this burden by scanning materials in Top Hat in seconds, identifying issues, and providing recommendations to help content meet the standards for accessibility outlined under Title II.

    “We’re helping educators meet this moment by simplifying compliance and making it easier to create learning experiences that serve all students,” said Maggie Leen, CEO of Top Hat. “More than meeting a mandate, this is an opportunity to create content that’s more engaging, and ultimately more effective in supporting student success.”

    A faster, simpler path to compliant courseware

    With Ace’s AI-powered Content Enhancer, faculty can:

    • Scan materials for accessibility issues instantly. Uploaded or existing content in Top Hat is analyzed in seconds, with specific accessibility concerns in text and images flagged for quick review.
    • Remediate with ease. Recommendations and features like auto-generated alt-text remove guesswork and save time.
    • Improve clarity for all learners. Suggested tone helps make content easier to understand and more effective.
    • Make content more relevant. Use Ace to generate real-world examples tailored to students’ interests, academic goals, or backgrounds to boost engagement.
    • Reinforce learning through practice. Ace will suggest interactive, low-stakes questions to deepen understanding and support active learning.

    “Educators retain full control of their content, while Ace eliminates the guesswork, making accessibility improvements fast, intuitive, and aligned with instructional goals,” said Hong Bui, Chief Product Officer at Top Hat. “We’re providing a guided path forward so that accessibility doesn’t come at the expense of interactivity, creativity, or sound pedagogy.”

    The launch of Ace Content Enhancer reflects Top Hat’s broader commitment to accessibility. It builds on existing capabilities—like automatic transcription of slide content—and reinforces the company’s focus on ensuring all student-facing tools and experiences, across web and mobile, meet WCAG 2.1 AA standards, including readings, assessments, and interactive content.

    About Top Hat

    As the leader in student engagement solutions for higher education, Top Hat enables educators to employ evidence-based teaching practices through interactive content, tools, and activities in in-person, online and hybrid classroom environments. Thousands of faculty at more than 1,500 North American colleges and universities use Top Hat to create personalized, engaging and accessible learning experiences for students before, during, and after class. To learn more, please visit tophat.com.


  • The New Higher Ed SEO Playbook: Content Ecosystems for the AI Era


    Imagine a prospective student asking an AI, “Which colleges offer the best online MBA for working parents?”

    Instead of matching keywords, the AI delivers an answer drawn from credible, connected content that blends facts, context, and intent to guide the decision.

    For higher ed leaders, this represents a major shift. Institutions that adapt will earn greater visibility in search, attract more qualified prospective students, and convert curiosity into enrollment growth. The old playbook of targeting single, high-volume keywords just isn’t enough anymore.

    AI-driven search rewards comprehensive, connected, and trustworthy content ecosystems, and institutions that embrace this approach will be the ones students find first. 

    The AI search shift in higher ed 

    Traditional search engine optimization (SEO) rewarded institutions that could identify the right keywords, create targeted pages, and build backlinks. But generative AI and conversational search have changed the rules of the game. 

    Here’s what’s different now: 

    • From keywords to context: AI search models don’t just match words — they interpret meaning and intent, returning results that connect related topics and concepts. 
    • Authority signals matter more: AI favors sources that consistently provide accurate, in-depth information across multiple touchpoints. 
    • Content is interconnected: A single page doesn’t win on its own. Its value depends on how it fits within the institution’s broader web presence. 

    This shift also raises the bar for internal collaboration. Marketing, enrollment, and IT can no longer work in silos. AI search success depends on shared strategy, consistent messaging, and coordinated execution. 

    The takeaway? Institutions need to stop thinking about SEO as an isolated marketing tactic and start treating it as part of a broader content ecosystem. 

    Why a content ecosystem beats keyword lists 

    A content ecosystem is the interconnected network of program pages, admissions information, faculty bios, student stories, news, and resources — all working together to answer your audiences’ questions. 

    It’s the difference between a brochure and a campus tour. A brochure offers quick facts; a tour immerses prospects in faculty, classrooms, student life, and services—building a fuller, more confident picture. 

    A keyword list is the brochure. A content ecosystem is the tour — immersive, connected, and designed to guide prospects from curiosity to commitment. 

    When built intentionally, a content ecosystem gives institutions three clear advantages in today’s AI-driven search environment: 

    Increased relevance 

    AI search tools don’t look at a single page in isolation; they interpret the relationships between topics across your domain. Internally linked, topic-rich pages show the depth of your expertise and help algorithms recommend your institution for nuanced, conversational queries. 

    Example: A prospective student searching “flexible RN-to-BSN options for full-time nurses” is more likely to find you if your nursing program page is connected to articles on nursing career paths, flexible modality, and student success stories. 

    Compounding authority that builds lasting trust

    Authority isn’t built from one or two high-performing pages. It’s earned when every part of your online presence reinforces your credibility. Program descriptions, faculty bios, and testimonials must align in tone, accuracy, and quality. Outdated or inconsistent details can quickly erode the trust signals AI uses to rank content. 

    Conversion that’s built in 

    A keyword list may bring someone to your site, but a content ecosystem keeps them there and moves them closer to action. When visitors can move seamlessly from an informational blog to a program page to an application guide or chat with an advisor, conversion becomes a natural next step. 

    The most effective ecosystems are living assets — constantly updated, monitored, and optimized to reflect evolving programs and audience needs. For institutions looking to compete in an AI-powered search landscape, that adaptability is the real competitive advantage. 

    Is Your Website Built for AI Search?

    Get a personalized AI Readiness Assessment that identifies gaps, surfaces opportunities, and helps build a digital content strategy that meets the moment.

    How to build an AI-ready content ecosystem 

    At Collegis, we help institutions take a holistic approach that bridges marketing, enrollment, and IT. Here’s how we see it coming together: 

    1. Gather actionable data insights 

    Don’t just chase the most-searched terms. Look at historical enrollment, inquiry trends, and page performance to identify the queries that actually lead to applications and registrations, not just clicks. 

    2. Map content to the student journey 

    From the first touchpoint to enrollment, every content asset should serve a clear purpose: 

    • Top of funnel: Informational articles, career outlooks, program overviews 
    • Middle of funnel: Financial aid resources, student success stories, faculty profiles 
    • Bottom of funnel: Application guides, event sign-ups, chat support 

    Linking these pieces guides prospective students through the decision process seamlessly. 

    3. Optimize for AI discoverability 

    Structured data, schema markup, and well-organized site architecture make it easier for AI tools to interpret and recommend your content. Accuracy and consistency are critical — outdated program descriptions or conflicting statistics can undermine authority signals. 
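
    As a rough illustration of what structured data can look like in practice, here is a minimal sketch (in TypeScript) of schema.org JSON-LD markup describing a hypothetical degree-program page; the field choices, names, and URLs are illustrative assumptions, not a prescribed Collegis template.

    ```typescript
    // Minimal sketch: schema.org JSON-LD for a hypothetical program page.
    // All names, URLs, and field values below are illustrative assumptions.

    interface ProgramPage {
      name: string;              // program title as shown on the page
      description: string;       // short overview description
      providerName: string;      // institution name
      url: string;               // canonical program page URL
      credentialAwarded: string; // e.g., "Bachelor of Science in Nursing"
    }

    // Build a schema.org EducationalOccupationalProgram object for the page.
    function buildProgramSchema(page: ProgramPage): Record<string, unknown> {
      return {
        "@context": "https://schema.org",
        "@type": "EducationalOccupationalProgram",
        name: page.name,
        description: page.description,
        url: page.url,
        educationalCredentialAwarded: page.credentialAwarded,
        provider: {
          "@type": "CollegeOrUniversity",
          name: page.providerName,
        },
      };
    }

    // Render the JSON-LD <script> tag to embed in the page's <head>.
    function renderJsonLd(page: ProgramPage): string {
      const json = JSON.stringify(buildProgramSchema(page), null, 2);
      return `<script type="application/ld+json">\n${json}\n</script>`;
    }

    // Example usage with placeholder values.
    console.log(
      renderJsonLd({
        name: "RN-to-BSN Online",
        description: "A flexible online RN-to-BSN program designed for working nurses.",
        providerName: "Example University",
        url: "https://www.example.edu/programs/rn-to-bsn",
        credentialAwarded: "Bachelor of Science in Nursing",
      })
    );
    ```

    The point is not this specific markup but the principle: machine-readable structure that states unambiguously what a page is about makes it easier for crawlers and AI systems to interpret and connect your content.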

    4. Create continuous feedback loops 

    The work doesn’t stop at publishing. Monitor how content performs in both traditional and AI search, then feed those insights back into planning. AI search algorithms evolve, and so should your content strategy. 

    Turning visibility into meaningful enrollment growth

    AI search is changing how students discover institutions, and how institutions must present themselves online. It’s no longer enough to appear in search results. You need to appear as the most authoritative, most relevant, and most trustworthy source for the questions that matter to prospective students. 

    By building an AI-ready content ecosystem, colleges and universities can meet this challenge head-on, earning not just visibility but the confidence and interest of future learners. 

    Collegis partners with colleges and universities to design content strategies that aren’t just visible but are built to convert and scale across the entire student lifecycle.

    Ready to see how your institution stacks up in the age of AI search?

    Request your AI Readiness Assessment to receive a personalized report outlining your institution’s digital strengths, content gaps, and practical next steps to boost visibility and engagement. It’s your roadmap to staying competitive in an AI-first search landscape.

    Innovation Starts Here

    Higher ed is evolving — don’t get left behind. Explore how Collegis can help your institution thrive.


  • Supreme Court case upholding age-verification for online adult content newly references ‘partially protected speech,’ gives it lesser First Amendment scrutiny


    In Free Speech Coalition v. Paxton, the U.S. Supreme Court broke new ground in applying relaxed First Amendment scrutiny to state-imposed burdens on lawful adult access to obscene-for-minors content. The decision appeared outcome-driven to uphold laws that require websites with specified amounts of sexually explicit material to verify users’ ages. However, the Court indicated the holding applies only “to the extent the State seeks only to verify age,” such that, if handled in a principled manner, FSC v. Paxton should have relevance only for speech to which minors’ access may be constitutionally restricted.

    FSC v. Paxton involved Texas HB 1181’s mandate that online services use “reasonable age verification methods” to ensure those granted access are adults if more than a third of the site’s content is “sexual material harmful to minors,” which the Court treated as content First Amendment law defines as “obscene for minors.” If an adult site knowingly fails to age-verify, Texas’ attorney general may recover civil penalties of up to $10,000 per day, and $250,000 if a minor actually accesses pornographic content. HB 1181 is one of over 20 state adult-content age-verification laws recently passed or enacted.

    Obscenity is among the few categories of speech the First Amendment doesn’t protect. In 1973’s Miller v. California, the Court defined obscenity as speech that (1) taken as a whole appeals primarily to a “prurient interest” in sex (i.e., morbid, unhealthy fixation with it); (2) depicts or describes sexual or excretory conduct in ways patently offensive under contemporary community standards; and (3) taken as a whole, lacks serious literary, artistic, political, or scientific value. The Court has limited the test’s scope to what it calls “hardcore pornography.” Material that is “obscene for minors” is that which satisfies the Miller test as adjusted to minors. Sexually explicit material can thus be obscene for minors but fully protected for adults.

    Under these tests, the government may ban obscene speech and restrict access by those under 18 to speech that is “obscene for minors,” but it cannot cut off adults’ access to non-obscene sexual material.

    It’s long been accepted that, to access adult, potentially obscene-for-minors material in the physical world, showing identification to prove age may be required. So, a law requiring ID to access such content online might seem analogous on its face.

    But online age verification imposes risks that physical ID checks do not. An adult bookstore clerk doesn’t save a photocopy of your license or track the content you access, so there is no stored record of your identity for hackers to steal. These are just some of the reasons surveys consistently show a majority of Americans do not want to provide ID to access online speech — whether adult material or other content, like social media.

    Texas’ HB 1181 is similar to two federal statutes the Supreme Court invalidated around the turn of the millennium. In 1997, the Court in Reno v. ACLU unanimously struck down portions of the Communications Decency Act that criminalized transmitting “obscene or indecent” content. And in 2002’s Ashcroft v. ACLU, it considered whether the Child Online Protection Act violated the First Amendment in seeking to prevent children’s access to “material harmful to minors” in a way that incorporated age verification.

    For decades, the Court has held that statutes regulating speech based on its content must withstand judicial review under strict scrutiny, which requires the government to demonstrate that the law is necessary to serve a compelling government interest and is narrowly tailored to achieve it using the “least restrictive means.” For laws restricting access to online speech, the Court held the laws in Reno and Ashcroft unconstitutional because they failed strict scrutiny. These cases followed in the footsteps of Sable Communications v. FCC (1989) and United States v. Playboy (2000), in which the Court applied strict scrutiny to invalidate laws governing adult material transmitted by phone and on cable television channels, respectively.

    But in FSC v. Paxton, the Court subjected Texas’ age-verification law for online adult content to only intermediate scrutiny. Under this standard of review, a speech regulation survives if it addresses an important government interest unrelated to the suppression of speech, advances that interest in a direct and material way, and does not burden substantially more speech than necessary. The Court justified applying a lower level of scrutiny on the ground that minors have no First Amendment right to access speech that is obscene to them. Accordingly, it reasoned, even if adults have the right to access “obscene for minors” material, it is “not fully protected speech.” From there, the Court concluded that “no person — adult or child — has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.” And it upheld the Texas law under intermediate scrutiny, concluding the regulations only incidentally restrict speech that can be accessed by adults.

    The upshot is that, going forward, it will be easier to justify laws restricting minors’ access to off-limits expression even when those laws burden adults’ access to material that is otherwise lawful for them.

    At the same time, the majority opinion sought to limit the type of content that can be restricted only to material that meets the legal definition of “obscene-for-minors” material, and not anything that might be considered generally inappropriate.

    As the Court held in Brown v. Entertainment Merchants Assn. (2011), “minors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” And in Reno, which involved similar attempts to limit provision of online content to minors, the Court held the government could not ban “patently offensive” and “indecent” (but not obscene) material for everyone in the name of protecting children.

    Free Speech Coalition should not be read as approving age verification laws for online speech generally that do not specifically target “obscene for minors” material. Its narrow focus will not support the recent spate of social media age-verification laws that have met significant judicial disapproval. Such laws have been enjoined in Arkansas, Mississippi, California, Utah, Texas, Ohio, Indiana, and Florida. Most recently, last week a federal court held Georgia’s version “highly likely [to] be unconstitutional” because it interferes with minors’ rights “to engage in protected speech activities.”

    Thus, properly understood, FSC v. Paxton should have limited implications — including that it shouldn’t extend to general age-verification laws in the social media context.

    The risk, of course, is that governments will seek to leverage the FSC v. Paxton decision beyond its limited holding, and/or that lower courts will misuse it, to justify prohibiting or regulating protected speech other than that which is obscene as to minors. In defending laws that implicate the First Amendment, the government often argues it is regulating only conduct, or unprotected speech, or speech “incidental” to criminal conduct.

    Courts for the most part have seen through these attempts at evasion, and where a speech regulation applies based on topic discussed or idea or message expressed, or cannot be justified without reference to its function or content, courts apply strict scrutiny. Under FSC, however, would-be regulators have another label they can use — “partially protected speech” — and the hope that invoking it will lead to intermediate scrutiny.

    Only time will tell if the Court will keep the starch in its First Amendment standards notwithstanding what should be the purple cow of FSC v. Paxton.


  • FIRE statement on Free Speech Coalition v. Paxton upholding age verification for adult content


    Today, the Supreme Court ruled 6-3 to uphold Texas’s age-verification law for sites featuring adult content. The decision in Free Speech Coalition v. Paxton effectively reverses decades of Supreme Court precedent that protects the free speech rights of adults to access information without jumping over government age-verification hurdles.

    FIRE filed an amicus brief in the case, arguing that free expression “requires vigilant protection, and the First Amendment doesn’t permit short cuts.” FIRE believes that the government’s efforts to restrict adults’ access to constitutionally protected information must be carefully tailored, and that Texas’ law failed to do so. 

    The following statement can be attributed to FIRE Chief Counsel Bob Corn-Revere:


    Today’s ruling limits American adults’ access to only that speech which is fit for children — unless they show their papers first.

    After today, adults in the State of Texas must upload sensitive information to access speech that the First Amendment fully protects for them. This wrongheaded, invasive result overturns a generation of precedent and sacrifices anonymity and privacy in the process.

    Data breaches are inevitable. How many will it take before we understand the threat today’s ruling presents?

    Americans will live to regret the day we let the government condition access to protected speech on proof of our identity. FIRE will fight nationwide to ensure that this erosion of our rights goes no further. 


  • Conversation and Coursework: Strategies to Engage Undergraduate Students with Course Content – Faculty Focus



  • Helping students evaluate AI-generated content


    Key points:

    Finding accurate information has long been a cornerstone skill of librarianship and classroom research instruction. When cleaning up some materials on a backup drive, I came across an article I wrote for the September/October 1997 issue of Book Report, a journal directed to secondary school librarians. A generation ago, “asking the librarian” was a typical and often necessary part of a student’s research process. The digital tide has swept in new tools, habits, and expectations. Today’s students rarely line up at the reference desk. Instead, they consult their phones, generative AI bots, and smart search engines that promise answers in seconds. However, educators still need to teach students the ability to be critical consumers of information, whether produced by humans or generated by AI tools.

    Teachers haven’t stopped assigning projects on wolves, genetic engineering, drug abuse, or the Harlem Renaissance, but the way students approach those assignments has changed dramatically. They no longer just “surf the web.” Now, they engage with systems that summarize, synthesize, and even generate research responses in real time.

    In 1997, a keyword search might yield a quirky mix of werewolves, punk bands, and obscure town names alongside academic content. Today, a student may receive a paragraph-long summary, complete with citations, created by a generative AI tool trained on billions of documents. To an eighth grader, if the answer looks polished and is labeled “AI-generated,” it must be true. Students must be taught how AI can hallucinate or simply be wrong at times.

    This presents new challenges, and opportunities, for K-12 educators and librarians in helping students evaluate the validity, purpose, and ethics of the information they encounter. The stakes are higher. The tools are smarter. The educator’s role is more important than ever.

    Teaching the new core four

    To help students become critical consumers of information, educators must still emphasize four essential evaluative criteria, but these must now be framed in the context of AI-generated content and advanced search systems.

    1. The purpose of the information (and the algorithm behind it)

    Students must learn to question not just why a source was created, but why it was shown to them. Is the site, snippet, or AI summary trying to inform, sell, persuade, or entertain? Was it prioritized by an algorithm tuned for clicks or accuracy?

    A modern extension of this conversation includes:

    • Was the response written or summarized by a generative AI tool?
    • Was the site boosted due to paid promotion or engagement metrics?
    • Does the tool used (e.g., ChatGPT, Claude, Perplexity, or Google’s Gemini) cite sources, and can those be verified?

    Understanding both the purpose of the content and the function of the tool retrieving it is now a dual responsibility.

    2. The credibility of the author (and the credibility of the model)

    Students still need to ask: Who created this content? Are they an expert? Do they cite reliable sources? They must also ask:

    • Is this original content or AI-generated text?
    • If it’s from an AI, what sources was it trained on?
    • What biases may be embedded in the model itself?

    Today’s research often begins with a chatbot that cannot cite its sources or verify the truth of its outputs. That makes teaching students to trace information to original sources even more essential.

    3. The currency of the information (and its training data)

    Students still need to check when something was written or last updated. However, in the AI era, students must understand the cutoff dates of training datasets and whether search tools are connected to real-time information. For example:

    • ChatGPT’s free version (as of early 2025) may only contain information up to mid-2023.
    • A deep search tool might include academic preprints from 2024, but not peer-reviewed journal articles published yesterday.
    • Most tools do not include historical records that have been digitized only as scanned manuscripts: the material exists in a digital format, but it is not yet fully usable as searchable data.

    This time gap matters, especially for fast-changing topics like public health, technology, or current events.

    4. The wording and framing of results

    The title of a website or academic article still matters, but now we must attend to the framing of AI summaries and search result snippets. Are search terms being refined, biased, or manipulated by algorithms to match popular phrasing? Is an AI paraphrasing a source in a way that distorts its meaning? Students must be taught to:

    • Compare summaries to full texts
    • Use advanced search features to control for relevance
    • Recognize tone, bias, and framing in both AI-generated and human-authored materials

    Beyond the internet: Print, databases, and librarians still matter

    It is more tempting than ever to rely solely on the internet, or now, on an AI chatbot, for answers. Just as in 1997, the best sources are not always the fastest or easiest to use.

    Finding the capital of India on ChatGPT may feel efficient, but cross-checking it in an almanac or reliable encyclopedia reinforces source triangulation. Similarly, viewing a photo of the first atomic bomb in a curated database like the National Archives provides more reliable context than pulling it from a random search result. With deepfake photographs proliferating across the internet, using a reputable image database is essential, and students must be taught how and where to find such resources.

    Additionally, teachers can encourage students to seek balance by using:

    • Print sources
    • Subscription-based academic databases
    • Digital repositories curated by librarians
    • Expert-verified AI research assistants like Elicit or Consensus

    One effective strategy is the continued use of research pathfinders that list sources across multiple formats: books, journals, curated websites, and trusted AI tools. Encouraging assignments that require diverse sources and source types helps to build research resilience.

    Internet-only assignments: Still a trap

    Then as now, it’s unwise to require students to use only specific sources, or only generative AI, for research. A well-rounded approach promotes information gathering from all potentially useful and reliable sources, as well as information fluency.

    Students must be taught to move beyond the first AI response or web result, so they build the essential skills in:

    • Deep reading
    • Source evaluation
    • Contextual comparison
    • Critical synthesis

    Teachers should avoid giving assignments that limit students to a single source type, especially AI. Instead, they should prompt students to explain why they selected a particular source, how they verified its claims, and what alternative viewpoints they encountered.

    Ethical AI use and academic integrity

    Generative AI tools introduce powerful possibilities, including significant reductions in research time and effort, as well as a new frontier of plagiarism and uncritical thinking. If a student submits a summary produced by ChatGPT without review or citation, have they truly learned anything? Do they even understand the content?

    To combat this, schools must:

    • Update academic integrity policies to address the use of generative AI, including clear direction to students on when and when not to use such tools
    • Teach citation standards for AI-generated content
    • Encourage original analysis and synthesis, not just copying and pasting answers

    A responsible prompt might be: “Use a generative AI tool to locate sources, but summarize their arguments in your own words, and cite them directly.”

    In closing: The librarian’s role is more critical than ever

    Today’s information landscape is more complex and powerful than ever, but more prone to automation errors, biases, and superficiality. Students need more than access; they need guidance. That is where the school librarian, media specialist, and digitally literate teacher must collaborate to ensure students are fully prepared for our data-rich world.

    While the tools have evolved, from card catalogs to Google searches to AI copilots, the fundamental need remains to teach students to ask good questions, evaluate what they find, and think deeply about what they believe. Some things haven’t changed–just like in 1997, the best advice to conclude a lesson on research remains, “And if you need help, ask a librarian.”

    Steven M. Baule, Ed.D., Ph.D.


  • Director of Content and Product Strategy at UM


    For my newest “Featured Gig” installment, I want to highlight the search for a director of content and product strategy at the Center for Academic Innovation at the University of Michigan. Sarah Dysart, chief learning officer at CAI, agreed to answer my questions about the role.

    If you have a job at the intersection of learning, organizational change and technology that you are recruiting for, please get in touch!

    Q: What is the university’s mandate behind this role? How does it help align with and advance the university’s strategic priorities?

    A: The University of Michigan has long staked its reputation on research excellence and public purpose. Now we’re doubling down on scale, access and impact—transforming how learning reaches people across every stage of life, across the globe. Life-changing education is one of four core impact areas within the University of Michigan’s Vision 2034, and the person in the director of content and product strategy role will support this strategic work.

    As Michigan accelerates its investment in digital learning, this person leads the charge: shaping and guiding a dynamic portfolio of educational products—online courses, certificates, degree programs, short-form learning experiences and beyond—that don’t merely mirror the classroom, but reimagine what learning can be. This role calls for both vision and precision, bringing together academic imagination, bold experimentation and the ability to turn ideas into action. The director will steer faculty ideas and institutional goals into cohesive, high-impact offerings that reflect the university’s boldest ambitions for learning at scale.

    Q: Where does the role sit within the university structure? How will the person in this role engage with other units and leaders across campus?

    A: This director role sits within the Center for Academic Innovation, operating at the intersection of ideas and implementation. The individual will collaborate closely with experts in learning design, media production, marketing, operations and research. But the real action is in the connections across campus.

    Michigan’s schools and colleges host a vast breadth and depth of faculty expertise, and this role thrives on cross-campus collaboration—partnering with academic unit leaders, faculty and staff to co-create offerings that extend U-M’s mission far beyond Ann Arbor. Drawing on insights about learner demand and market opportunity, the director will guide faculty in selecting content areas and product types with the greatest potential, translating an idea sketched on a whiteboard into a course reaching learners across the globe.

    Q: What would success look like in one year? Three years? Beyond?

    A: In one year, the new director has helped identify and launch a diverse set of online learning offerings that reflect Michigan’s distinctive strengths. Relationships are strong, internal workflows are humming and early results show promising reach and impact.

    In three years, the content portfolio resembles a greatest hits playlist for lifelong learners—diverse, well-balanced and deeply mission-aligned. It’s something learners want to come back and engage with, time and time again. Offerings address workforce needs, social challenges and global opportunity. Faculty are eager to collaborate. Partners are eager to invest.

    Beyond that, success means transformation. The University of Michigan is recognized not just for what it teaches, but for how it reimagines teaching. Our educational offerings reach far beyond campus, connecting with learners across industries, geographies and life stages. This individual has played a key part in turning a world-class university into a truly global learning institution.

    Q: What kinds of future roles would someone who took this position be prepared for?

    A: We’re looking for someone who wants to shape what’s next—not just for learners, but for institutions. The director of content and product strategy will develop a rare blend of skills: the ability to lead across academic and operational contexts, to translate vision into scalable experiences, and to steward innovation with both purpose and precision.

    From here, a person might go on to lead teaching and learning strategy at an institutional level, head up a center for innovation or lifelong learning, or take on an executive role at an organization working to expand access to education globally. Alternatively, one might pivot toward product leadership in mission-driven companies or foundations, applying their experience to broader systems change.

    This role builds expertise and a portfolio not just of educational content—but of influence, insight and lasting impact.


  • Meta’s content moderation changes closely align with FIRE recommendations


    On Tuesday, Meta* CEO Mark Zuckerberg and Chief Global Affairs Officer Joel Kaplan announced sweeping changes to the content moderation policies at Meta (the owner of Facebook, Instagram, and Threads) with the stated intention of improving free speech and reducing “censorship” on its platforms. The changes simplify policies, replace top-down fact-checking with a Community Notes-style system, reduce opportunities for false positives in automatic content flagging, and allow for greater user control of content feeds. All these changes mirror recommendations FIRE made in its May 2024 Report on Social Media.

    Given that Meta’s platforms boast billions of users, the changes, if implemented, would have major positive implications for free expression online.

    FIRE’s Social Media Report


    In our report, we promoted three principles to improve the state of free expression on social media:

    1. The law should require transparency whenever the government involves itself in social media moderation decisions.
    2. Content moderation policies should be transparent to users, who should be able to appeal moderation decisions that affect them.
    3. Content moderation decisions should be unbiased and should consistently apply the criteria that a platform’s terms of service establish.

    Principle 1 is the only one where FIRE believes government intervention is appropriate and constitutional (and we created a model bill to that effect). Principles 2 and 3 we hoped would enjoy voluntary adoption by social media platforms that wanted to promote freedom of expression. 

    While we don’t know whether these principles influenced Meta’s decision, we’re pleased the promised changes align very well with FIRE’s proposals for how a social media platform committed to free expression could put that commitment into practice.

    Meta’s changes to content moderation structures

    With a candid admission that it believes 10-20% of its millions of daily content removals are mistakes, Meta announced it is taking several actions to expand freedom of expression on the platform. The first is simplification and scaling back of its rules on the boundaries of discourse. According to Zuckerberg and Kaplan:

    [Meta is] getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms. These policy changes may take a few weeks to be fully implemented. 

    While this is promising in and of itself, it will be enhanced by a broad change to the automated systems for content moderation. Meta is restricting its automated flagging to only the most severe policy violations. For lesser policy violations, a user will have to manually report a post for review and possible removal. Additionally, any removal will require the agreement of multiple human reviewers.

    This is consistent with our argument that AI-driven and other automated flagging systems will invariably have issues with false positives, making human review critical. Beyond removals, Meta is increasing the confidence threshold required for deboosting a post suspected of violating policy.

    Who fact-checks the fact checkers?

    Replacing top-down fact-checking with a bottom-up approach based on X’s Community Notes feature may be just about the biggest change announced by Meta. As FIRE noted in the Social Media Report: 

    Mark Zuckerberg famously said he didn’t want Facebook to be the “arbiter of truth.” But, in effect, through choosing a third-party fact checker, Facebook becomes the arbiter of the arbiter of truth. Given that users do not trust social media platforms, this is unlikely to engender trust in the accuracy of fact checks.

    Zuckerberg similarly said in the announcement that Meta’s “fact checkers have just been too politically biased, and have destroyed more trust than they’ve created.”

    Our Social Media Report argued that the Community Notes feature is preferable to top-down fact-checking, because a community of diverse perspectives will likely be “less vulnerable to bias and easier for users to trust than top-down solutions that may reflect the biases of a much smaller number of stakeholders.” Additionally, we argued labeling is more supportive of free expression, being a “more speech” alternative to removal and deboosting.

    We are eager to see the results of this shift. At a minimum, experimentation and innovation in content moderation practices provides critical experience and data to guide future decisions and help platforms improve reliability, fairness, and responsiveness to users.

    User trust and the appearance of bias

    An overall theme in Zuckerberg and Kaplan’s remarks is that biased decision-making has eroded user trust in content moderation at Meta, and these policy changes are aimed at regaining users’ trust. As FIRE argued in our Social Media Report:

    In the case of moderating political speech, any platform that seeks to promote free expression should develop narrow, well-defined, and consistently enforceable rules to minimize the kind of subjectivity that leads to arbitrary and unfair enforcement practices that reduce users’ confidence both in platforms and in the state of free expression online.

    We also argued that perception of bias and flexibility in rules encourages powerful entities like government actors to “work the refs,” including through informal pressure, known as “jawboning.”


    Additionally, when perceived bias drives users to small, ideologically homogeneous alternative platforms, the result can damage broader discourse:

    If users believe their “side” is censored unfairly, many will leave that platform for one where they believe they’ll have more of a fair shake. Because the exodus is ideological in nature, it will drive banned users to new platforms where they are exposed to fewer competing ideas, leading to “group polarization,” the well-documented phenomenon that like-minded groups become more extreme over time. Structures on all social media platforms contribute to polarization, but the homogeneity of alternative platforms turbocharges it.

    These are real problems, and it is not clear whether Meta’s plans will succeed in addressing them, but it is welcome to see them recognized.

    International threats to speech

    Our Social Media Report expressed concern that the Digital Services Act — the broad EU regulation mandating censorship on social media far beyond what U.S. constitutional law allows — would become a least-common-denominator approach for social media companies, even in the United States. Mark Zuckerberg signaled he intends to resist that outcome, stating he planned to work with President Trump to push back on “governments around the world” that are “pushing [companies] to censor more.”

    While we are pleased at the implication that Meta’s platforms will seemingly not change their free expression policies in America at the behest of the EU, the invocation of a social media company working with any government, including the United States government, rings alarm bells for any civil libertarian. We will watch this development closely for that reason. 

    FIRE has often said — and it often bears repeating — the greatest threat to freedom of expression will always come from the government, and as Zuckerberg himself notes, the government has in years past pushed Meta to remove content.

    When the rubber meets the road

    Meta’s commitment to promote freedom of expression on its platforms offers plenty of reasons for cautious optimism. 

    But we do want to emphasize caution. There is, with free expression, often a large gap between stated intentions and what happens when theory meets practice. As a civil liberties watchdog, our duty is to measure promise against performance.

    Take, for example, our measured praise for Elon Musk’s stated commitment to free expression, followed by our frequent criticism when he failed to live up to that commitment. And that criticism hasn’t kept us from giving credit when due to X, such as when it adopted Community Notes. 

    Similarly, FIRE stands ready to help Meta live up to its stated commitments to free expression. You can be sure that we will watch closely and hold them accountable.

    * Meta has donated to FIRE.


  • Read and Listen to Inspiring CUPA-HR Content From 2021 – CUPA-HR


    by CUPA-HR | January 5, 2022

    Throughout 2021, HR practitioners have proven their resilience time and again by positively impacting higher education not only in response to the ever-evolving pandemic, but also in building more flexible, diverse and inclusive workplaces. CUPA-HR captured many of these higher ed success stories, as well as leadership advice, helpful resources and workforce data trends in the following articles, podcasts and blog posts.

    As you read and listen to the inspiring work your HR colleagues are doing at colleges and universities around the country, we encourage you to jot down ideas to take into the year ahead: 

    Retention and Engagement 

    Develop to Retain: Tools and Resources for Higher Ed Professional Development (The Higher Ed Workplace Blog)

    Maintaining Culture and Connection for Remote Employees (The Higher Ed Workplace Blog)

    Stay tuned for an article in the upcoming winter issue of Higher Ed HR Magazine: “Four Areas HR Can Address Now to Boost Retention and Engagement.”

    Future of Work 

    New Report Highlights Changes to the Professional Workforce in the Wake of the Pandemic (The Higher Ed Workplace Blog)

    New Report Highlights Changes to Faculty Workforce in the Wake of the Pandemic (The Higher Ed Workplace Blog)

    Navigating Compliance With a Multi-State Workforce (The Higher Ed Workplace Blog)

    Determining Remote Work Eligibility and Talking to Leadership About Flexible Work (CUPA-HR Soundbite)

    Diversity, Equity, and Inclusion 

    5 CHROs Use CUPA-HR’s DEI Maturity Index to Energize Their DEI Efforts (The Higher Ed Workplace Blog)

    A Mission for Greater Faculty Diversity — Oakland University’s Diversity Advocate Program (Higher Ed HR Magazine)

    Can HR Investigators Be Anti-Racist? — Action Steps to Overcome Racial Bias When Conducting Workplace Investigations (Higher Ed HR Magazine)

    Juneteenth — How Will Your Institution Observe the Day? (The Higher Ed Workplace Blog)

    Supporting the LGBTQ+ Community in Higher Ed — 3 Learning Resources for HR (The Higher Ed Workplace Blog)

    Three Ways HR Can Promote Cultural Appreciation Over Appropriation (The Higher Ed Workplace Blog)

    Boost Your Pay Equity Know-How By Tapping Into These Resources (The Higher Ed Workplace Blog)

    Mental Health

    Mental Health Month Focus: Resources (The Higher Ed Workplace Blog)

    Strategies to Become More Resilient in Work and Life (The Higher Ed Workplace Blog)

    HR Care Package — Resources for Self-Care (The Higher Ed Workplace Blog)

    HR Leadership 

    CUPA-HR Conversations: Higher Ed HR Turns 75 (CUPA-HR Podcast)

    Why Psychological Safety Matters Now More Than Ever (Higher Ed HR Magazine)

    Opening Doors for Strategic Partnerships With Academic Leadership (Higher Ed HR Magazine)


