Blog

  • Year in Review: Updating Your Strategy to Grow Smarter


    Why Higher Ed Institutions Should Use Annual Reviews to Refocus Their Strategy

Organizational development principles teach us that long-term, sustainable change is achieved through making data-driven decisions, continuously learning and adjusting, and intentionally planning for growth. That’s why an annual review of your existing higher education programs and operations is an essential step in building your institution’s success.

    Skipping a review in the planning process puts your institution at risk of repeating mistakes and missing crucial opportunities due to an outdated approach. 

    In every area and at every level, institutions benefit from looking back at data from the previous year for clues into what adjustments are needed for improvement. From holding project retrospectives and conducting marketing audits to tracking enrollment trends and having individual performance check-ins, regular rituals that facilitate reflection on what’s working and what isn’t encourage continuous growth. 

    Similarly, creating an annual review process for your degree programs empowers your institution to assess its efficiency while using data-driven insights to build a strategic road map for the future.

    Change starts from within. The best way to achieve growth is by being honest about where you are now and where you want to be. At Archer, we go through a robust discovery process with our partner institutions to understand the unique challenges and opportunities their online programs and the institutions themselves face in today’s dynamic higher ed landscape.

    Common Pitfalls in Strategy Resets

    Strategy resets often simply recycle old plans instead of applying lessons learned. Contrary to what you may think, plans built on evidence from the past are much easier to implement and keep on track than recycled plans. Here are three common pitfalls your institution should avoid when planning for the future.

    Pitfall #1: Following Assumptions Over Evidence

    When leaders fail to connect data and insights from past performance to future goals, important trends are missed and errors are doomed to be repeated. Analyzing available metrics and assessing risks ultimately leads to more intelligent plans that can increase enrollment and support positive student outcomes. Marrying your intuition and insights with data makes for a stronger strategy. 

    Pitfall #2: Allowing Siloed Departments to Slow Progress

Honest assessments across functional areas only happen when teams work together. Every corner of the university should be represented in the review process: marketing, information technology, enrollment, financial aid, the registrar, faculty, administration, and leadership all need to be aligned.

    This is hard work, but you can start by finding ways to collaborate with a department that you work with regularly and then expand that collaboration to other departments. Look for more opportunities throughout the year for better cross-departmental communication and collaboration. 

    Pitfall #3: Limiting Plans to the Near Future Without Considering the Bigger Picture

Institutions should avoid letting their short-term tactics override their long-term goals. When you tie your institutional goals to your departmental goals, you create a natural flow of work and have an easier time communicating your successes up the chain of command. By regularly reviewing their goals and assessing whether their work is on track or needs revisiting, teams are able to course-correct when necessary — before reaching the end of the year.

    Using Postmortem Frameworks for Smarter Growth

    Fortunately, there are plenty of techniques that can be used to create an effective review process. Postmortem meetings, data analysis, planning worksheets, and open communication can fuel an insightful retrospective. 

    When approached with the intent to learn from the past and find areas for improvement, a postmortem meeting offers a crucial opportunity for an organization to reflect on its progress. In postmortem meetings, individuals and teams consider how successful a project or period of time was and pinpoint what to change moving forward. Risk assessments are also particularly useful to help teams prepare for possible enrollment and market shifts in the future. 

    By harnessing the power of analytics reporting and postmortem agreements, teams can co-create realistic road maps that connect their vision with the institution’s operational capacity. Getting buy-in from all departments by engaging them in cooperative planning gives everyone the chance to discuss their team’s areas of strength and the areas where additional support may be needed. 

    Institutions that follow Archer’s Good, Better, Best framework are able to get a clear view of where they currently stand and what they should prioritize next to achieve consistent growth.

    Key Takeaways

    • Avoid common strategy reset pitfalls by first taking account of where you are now and then determining where you want to be.
    • By leveraging data, collaboration, and iterative improvement strategies, institutions use proven organizational development techniques to stay competitive.
    • Postmortems, planning tools, and governance help leaders sustain their institution’s progress. 

    Let Archer Support Your Year-in-Review Process

    Institutional growth requires a tailored approach, and the path looks different for every organization. At Archer Education, we understand that deep discovery, organizational development, sufficient investment, best-in-class technology, and a laser focus on the student experience are essential.

    Are you ready to expand your institution’s online program offerings, elevate your student enrollments, future-proof your teams, or all of the above? Then allow the talented Archer team to support your institution by helping you establish a year-in-review process and uncover new possibilities for sustainable growth. 

If you’d like to learn more, contact our team and explore our technology-enabled strategy, marketing, enrollment, and retention services today.

    Source link

  • FIRE answers your questions | The Foundation for Individual Rights and Expression


Changes at the Pentagon, Charlie Kirk and cancel culture, free speech and misinformation, globalized censorship, Indiana University, how to support FIRE, and more!

    Timestamps:

00:00 Introductions

02:11 What is the Press Clause, and who does it apply to?

05:53 FIRE’s position on Oklahoma student grading incident

08:50 What does FIRE need from Members besides financial support?

15:59 FIRE’s College Free Speech Rankings and what they mean

19:44 What is the latest on the Ann Selzer cases?

22:08 What is FIRE’s view on the Pentagon press room changes?

24:50 What is the value of small donations? How can FIRE supporters volunteer?

29:21 Indiana University is good at football but bad at free speech

33:46 Are courts trending in a more speech-protective direction?

37:05 Charlie Kirk and cancel culture

39:20 Pro- and anti-Zionist speech and “hostile environment” harassment

43:48 Is “globalize the intifada” incitement?

45:07 How does FIRE distinguish between free speech and misinformation?

47:54 Can FIRE help supporters start free speech alumni groups?

48:55 Free speech, artificial intelligence, and copyright/trademarks

51:51 The sordid legacy of Hazelwood v. Kuhlmeier

53:22 Staying hopeful amidst so much hypocrisy

55:32 Global speech platforms and censorship

58:14 Differences between FIRE and the ACLU?

59:34 Does FIRE have a Substack? (The Eternally Radical Idea, So to Speak, Expression)

1:00:03 Closing remarks

    Joining us:

• Alisha Glennon, chief operating officer

• Nico Perrino, executive vice president

• Greg Lukianoff, president and CEO

• Will Creeley, legal director

Become a paid subscriber today to receive invitations to future live webinars.

If you became a FIRE Member through a donation to FIRE at thefire.org and would like access to Substack’s paid subscriber podcast feed, please email [email protected].

    Source link

  • LAWSUIT: Tennessee state employee sues after unlawful firing for Charlie Kirk post


    • Monica Meeks is a combat veteran and lifelong public servant fired for criticizing Charlie Kirk from her personal Facebook shortly after his assassination.
    • Under the First Amendment, public employers can’t fire people simply because the government doesn’t approve of their off-duty speech.
    • FIRE is suing the Tennessee Commissioner of Commerce and Insurance on Monica’s behalf, seeking reinstatement and damages.

    NASHVILLE, Dec. 10, 2025 — The Foundation for Individual Rights and Expression filed a federal lawsuit today on behalf of Monica Meeks, a Tennessee public employee unlawfully fired from her state government job solely for criticizing Charlie Kirk in a Facebook comment after his assassination.

    “Our democracy suffers when public employees fear to voice what they are free to think,” said FIRE senior attorney Greg Greubel. “There are more than 23 million government employees across the country — and they can’t be fired simply because their boss or folks online don’t like the opinions they share off the clock.”

    After serving 20 years in the U.S. Army, including a tour of duty in Iraq, Monica joined the Tennessee Department of Commerce and Insurance in 2016. Since joining the department, Monica has received stellar performance reviews and regular raises.

    “I’ve never backed down from a fight in my life, and I don’t plan to start now,” said Monica. “I took an oath to defend the Constitution. Now, it’s time to stand up for it again.”


    In her private life, Monica is politically engaged and even ran for the Tennessee House of Representatives in 2022 as an independent candidate. In her free time, she enjoys joking around and trading hot takes with her old Army “battle buddies” on Facebook. After the assassination of conservative activist Charlie Kirk, Monica responded to a friend’s post about Kirk with the remark, “The way you tap dance for White Supremacist should be studied!”

    Monica’s post was never intended to go further than two friends amiably sparring over politics — as millions of Americans do every day. But the post escaped her personal circle, and she quickly became swept up in the wave of cancellation attempts that followed the Kirk assassination.

    Only 15 or so X accounts called for Monica to be fired in response to an unrelated post by the Department on the afternoon of September 12. That includes comments marked as “probable spam,” and posts from anonymous accounts like “Bonerville Asskicker” and “NonGMOKaren.” But Tennessee Department of Commerce and Insurance Commissioner Carter Lawrence publicly announced her firing mere hours later, and sent a termination letter to Monica’s inbox. Lawrence’s letter mentioned no other performance issues whatsoever, nor any disruption to department operations, and made clear he was firing Monica solely for her lone “inflammatory and insulting comment” on Facebook.

    “You may disagree with Monica’s take on Charlie Kirk. But letting a few angry individuals get a public employee fired for off-the-clock speech, even when it has no impact on the workplace, will inevitably boomerang back on people with views you do support,” said FIRE staff attorney Cary Davis. “When public employees are forced into silence for fear of offending someone on the internet, we all lose.”

    Lawrence’s rush to fire Monica violated Supreme Court precedent, which established a three-prong test to determine when a government employee’s speech is constitutionally protected and cannot be punished by the state. First, the employee must speak “as a citizen” rather than as an employee. Second, the speech must involve “a matter of public concern.” Third, the employee’s interest in exercising their right to free expression must outweigh the state’s interest in ensuring effective government operations.

    Monica’s post easily clears all three hurdles:

    1. Monica clearly went to great lengths to establish that she was speaking as a private citizen. Her Facebook had a disclaimer that her views were hers and hers alone, and her profile didn’t even mention that she worked for the department.
    2. Monica’s post obviously involved a matter of public concern. The fact that others might vehemently disagree with her view of Kirk doesn’t change the fact that it was a major news story with political reverberations across the country.
    3. There is no evidence Monica’s post had any disruptive effect on the department or her work for it. Lawrence’s letter cited complaints about the post by members of the public, but there’s no evidence any coworkers complained, or that her opinions on Kirk would in any way impede her ability to investigate financial services fraud. It was hostility to Monica’s politics that drove the decision — not any legitimate government concern.

    FIRE is asking the U.S. District Court for the Middle District of Tennessee to find that Lawrence retaliated against Monica for exercising her clearly established First Amendment rights, and to award her damages and reinstate her to her position. And because Lawrence clearly disregarded her constitutional rights, FIRE is also seeking punitive damages for Monica. Melody Fowler-Green of Yezbak Law Offices is serving as local counsel in the case. 


    The Foundation for Individual Rights and Expression (FIRE) is a nonpartisan, nonprofit organization dedicated to defending and sustaining the individual rights of all Americans to free speech and free thought — the most essential qualities of liberty. FIRE educates Americans about the importance of these inalienable rights, promotes a culture of respect for these rights, and provides the means to preserve them.

    CONTACT:

    Alex Griswold, Communications Campaign Manager, FIRE: 215-717-3473; [email protected]

    Source link

  • Everything you need to know about REF 2029


REF 2029 has been unpaused, and with it will undoubtedly come a whole new wave of disagreement and debate. Much like the research ecosystem itself, it is an unpredictable beast, forever buffeted by its participants, leaders, and funders.

To get straight to the headlines: People, Culture, and Environment (PCE) has been relabelled Strategy, People and Research Environment (SPRE). The weighting for the new element is 20 per cent of the total, down from the 25 per cent originally allocated to PCE. Contribution to Knowledge and Understanding (CKU) (the output one) has been boosted to 55 per cent, and Engagement and Impact (E&I) (the impact case study one) has remained at 25 per cent.

There is a significant attempt to reduce the burden of the exercise by reverting to some of the narrative practices of REF 2021 for research environments while reducing the need for new data to be collected. E&I has remained pretty much the same, and there are some concessions on portability that will only partially assuage the worries of those concerned about this sort of thing.

    So: a bit more for the things that researchers produce and a bit less for how they produce them.

Strategy, people and research environment

The big frame for REF 2029 has been that research is a team sport. This is why Research England and its devolved counterparts have sought to remove the relationship between researchers and research outputs. However, this led to a perpetual debate over who actually produces research, institutions or researchers, and whose work should be measured. In effect, should an average researcher be boosted by an exceptional research environment, or should an exceptional researcher be held back by an average one?

SPRE asks institutions and units to demonstrate how their strategies contribute toward the development of people and good research environments. This will be done primarily through narrative, with metrics in support. There is flexibility in how providers may demonstrate their work in this area, but the core idea is that the work should be accompanied by a clear strategic intent.

    The actual basket of work that can fit under SPRE is varied and might include evidence of improving research cultures, new partnerships, collaborations, the development of new policies, and a range of evidence of improving culture metrics. The major change from what had been proposed is the underpinning focus on strategy and by extension the broader range of activity providers are likely to submit. Culture is still very much there but it is part of a range of activity.

SPRE will be assessed at both an institution and a unit level. The assessment will be through a statement similar to the unit-level statements from the environment element of the 2021 exercise. However, the institution-level score will make up 60 per cent of the overall score for each Unit of Assessment (UoA), and documentation linked to the UoA itself will make up the remaining 40 per cent. In effect, this means that the research infrastructure of the institution will have a greater impact than the research infrastructure of the unit where research is actually produced.
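As a back-of-the-envelope illustration, the weightings described here can be combined in a few lines of Python. The 55/25/20 element weightings and the 60:40 institution:unit split for SPRE come from the article; the 0–4 quality-profile scale and the example scores are purely hypothetical assumptions, not part of the published rules.

```python
# Hypothetical sketch of the REF 2029 weightings described above.
# Weightings (CKU 55%, E&I 25%, SPRE 20%; SPRE split 60:40
# institution:unit) are from the article; scores are made up.

ELEMENT_WEIGHTS = {"CKU": 0.55, "EI": 0.25, "SPRE": 0.20}

def spre_score(institution_score: float, unit_score: float) -> float:
    """Blend SPRE scores using the 60:40 institution:unit split."""
    return 0.60 * institution_score + 0.40 * unit_score

def overall_score(cku: float, ei: float,
                  institution_spre: float, unit_spre: float) -> float:
    """Combine the three elements using the overall element weightings."""
    spre = spre_score(institution_spre, unit_spre)
    return (ELEMENT_WEIGHTS["CKU"] * cku
            + ELEMENT_WEIGHTS["EI"] * ei
            + ELEMENT_WEIGHTS["SPRE"] * spre)

# Example (scores on an assumed 0-4 scale): strong outputs and impact,
# weaker institutional environment than unit environment.
print(round(overall_score(cku=3.5, ei=3.0,
                          institution_spre=2.0, unit_spre=3.5), 3))
```

Note how the 60:40 split means a unit with an excellent local environment (3.5) is pulled down toward the weaker institutional score (2.0), which is precisely the dynamic the article flags.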

    The changes to SPRE have partially emerged from the PCE pilots. Their conclusion was that it would be possible to assess PCE, but that the approach would need some adaptations for a full scale exercise. Some of the challenges included: the phenomenon of larger institutions scoring better purely because they had access to more evidence, the need for simple and timely data collection, and a need for clearer guidance and simpler processes. In short, it is technically possible to measure PCE in a robust way but it is hard to implement – which was a view shared by many at the start of the exercise.

    Measures

The argument in favour of the 60:40 split is that it incentivises providers to improve their research environments across the whole institution. In what will be partially good news to the minister, there is also a renewed focus on rewarding providers that are focussed on aligning their activities with their strategic intent in people, research, and environments.

While we do not yet have all of the criteria, the submission burden seems to be lower than many feared. As well as the statements at a unit and institution level, there will be a data requirement at an institutional level, which it is anticipated may include: which units are submitted, volume, research doctoral degree awards, and annual research income by source.

At a unit level there is a similar set of measures, with some nuance. In the initial set of decisions it was proposed that PCE (now SPRE) could include:

    […]EDI data (that are already collected via the HESA staff record), quantitative or qualitative information on the career progression and paths of current and former research staff, outcomes of staff surveys, data around open research practices, and qualitative information on approaches to improve research robustness and reproducibility.

    There are criteria yet to be published but it is suggested that issues of equality will be looked at primarily through the statements, and through calibration with the People and Diversity Advisory Panel and the Research Diversity Advisory Panel during the panel assessment stage. The data burden will be less and ideally not newly collected.

    CKU OK

    SPRE will also now be the place where institutions submit context, structure, and strategy, about their units. Disciplinary statements have been removed entirely from Contribution to Knowledge and Understanding (CKU) and Engagement and Impact (E&I). This might look like a rearranging of the same information but it also impacts the overall weightings.

    In the previous model CKU accounted for 50 per cent overall including outputs and the statement. In effect, CKU now accounts for 55 per cent of the weighting while focusing only on outputs.

REF thus remains an exercise that is still majority-weighted toward the perceived quality of research outputs. There is now an upper limit on individual submissions of five per unit unless there is an explanation of why this is exceeded. However, there is no requirement that every researcher submits (the decoupling process). Providers will have to produce a statement on how their submissions are representative, and each unit will be expected to provide an overview of its work and a statement of representation.

On the other big debate, the portability rules have remained broadly the same. To recap, in REF 2014 the whole output was captured by whichever institution a researcher was at on the REF census date. In REF 2021, if a researcher moved between institutions the output was captured by both. In REF 2029 the initial proposal was that the output would be captured by the institution where there is a “substantive link.” Research England has made a slight concession and will allow long-form submissions to be portable for a five-year period with sufficient justification.

    What remains unresolved

    There is a political element to all of this, of course. In the post-16 white paper it was made explicit that

    We anticipate that institutions will be recognised and rewarded, including through the Research Excellence Framework (REF) and Quality Related funding, for demonstrating clarity of purpose, demonstrating alignment with government priorities, and for measurable impact, where appropriate. While government will continue to invest across the full spectrum of research, we expect universities to be explicit about their contributions and to use this framework to guide strategic decisions.

REF 2029, as currently set out, does not do this. Unless there are further announcements on the relationship between REF and funding, REF will do a different version of what it has always done. It assesses the research that is put in front of it. There is no additional weighting for alignment to government priorities, there are no changes to impact measurements, and while there is a focus on alignment between activity and strategic intent, it is up to institutions to define what that strategic intent is. There have been efforts to reduce the burden since the initial decisions, but this does not seem to be a significantly less burdensome activity than REF 2021.

The minister might be pleased that the word “strategy” has replaced “culture” and by some fiddling with the weightings, but the direction of travel across the whole exercise has remained broadly intact. It is not quite the cultural revolution that was promised, nor is it the output-focussed exercise that some wanted. It’s a bit of a compromise, but only a little bit.

    What now

    The response of the sector will largely determine whether these changes are viewed as a success. Ultimately, REF is a political project. It is not simply an input into the dispassionate allocation of public money but requires decisions on what is valued. There is a version of the REF which is only about research outputs. There is a possible version which is only about research environments and there are hundreds of weighting, criteria, frameworks, rules, and regulations in between.

The reasonable criticism of Research England is that it made radical changes after REF 2021 and could not bring the sector with it for REF 2029. At times, it felt like the public explanation was about how a series of technical changes to the exercise achieved a set of good outcomes for the sector, without vigorously explaining what good was, who would lose out, and why the trade-offs were worth it.

These new decisions are either a messy middle ground or a genius compromise. They cede ground to those concerned about outputs by changing weightings and moving criteria, but they maintain culture as a key focus. They provide room to include more culture-focussed statements without complex metrics. And they are politically astute enough to talk about strategy, even if the strategy isn’t the same as the government’s in every institution.

The worst possible result would be the ongoing argument between providers, between providers and funders, and between funders and government. The unedifying spectacle of a noisy debate on why elements of the sector’s own research exercise are not fit for purpose distracts from both the enormous administrative burden of the exercise and the political case for why the sector should command significant research funding.

    Source link

  • Re-thinking research support for English universities: Research England’s programme of work during the REF 2029 pause


    In September, Science Minister Lord Vallance announced a pause to developing REF 2029 to allow REF and the funding bodies to take stock. Today, REF 2029 work resumes with a refreshed focus to support a UK research system that delivers knowledge and innovation with impact, improving lives and creating growth across the country.

    Research England has undertaken a parallel programme of work during the pause, intended to deliver outcomes that align with Government’s priorities and vision for higher education as outlined in the recently published Post-16 Education and Skills white paper. Calling this a pause doesn’t reflect the complexity, pace and challenge faced in delivering the programme over the last three months.

    Since September, we have:

• explored the option of baseline performance in research culture being a condition of funding
• considered how our funding allocation mechanisms in England could be modified to better reward quality, as part of our ongoing review of Strategic Institutional Research Funding (SIRF)
• fast-tracked existing activity related to the allocation of mainstream quality-related research funding (QR)
• developed our plans to consider the future of research assessment.

To progress this work over the last three months, we’ve engaged thoughtfully with groups across the English higher education and research sector, as well as with the devolved funding bodies, to help us understand the wider context and refine our approaches. Let me outline where we’ve got to – and where we’re going next – with the work we’ve been doing.

    Setting a baseline for research cultures

Each university, department and team is unique. They have their own values, priorities and ways of working. I therefore like to think of ‘research cultures and environments’, using the term in plural, to reflect this diversity. The report of the REF People, Culture and Environment pilot, also published today, confirms that there is excellent practice in this area across the higher education sector. REF 2029 offers an opportunity to recognise and reward those institutions and units that are creating the open, inclusive and collaborative environments that enable excellent research and researchers to thrive.

    At the same time, we think there are some minimum standards that should be expected of all providers in receipt of public funding. To promote these standards, we will be strengthening the terms and conditions of Research England funding related to research culture. In the first instance, this will mean a shift from expecting certain standards to be met, to requiring institutions to meet them.

    We are very conscious not to increase burden on the sector or create unnecessary bureaucracy. This will only succeed by engaging closely with the sector to understand how this can work effectively in practice. To this end, we will be engaging with groups in early 2026 to establish rigorous standards that are relevant across the diversity of English institutions. As far as possible, we will use existing reporting mechanisms such as the annual assurance report provided by signatories to the Research Integrity Concordat. While meeting the conditions will not be optional, we will support institutions that don’t yet meet all the requirements, working together and utilising additional reporting to help with and monitor improvements. And because research cultures aren’t static, we will evolve our conditions over time to reflect changes in the sector.

    This will lead to sector-wide improvements that we can all get behind:

    • support for everyone who contributes to excellent and impactful research: researchers, technicians and others in vital research-enabling roles, across all career stages
    • ensuring research in England continues to be done with integrity
• ensuring that it is also done openly
    • strengthening responsible research assessment.

    Our next steps are to engage with the sector and relevant groups as part of the process of making changes to our terms and conditions of funding, and to establish low-burden assurance mechanisms. For example, working as part of the Researcher Development Concordat Strategy Group, we will collectively streamline and strengthen the concordat, making it easier for institutions to implement this important cross-sectoral agreement.

    These changes will complement the assessment of excellent research environments in the REF and the inspiring practice we see across the sector. Championing vibrant research cultures and environments is a mission that transcends the REF — it’s the foundation for maintaining and enhancing the UK’s world-leading research, and we will continue to work with the devolved funding bodies to fulfil the mission.

    Modelling funding mechanisms

The formula-based, flexible research funding Research England distributes to English universities is crucial to underpinning the HE research landscape and supporting the financial sustainability of the sector. We are aware that this funding is increasingly being spread more thinly.

As part of the review of strategic institutional research funding (SIRF), we are working to understand the wider effectiveness of our funding approaches and to consider alternative allocation mechanisms. Work on this review is continuing at speed. We will provide an update to the sector on progress next year, alongside publication of the independent evaluation of SIRF, anticipated in early 2026.

    Building on this, we have been considering how our existing mechanisms in England could be modified to better reward quality of research. This work looks at how different strands of SIRF – from mainstream QR to specialist provider funding – overlap, and how that affects university finances across English regions and across institution types. We are continuing to explore options for refining our mainstream QR formula and considering the consequences of those different options. This is a complex piece of work, requiring greater time and attention, and we expect next year to be a key period of engagement with the sector.

    The journey ahead

    While it may seem early to start thinking about assessment after REF 2029, approaches to research assessment are evolving rapidly and it is important that we are able to embrace the opportunities offered by new technologies and data sources when the moment comes. We have heard loud and clear that early clarity on guidance reduces burden for institutions and we want to be ready to offer that clarity. A programme of work that maximises the opportunity offered by REF 2029 to shape the foundation for future frameworks will be commencing in spring 2026.

    Another priority will be to consider how Research England as the funding body for England, and as part of UKRI, can support the government’s aim to encourage a greater focus on areas of strength in the English higher education sector, drawing on the excellence within all our institutions. As I said at the ARMA conference earlier in the year, there is a real opportunity for universities to identify and focus on the unique contributions they make in research.

    The end of the year will provide the sector (and my colleagues in Research England and the REF teams) with some much-needed rest. January 2026 will see us pick back up a reinvigorated SIRF review, informed by the REF pause activity. We will continue to refine our research funding and policy to – as UKRI’s new mission so deftly puts it – advance knowledge, improve lives and drive growth.

    Source link

  • Pell Grant program faces up to $11B annual budget shortfall

    Dive Brief: 

• The Pell Grant program faces a 10-year shortfall of up to $97 billion, with the recent expansion to include short-term workforce programs adding to existing structural funding problems, according to a Friday analysis from the nonprofit Committee for a Responsible Federal Budget (CRFB). 
    • The massive spending package Republicans passed this summer, called the One Big Beautiful Bill Act, gave the Pell Grant program $10.5 billion in one-time funding to avoid a looming budget shortfall. However, this will only delay the shortfall, according to analysts.
    • CRFB expects the Pell Grant program’s costs to exceed its funding by $6 billion to $11 billion each year over the next decade. “The underlying structural gap between costs and appropriations remains unaddressed, and in fact was made worse under OBBBA,” the analysts said.

    Dive Insight:  

    Before Republicans passed their spending package, the Pell Grant program was expected to deplete its reserves by the 2025 fiscal year. With the $10.5 billion infusion, lawmakers staved off that crisis — but only by about two years, according to CRFB’s analysis. 

    That’s in part because the legislative package also expands Pell Grant funding to programs as short as eight weeks, starting in July 2026. CRFB pointed to Congressional Budget Office data estimating that the expansion, known as Workforce Pell, will add about $2 billion to the program’s costs over the next decade. 

But authors of Friday’s analysis expect this number to be much higher — $6 billion or more — depending on how many students apply for Workforce Pell, how states and institutions carry out the program, and how the U.S. Department of Education interprets and enforces the accountability measures established by Congress. 

    “History suggests that when new eligibility is created, enrollment often exceeds initial projections,” analysts said, citing a 2020 report on proposals at the time for short-term Pell from New America, a left-leaning think tank.

In 2008, lawmakers expanded Pell Grants to be available year-round. At the time, the CBO estimated the program would cost $2.6 billion over the next five years. But in 2011, a U.S. Education Department official testified before Congress that the expansion was costing 10 times more annually than projected. 

    Similarly, in 2005, Congress lifted restrictions on federal student aid flowing to fully online colleges. While the Education Department expected the change to cost $697 million over 10 years, online-only colleges received “billions in federal aid dollars” in the 2018-19 award year alone, New America found. 

    In Friday’s analysis, researchers estimated the Pell Grant program would face a $61 billion 10-year shortfall if lawmakers keep its appropriations adjusted for inflation and maintain the maximum award of $7,395. If lawmakers keep both appropriations and the maximum award flat, that shortfall would reach $88 billion. 

    Moreover, the shortfall would hit $97 billion if lawmakers raise Pell Grant funding and the maximum award in line with inflation and Workforce Pell enrollment outpaces expectations, the researchers estimated. 

    The Education Department is meeting this week with selected students, employers, college officials and other stakeholders in a process known as negotiated rulemaking to work out regulations for implementing the new program. Under the 2025 statute, short-term programs must have a 70% job placement rate and a 70% graduation rate to be eligible for Pell Grants. 

In a draft of regulatory language released last week, the Education Department proposed that, for the first couple of years of the program, job placements would count regardless of what fields students enter. However, after the 2027-28 award year, programs would have to show that at least 70% of their students land jobs specifically in the fields for which they were being trained.

  • How AI can fix PD for teachers

    The PD problem we know too well: A flustered woman bursts into the room, late and disoriented. She’s carrying a shawl and a laptop she doesn’t know how to use. She refers to herself as a literacy expert named Linda, but within minutes she’s asking teachers to “dance for literacy,” assigning “elbow partners,” and insisting the district already has workbooks no one’s ever seen (awalmartparkinglott, 2025). It’s chaotic. It’s exaggerated. And it’s painfully familiar.

    This viral satire, originally posted on Instagram and TikTok, resonates with educators not because it’s absurd but because it mirrors the worst of professional development. Many teachers have experienced PD sessions that are disorganized, disconnected from practice, or delivered by outsiders who misunderstand the local context.

    The implementation gap

    Despite decades of research on what makes professional development effective–including a focus on content, active learning, and sustained support (Darling-Hammond et al., 2017; Joseph, 2024)–too many sessions remain generic, compliance-driven, or disconnected from day-to-day teaching realities. Instructional coaching is powerful but costly (Kraft et al., 2018), and while collaborative learning communities show promise, they are difficult to maintain over time.

    Often, the challenge is not the quality of the ideas but the systems needed to carry them forward. Leaders struggle to design relevant experiences that sustain momentum, and teachers return to classrooms without clear supports for application or follow-through. For all the time and money invested in PD, the implementation gap remains wide.

    The AI opportunity

    Artificial intelligence is not a replacement for thoughtful design or skilled facilitation, but it can strengthen how we plan, deliver, and sustain professional learning. From customizing agendas and differentiating materials to scaling coaching and mapping long-term growth, AI offers concrete ways to make PD more responsive and effective (Sahota, 2024; Adams & Middleton, 2024; Tan et al., 2025).

    The most promising applications do not attempt one-size-fits-all fixes, but instead address persistent challenges piece by piece, enabling educators to lead smarter and more strategically.

    Reducing clerical load of PD planning

    Before any PD session begins, there is a quiet mountain of invisible work: drafting the description, objectives, and agenda; building slide decks; designing handouts; creating flyers; aligning materials to standards; and managing time, space, and roles. For many school leaders, this clerical load consumes hours, leaving little room for designing rich learning experiences.

    AI-powered platforms can generate foundational materials in minutes. A simple prompt can produce a standards-aligned agenda, transform text into a slide deck, or create a branded flyer. Tools like Gamma and Canva streamline visual design, while bots such as the PD Workshop Planner or CK-12’s PD Session Designer tailor agendas to grade levels or instructional goals.

    By shifting these repetitive tasks to automation, leaders free more time for content design, strategic alignment, and participant engagement. AI does not just save time–it restores it, enabling leaders to focus on thoughtful, human-centered professional learning.

    Scaling coaching and sustained practice

    Instructional coaching is impactful but expensive and time-intensive, limiting access for many teachers. Too often, PD is delivered without meaningful follow-up, and sustained impact is rarely evident.

    AI can help extend the reach of coaching by aligning supports with district improvement plans, teacher and student data, or staff self-assessments. Subscription-based tools like Edthena’s AI Coach provide asynchronous, video-based feedback, allowing teachers to upload lesson recordings and receive targeted suggestions over time (Edthena, 2025). Project Café (Adams & Middleton, 2024) uses generative AI to analyze classroom videos and offer timely, data-driven feedback on instructional practices.

    AI-driven simulations, virtual classrooms, and annotated student work samples (Annenberg Institute, 2024) offer scalable opportunities for teachers to practice classroom management, refine feedback strategies, and calibrate rubrics. Custom AI-powered chatbots can facilitate virtual PLCs, connecting educators to co-plan and share ideas.

    A recent study introduced Novobo, an AI “mentee” that teachers train together using gestures and voice; by teaching the AI, teachers externalized and reflected on tacit skills, strengthening peer collaboration (Jiang et al., 2025). These innovations do not replace coaches but ensure continuous growth where traditional systems fall short.

    Supporting long-term professional growth

Most professional development is episodic, lacks continuity, and fails to align with teachers’ evolving goals. Sahota (2024) likens AI to a GPS for professional growth, guiding educators to set long-term goals, identify skill gaps, and access learning opportunities aligned with aspirations.

    AI-powered PD systems can generate individualized learning maps and recommend courses tailored to specific roles or licensure pathways (O’Connell & Baule, 2025). Machine learning algorithms can analyze a teacher’s interests, prior coursework, and broader labor market trends to develop adaptive professional learning plans (Annenberg Institute, 2024).

    Yet goal setting is not enough; as Tan et al. (2025) note, many initiatives fail due to weak implementation. AI can close this gap by offering ongoing insights, personalized recommendations, and formative data that sustain growth well beyond the initial workshop.

    Making virtual PD more flexible and inclusive

    Virtual PD often mirrors traditional formats, forcing all participants into the same live sessions regardless of schedule, learning style, or language access.

    Generative AI tools allow leaders to convert live sessions into asynchronous modules that teachers can revisit anytime. Platforms like Otter.ai can transcribe meetings, generate summaries, and tag key takeaways, enabling absent participants to catch up and multilingual staff to access translated transcripts.

    AI can adapt materials for different reading levels, offer language translations, and customize pacing to fit individual schedules, ensuring PD is rigorous yet accessible.

    Improving feedback and evaluation

    Professional development is too often evaluated based on attendance or satisfaction surveys, with little attention to implementation or student outcomes. Many well-intentioned initiatives fail due to insufficient follow-through and weak support (Carney & Pizzuto, 2024).

    Guskey’s (2000) five levels of evaluation, from initial reaction to student impact, remain a powerful framework. AI enhances this approach by automating assessments, generating surveys, and analyzing responses to surface themes and gaps. In PLCs, AI can support educators with item analysis and student work review, offering insights that guide instructional adjustments and build evidence-informed PD systems.

    Getting started: Practical moves for school leaders

    School leaders can integrate AI by starting small: use PD Workshop Planner, Gamma, or Canva to streamline agenda design; make sessions more inclusive with Otter.ai; pilot AI coaching tools to extend feedback between sessions; and apply Guskey’s framework with AI analysis to strengthen implementation.

    These actions shift focus from clerical work to instructional impact.

    Ethical use, equity, and privacy considerations

    While AI offers promise, risks must be addressed. Financial and infrastructure disparities can widen the digital divide, leaving under-resourced schools unable to access these tools (Center on Reinventing Public Education, 2024).

    Issues of data privacy and ethical use are critical: who owns performance data, how it is stored, and how it is used for decision-making must be clear. Language translation and AI-generated feedback require caution, as cultural nuance and professional judgment cannot be replicated by algorithms.

    Over-reliance on automation risks diminishing teacher agency and relational aspects of growth. Responsible AI integration demands transparency, equitable access, and safeguards that protect educators and communities.

    Conclusion: Smarter PD is within reach

    Teachers deserve professional learning that respects their time, builds on their expertise, and leads to lasting instructional improvement. By addressing design and implementation challenges that have plagued PD for decades, AI provides a pathway to better, not just different, professional learning.

    Leaders need not overhaul systems overnight; piloting small, strategic AI applications can signal a shift toward valuing time, relevance, and real implementation. Smarter, more human-centered PD is within reach if we build it intentionally and ethically.

    References

    Adams, D., & Middleton, A. (2024, May 7). AI tool shows teachers what they do in the classroom—and how to do it better. The 74. https://www.the74million.org/article/opinion-ai-tool-shows-teachers-what-they-do-in-the-classroom-and-how-to-do-it-better

    Annenberg Institute. (2024). AI in professional learning: Navigating opportunities and challenges for educators. Brown University. https://annenberg.brown.edu/sites/default/files/AI%20in%20Professional%20Learning.pdf

    awalmartparkinglott. (2025, August 5). The PD presenter that makes 4x your salary [Video]. Instagram. https://www.instagram.com/reel/DMGrbUsPbnO/

    Carney, S., & Pizzuto, D. (2024). Implement with IMPACT: A framework for making your PD stick. Learning Forward Publishing.

    Center on Reinventing Public Education. (2024, June 12). AI is coming to U.S. classrooms, but who will benefit? https://crpe.org/ai-is-coming-to-u-s-classrooms-but-who-will-benefit/

    Darling-Hammond, L., Hyler, M. E., & Gardner, M. (2017). Effective teacher professional development. Learning Policy Institute. https://learningpolicyinstitute.org/sites/default/files/product-files/Effective_Teacher_Professional_Development_REPORT.pdf

    Edthena. (2025). AI Coach for teachers. https://www.edthena.com/ai-coach-for-teachers/

    Guskey, T. R. (2000). Evaluating professional development. Corwin Press.

    Jiang, J., Huang, K., Martinez-Maldonado, R., Zeng, H., Gong, D., & An, P. (2025, May 29). Novobo: Supporting teachers’ peer learning of instructional gestures by teaching a mentee AI-agent together [Preprint]. arXiv. https://arxiv.org/abs/2505.17557

    Joseph, B. (2024, October). It takes a village to design the best professional development. Education Week. https://www.edweek.org/leadership/opinion-it-takes-a-village-to-design-the-best-professional-development/2024/10

    Kraft, M. A., Blazar, D., & Hogan, D. (2018). The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence. Review of Educational Research, 88(4), 547–588. https://doi.org/10.3102/0034654318759268

    O’Connell, J., & Baule, S. (2025, January 17). Harnessing generative AI to revolutionize educator growth. eSchool News. https://www.eschoolnews.com/digital-learning/2025/01/17/generative-ai-teacher-professional-development/

    Sahota, N. (2024, July 25). AI energizes your career path & charts your professional growth plan. Forbes. https://www.forbes.com/sites/neilsahota/2024/07/25/ai-energizes-your-career-path–charts-your-professional-growth-plan/

    Tan, X., Cheng, G., & Ling, M. H. (2025). Artificial intelligence in teaching and teacher professional development: A systematic review. Computers and Education: Artificial Intelligence, 8, 100355. https://doi.org/10.1016/j.caeai.2024.100355



  • Higher Education Inquirer : Nonprofits and Nothingness: Follow the Money

    In the world of higher education and its orbiting industries—veteran-serving nonprofits, student-debt advocacy groups, educational charities, “policy” organizations, and campus-focused foundations—there is a great deal of motion but not always much movement. Press releases bloom, awards are distributed, partnerships are announced, and donors beam from stages and annual reports. Yet too often, the people who most need substantive support—servicemembers, student-loan borrowers, contingent faculty, low-income students, and other working-class communities—receive only fragments of what the glossy brochures promise.

    To understand why, you need only follow the money.

    The Neoliberal Philanthropy Trap

    Over the last four decades, American nonprofit culture has been reshaped and disciplined by neoliberal capital. So-called “impact philanthropy” and “venture philanthropy” introduced a corporate mindset: donors expect brand alignment, flattering metrics, and ideological safety. The result is a nonprofit sector that frequently mimics the institutions it claims to critique.

    Organizations become risk-averse. They avoid structural analysis. They sidestep direct confrontation with the powerful. They produce white papers instead of organizing. They praise the very elite funders who limit their scope.

    The most severe problems facing servicemembers and veterans—predatory for-profit schools, Pentagon-to-college corruption pipelines, GI Bill waste, chronic under-support—rarely get the oxygen they deserve. Advocacy groups that rely on neoliberal donors often focus on “financial literacy” workshops rather than taking on the multi-billion-dollar scams that actually trap servicemembers in debt.

    Student-debt nonprofits, similarly, lean into “awareness campaigns” and technocratic fixes that avoid challenging lenders, profiteering institutions, or federal policy failures. Many will deliver testimonials and infographics, but few will call out the philanthropic class whose own investments are entangled in servicing and securitizing student debt.

    And when it comes to helping working-class people more broadly—those navigating food insecurity, unstable housing, wage stagnation, and the crushing costs of education—the nonprofit sector too often does what neoliberal donors prefer: it performs compassion rather than redistributing power. It focuses on individual resilience rather than collective remedy.

Appearance Over Impact

    This creates a strange ecosystem in which organizations are rewarded for looking productive rather than for being productive.

    • Events over empowerment.
    • Reports over results.
    • Branding over coalition-building.
    • Strategy sessions over structural change.

    The donor’s name gets its plaque, its press release, its tax receipt. The nonprofit gets to survive another cycle. But the problems—deep, persistent, systemic—remain unchallenged.

    Nonprofits that speak too directly about exploitation in higher education risk alienating the very people who write the checks. Some are nudged away from naming predatory universities. Others are steered toward “innovation,” “entrepreneurship,” or “student success” frameworks that sanitize the underlying issues. Many are encouraged to “partner” with the same institutions harming the people they were formed to help.

    In the end, we get a sector filled with earnest staff but hollowed-out missions—organizations doing just enough to appear active but rarely enough to threaten the arrangement that keeps donors comfortable and inequality intact.

    What Could Be—If Nonprofits Were Free

Imagine a nonprofit sector liberated from neoliberal constraints:

• Organizations could openly challenge predatory colleges instead of courting them as sponsors.
• Veteran-serving groups could expose fraud rather than “collaborate” with federal contractors.
• Debt-advocacy groups could organize mass borrower actions rather than hold polite policy forums.
• Working-class students could find allies who fight for public investment, not piecemeal philanthropy.

    We could have watchdogs instead of window dressing.
    We could have mobilization instead of marketing.
    We could have justice instead of jargon.

    But as long as donor-driven nonprofits prioritize appearance over impact, we’re left with what might be called “nonprofits and nothingness”: organizations whose glossy public-facing work obscures the emptiness underneath.

    The Way Forward: Independent, Ground-Up Power

    Real change in higher education—on affordability, accountability, labor rights, and fairness—will not come from donor-managed nonprofits. It will come from independent journalism, grassroots organizing, debt-resistance movements, student-worker coalitions, and communities willing to challenge elite decision-makers directly.

    Those efforts don’t fit neatly into annual reports. They don’t flatter philanthropists. They don’t offer easy wins. But they build the kind of power that higher education, and the country, desperately needs.

    Until more nonprofits break free from the neoliberal donor leash, we should continue to follow the money—and then look beyond it, to the people whose work actually changes lives.

    Sources
    — Eikenberry, Angela. The Nonprofit Sector in an Age of Marketization.
    — Giridharadas, Anand. Winners Take All.
    — Reich, Rob. Just Giving: Why Philanthropy Is Failing Democracy.

  • Multilingual Digital Education: Expanding Access Beyond English

    For decades, English has dominated the global education ecosystem. While it opened doors for many, it also quietly closed them for millions of learners worldwide who do not speak English fluently. In today’s digital era, however, a powerful shift is underway. Multilingual digital education is emerging as one of the most effective ways to make learning inclusive, accessible, and equitable for students everywhere.

    As digital platforms expand, education is no longer limited by geography—but language remains a critical barrier. Addressing this challenge is key to ensuring that digital education truly serves everyone, not just a privileged few.

    The Global Language Barrier in Education

    Despite the growth of online learning, a large portion of educational content is still delivered primarily in English. This creates obstacles for:

    • Non-English speakers
    • First-generation learners
    • Students from rural or underserved regions
    • Migrant and refugee communities
    • Adult learners returning to education

    When learners struggle to understand the language of instruction, comprehension drops, confidence weakens, and dropout rates increase. Research consistently shows that students learn more effectively when taught in a language they understand well, especially during foundational learning years.

    This is where multilingual digital education becomes transformative.

    What Is Multilingual Digital Education?

    Multilingual digital education refers to online learning platforms, tools, and content that are available in multiple languages, enabling learners to access the same high-quality education regardless of their primary language.

    This includes:

    • Video lessons with multilingual narration or subtitles
    • Localized course materials and assessments
    • AI-powered real-time translation
    • Voice-based learning in native languages
    • Digital textbooks adapted for cultural relevance

    By removing language as a barrier, digital education becomes more inclusive and learner-centric.

    Why Multilingual Learning Matters in the Digital Age

    1. Improves Learning Outcomes

Students understand concepts faster and retain knowledge better when learning in their strongest language. Multilingual content reduces the cognitive overload caused by mentally translating between languages.

    2. Builds Learner Confidence

    When students can participate without fear of language mistakes, engagement increases. This leads to better classroom interaction, stronger self-expression, and improved academic performance.

    3. Supports Educational Equity

    Language-inclusive platforms help bridge the gap between privileged learners and underserved communities, ensuring that access to quality education does not depend on language fluency.

    4. Encourages Lifelong Learning

    Adults who may have avoided education due to language barriers are more likely to upskill and reskill when learning is available in familiar languages.

    The Role of Technology and AI in Multilingual Digital Education

    Modern technologies are accelerating the growth of multilingual education globally.

    AI and Machine Translation

    AI-driven tools now enable accurate content translation, voice-to-text learning, and real-time subtitles—making multilingual delivery scalable and cost-effective.

    Adaptive Learning Platforms

    These platforms detect learner preferences and automatically deliver content in the most suitable language, improving personalization.

    Mobile Learning Apps

    Mobile-first platforms offering multilingual support are reaching learners in remote and low-connectivity regions, ensuring education is portable and flexible.

    Global Impact of Multilingual Digital Learning

    Across continents, multilingual digital education is driving meaningful change:

    • Higher enrollment and completion rates
    • Increased learner confidence and participation
    • Better understanding of technical and vocational skills
    • Preservation of linguistic and cultural diversity

    Education delivered in multiple languages does not reduce global unity—it strengthens it.

    Challenges in Implementing Multilingual Education

    Despite its benefits, several challenges remain:

    • Maintaining accuracy and quality across languages
    • Addressing cultural nuances in learning content
    • Training educators for multilingual digital delivery
    • Balancing scalability with local relevance

    However, advancements in AI, open educational resources, and global collaboration are rapidly solving these challenges.

    The Future of Global Digital Education

    The future of education is not English-only—it is inclusive, multilingual, and learner-driven. As digital learning becomes mainstream, platforms that prioritize language accessibility will lead the next generation of education.

    Global education organizations, EdTech companies, and institutions are increasingly recognizing that language inclusion is not optional—it is essential.

    When students learn in a language they understand, education becomes more than information delivery; it becomes empowerment.

    Frequently Asked Questions (FAQs)

    Why is multilingual digital education important?

    It removes language barriers, improves comprehension, and ensures equitable access to education for non-English speakers worldwide.

    Is multilingual education effective online?

    Yes. Studies show learners perform better academically and remain more engaged when learning is available in a familiar language.

    How does AI support multilingual learning?

    AI enables real-time translation, speech recognition, adaptive content delivery, and personalized multilingual learning experiences.

    Does multilingual education replace English learning?

    No. It supports foundational learning while allowing learners to gradually develop additional language skills, including English.

    Conclusion

    Multilingual digital education is transforming global learning by breaking language barriers and opening doors for millions of learners worldwide. It ensures that education is not limited by language, geography, or background. As digital platforms expand, embracing language diversity will be essential to building a fair, effective, and truly global education system.

    Education should be understood by all, because access to knowledge should never depend on the language someone speaks.

  • Facing Criticism, Weber State Says It Will Be “More Nuanced”

    Photo illustration by Justin Morrison/Inside Higher Ed | masa44/iStock/Getty Images | rawpixel

    After multiple censorship controversies over the past two months, Weber State University has announced a “revised approach” to how it enforces a sweeping anti–diversity, equity and inclusion law that the Utah Legislature passed in 2024. But it remains unclear exactly how it will change its actions.

    “With help from the Utah Commissioner of Higher Education, Weber State is currently reviewing our existing guidance, and where appropriate, will revise that guidance to be more nuanced in its understanding of where and how learning happens on our campuses,” interim president Leslie Durham wrote in a message to campus Friday. The Salt Lake Tribune reported earlier on the announcement.

    The goal, Durham wrote, “is to uphold the letter and spirit of the law, but also to ensure we remain fiercely committed to free speech, academic freedom, and fostering an environment where everyone at WSU feels welcome to express their thoughts, engage different viewpoints, and learn from one another.” She said that “we are learning from early and well-intentioned efforts at working within this new framework.”

    The university didn’t provide an interview or answer multiple written questions Tuesday. Richard Price, a political science professor, told Inside Higher Ed in an email, “As far as I know, faculty played no role in the creation of the existing approach and I doubt faculty will play a role in this process.”

    The Weber State controversies illustrate how universities have differed in implementing the anti-DEI laws that many red states have passed, and in navigating the Trump administration’s various anti-DEI orders and guidance that impact the whole nation. Shortly after Trump retook the White House, the American Association of University Professors issued a statement saying that “under no circumstances should an institution go further than the law demands.” Since then, state and federal government attacks on diversity programs and restrictions on speech have continued and universities have struggled with how to respond.

    Kristen Shahverdian, director of PEN America’s campus free speech program, has decried what she called “Weber State’s overreach.” But she told Inside Higher Ed Tuesday, “There’s a lot of confusion in how to interpret these bills that are vague and, in some cases, sloppily written.”

    Weber State made national headlines in October for censoring a conference ironically titled "Redacted: Navigating the Complexities of Censorship." A few days before the conference was to start, an official at the public institution ordered a student presenter to remove all references to DEI from their slides.

    Organizers ended up canceling the event after faculty pulled out in protest. The uncertified employee union held a teach-in instead, but it was also censored.

    That wasn’t the end of Weber State’s speech restrictions. Late last month, Apache writer Darcie Little Badger announced on Bluesky she was withdrawing as keynote speaker at the university’s annual Native Symposium because the university sent her a list of 10 prohibited words and concepts, including “bias,” “oppression” and “racial privilege.”

    “I will not humor this censorship,” Little Badger wrote. “It does a disservice to the stories I’m discussing & the audience, who deserve unfettered access to information & conversation.”

    ‘Prohibited Discriminatory Practices’

    Little Badger said the move seemed “to be the university’s extreme attempt to comply with HB 261,” the same 2024 anti-DEI law the institution cited to censor the censorship conference. House Bill 261 bans Utah’s public colleges and universities from engaging in “prohibited discriminatory practices,” which lawmakers defined in a long list.

    That list includes affirmative action, consideration of “personal identity characteristics” in state financial aid decisions, anything “referred to or named” DEI and programs asserting that “meritocracy is inherently racist or sexist” or that an individual, by virtue of their “personal identity characteristics, bears responsibility for actions committed in the past by other individuals with the same personal identity characteristics.”

    The catalog of what constitutes “prohibited discriminatory practices” echoes the laws banning “divisive concepts” passed by other red states, which appear to borrow language from an anti-DEI executive order Trump signed in his first term.

    HB 261 explicitly says it doesn’t restrict academic research or “academic course teaching in the classroom.” But the canceled censorship conference was sponsored by the university’s Student Access and Success division, and the Native Symposium was advertised on the university’s Student Success Center website, so neither event may have been deemed “academic.”

    Shahverdian, of PEN America, stressed the difficulty in interpreting such laws.

    “How would a guest speaker be able to know if they’re engaging in any of these prohibited concepts?” she said, adding that it puts them in an “impossible position.”

    But Shahverdian said it’s good that Weber State is, as she put it, “acknowledging that they have not been implementing the law correctly.” In a country where fear is driving university officials to overcomply and leading to canceled speaking engagements, she noted that Little Badger’s refusal to go along appears to have elicited change.

    “In this moment, where we’re seeing so much censorship, it is a nugget of hope,” Shahverdian said.
