Blog

• Focus on the Consideration Phase for Midfunnel Marketing - Archer Education


    Why the Consideration Phase Is the Key to Higher Ed Enrollment ROI 

    Lead volume alone is not a reliable indicator of a higher education institution’s enrollment health. While inquiries reflect prospective students’ awareness of and initial interest in a program, they don’t offer much insight into whether the students feel supported, informed, or confident enough to move forward with applying to the program. 

    The real opportunity lies in how institutions engage with students during the consideration phase — when they are actively evaluating the program’s fit, feasibility, and alignment with their needs. 

    For adult and online learners in particular, this phase is anything but passive. These students often enter the enrollment funnel highly motivated, driven by career changes, job losses, personal goals, or life transitions. But without consistent engagement, that motivation can fade. 

    Unlike traditional undergraduates, adult learners typically don’t have counselors, parents, or peers guiding them through deadlines and decisions. In many cases, they’re navigating this process on their own, often while juggling work, family, and competing priorities. 

    Effective midfunnel marketing efforts provide these students with the clarity and reassurance they need to move from inquiry to application. When institutions focus on this phase intentionally, they can reduce prospects’ uncertainty, ease their decision-making process, and improve the institution’s overall enrollment efficiency. 

    The Risks of Ignoring the Midfunnel

    Institutions that don’t maintain consistent engagement with prospective students throughout the consideration phase risk creating doubt in the students’ minds. Without sustained communication, students may disengage entirely or choose a different school that provides them with clearer direction and more reliable follow-up. 

One of the most common reasons students stall during this phase is that they encounter friction. An institution’s admissions requirements may be unclear. Its application steps may be difficult to navigate. Its financial aid information — a major factor influencing students’ decision-making process — may be overwhelming or poorly explained. For adult learners, even small barriers can become stopping points when their time and attention are limited.

    By neglecting the midfunnel phase, institutions also run the risk of losing track of potential students altogether. Without a proactive nurturing strategy, prospects can fall into the gap between inquiry and application. If institutions aren’t intentionally tracking and engaging with students during this window, qualified students may simply drift away — not because they lost interest, but because there wasn’t anyone there to guide them forward.   

    How Engagement Shapes the Consideration Phase 

    Effective engagement during the consideration phase depends on consistent, personalized communication across channels. Students have different preferences, which is why multichannel outreach across text messaging, email, and chat is essential. Some students prefer phone conversations, while others engage far more readily through texts or emails they can review on their own time. 

    Archer’s AI-Enabled Admissions approach makes this level of coordinated, cross-channel outreach more scalable, enabling institutions to deliver tailored messaging without overwhelming their admissions and nurturing teams. 

    A personalized approach is especially important for adult and online learners. These students aren’t just evaluating academic programs — they’re assessing whether an institution understands their goals and concerns. Thoughtful engagement with them helps remove any barriers they face by explaining the institution’s processes, clarifying its timelines, and proactively addressing the students’ questions about financial aid and transfer credits.

    Behavioral data about prospective students enhances a university’s midfunnel marketing efforts by providing insight into what matters most to each student. Understanding where the students spend their time — what they click on, revisit, or ignore — helps identify their areas of hesitation. When used responsibly, this data enables timely, relevant follow-up with prospects that feels supportive rather than intrusive. 

    Engagement strategies should also adapt to students’ different profiles. Bachelor’s degree prospects and master’s degree prospects behave differently and require distinct outreach cadences. Monitoring students’ engagement patterns allows institutions to fine-tune their communication frequency and modality to accommodate each student’s habits and preferences. 

    Converting Intent Into Enrollment 

    The midfunnel phase is often defined as the consideration phase between inquiry and application, but a prospect’s deliberations rarely stop there. In reality, students continue making decisions throughout the enrollment journey, as they evaluate a school’s financial aid offer, transfer credit outcome, registration process, and time-to-completion expectation. Each step introduces new variables that can either reinforce students’ confidence or trigger their hesitation.

    Successful enrollment marketing strategies support students throughout this entire decision-making arc. Providing students with clear guidance, continuous communication, and proactive check-ins helps them move from interest to intent to application and enrollment. The goal is not excessive hand-holding but rather supportive co-piloting, providing students with the direction they need to move forward with clarity. 

    This is particularly important for adult learners who haven’t been to school in years and may feel nervous or unsure about the process. When things don’t go as expected or they run into a challenge, they may start to question whether they can make it work. Regularly engaging with students at key milestone points — such as during registration and the first week of classes — helps reinforce their sense of preparedness. 

    Ultimately, success during the enrollment funnel process should be measured by conversion quality, not raw lead counts. Strong conversion rates indicate that students are well informed, supported, and confident — conditions that also contribute to better retention and degree completion outcomes.  

    Key Takeaways

    • The consideration phase is where enrollment momentum is built or lost, making sustained midfunnel engagement an essential step in improving conversion outcomes.
    • AI-enabled engagement tools allow for scalable, personalized communication with prospects that keeps institutions connected to them throughout their decision-making process.

    Making the Consideration Phase Count

    The consideration phase is where prospective students’ enrollment momentum is either strengthened or lost. Institutions that invest in sustained, personalized midfunnel marketing efforts can reduce the friction students face and help them move forward with confidence. By staying engaged with prospects at this critical phase, colleges can convert interest into intent — and intent into lasting enrollment outcomes. 

    Archer Education partners with accredited universities to improve their performance during the consideration phase through tech-enabled, personalized marketing strategies. Using techniques that range from multichannel engagement powered by Onward to data-informed enrollment management solutions, we help institutions stay connected with prospective students when it matters most. 

    To learn more about how Archer supports sustained student engagement and conversion growth, connect with our team today. 


  • What FIRE’s critics get wrong about our ICE app lawsuit


FIRE is suing Attorney General Pamela Bondi and Secretary of Homeland Security Kristi Noem for strong-arming Facebook and Apple to censor groups and apps that use public information to report ICE activity. Whether on Facebook, in an app, on a website — or even through flyers, pamphlets, or word-of-mouth — Americans enjoy a fundamental First Amendment right to document and criticize law enforcement.

    But some people have raised objections centered on the relationship between free speech and law enforcement. So let’s answer some common criticisms we’ve faced.

    There’s no First Amendment exception for “doxxing.” First, “doxxing” is not a legal term with a stable, accepted definition. While people generally use it to mean publicly identifying someone, usually online, different people will have different understandings about what does or doesn’t count as “doxxing.”  And right now, federal government officials are using “doxxing” in an aggressive and expansive way as an all-purpose verb that blurs the line between protected speech and unprotected conduct. They’re suggesting the former can somehow constitutionally be punished. 

    Second, the core content shared on our clients’ “Eyes Up” app and “Chicagoland” Facebook group involved observations, photos, and videos of government agents carrying out enforcement activity in public. Posting information about what law enforcement officers — public servants working in public — are doing and where they’re doing it isn’t “doxxing.” It’s speech protected by the First Amendment, especially if that person is a law enforcement officer operating on public streets and sidewalks. What these platforms did was remind law enforcement that what’s done in public is public knowledge. 

    Shutting down speech under any vague rationale, much less “doxxing,” chills lawful speech and makes it harder to hold government officials and agents accountable when they violate the law.

    Social media comment about conspiring against ICE

     Here are some scary-sounding words: conspiracy, harassment, targeting, intimidation, misinformation. These days, it’s trendy on both sides to cast speech you don’t like in these terms. If you were skeptical when the Biden administration labeled COVID-related posts as “misinformation” and pressured Meta and X to ban them, you should be equally skeptical now. The government can’t handwave away constitutional protection with ominous buzzwords. 

    To lose constitutional protection, speech must fit within one of the First Amendment’s narrow unprotected categories. And the government can prosecute people for physically interfering with ICE operations or assaulting an officer. For example, physically blocking an officer from entering a government building, or standing in front of an officer’s vehicle to prevent it from moving. Speech or expressive conduct could be grounds for charges only if they rise to the level of, say, constitutionally unprotected incitement or the obstructive conduct described here. 

In 1969, the Supreme Court held in Brandenburg v. Ohio that prosecutions for incitement based on speech are constitutional only where “advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” That’s a high bar, and for good reason. Eyes Up and Chicagoland don’t come close to it.

    In fact, Chicagoland moderators actively removed content that even suggested violence. Eyes Up didn’t provide real-time location data, and its moderators individually approved each video before posting. That’s not calling for violence at all, let alone imminent violence. 

    Social media comment about obstructing ICE

    You’re right, Archduke Food Baby: The First Amendment doesn’t literally say “you can record law enforcement.” But as with other constitutional rights, the First Amendment’s meaning doesn’t just come from bare words on the page. It comes from the underlying intent to protect speech — and speakers from government overreach — as articulated by court decisions giving those words constitutional force. Those decisions are clear: Every federal appeals court to address the issue has recognized a First Amendment right to record government officials like police engaged in their duties in public. 

It bears repeating: The government can prosecute people for physically interfering with ICE operations or assaulting an officer. But the government can’t ban lawful tools just because someone else could (or did) use that tool to commit a crime. If a person uses Google Maps to find an ICE facility and vandalize it, the government couldn’t just shut down Google Maps. When we’re talking about speech, the First Amendment doesn’t bend the knee — even if that makes law enforcement more difficult because officers have to take constitutional interests into account.

    Social media comment about obstruction II

    This case isn’t about blowing whistles or blocking roads. It’s about the government pressuring private companies into censoring protected speech on private platforms. 

    That said, the First Amendment protects sounds. While DarkTechObserver might think that blowing a whistle isn’t “criticism,” the First Amendment doesn’t let the government censor expression based on the content of that expression — or the method used to convey it, whether it’s a pen or a whistle. Of course, depending on the context, the government can place reasonable content- and viewpoint-neutral limits on how loud you can be in public. It just can’t base those decisions on the content of what you’re saying. 

    Besides, there’s a difference between physically preventing law enforcement from operating and simply being loud at a protest. Whistles themselves are not obstruction, and as we explain above, the government can’t ban a tool just because that tool can be used in potentially illegal ways. 

These are fact-specific questions that are difficult to answer in the abstract. But one thing is certain: We don’t just take the government’s word for it that a line has been crossed. The First Amendment requires the government to prove that a whistle, a bullhorn, or even people standing in the road was meant to prevent lawful law enforcement activity from being carried out.

    Social media user accuses FIRE of hypocrisy

Yes. FIRE was very critical of the Biden administration’s jawboning efforts. Jawboning is just censorship by proxy: If it’s illegal for the government to censor certain speech directly, then using a middleman doesn’t change a thing. As nonpartisan free speech defenders, we have called out, and always will call out, censorship by both sides.

In 2021, officials under President Biden pressured social media companies to take down COVID-related posts in the name of public health — a classic case of jawboning. FIRE filed an amicus brief in the ensuing case, Murthy v. Missouri, arguing that the Biden administration violated the First Amendment by attempting to interfere with private content moderation. The Court ultimately held that the plaintiffs lacked standing to sue because they couldn’t prove the government’s actions were the direct cause of the harm to their speech. But Justice Amy Coney Barrett, writing for the majority, suggested that the government’s actions would have been unconstitutional had the plaintiffs been able to show the government directly caused the speech restrictions.

    FIRE also filed an amicus brief on the National Rifle Association’s behalf in the Supreme Court’s 2024 case NRA v. Vullo. The Vullo Court held that a New York state official violated the NRA’s constitutional rights by threatening regulatory enforcement against banks and insurance companies that did business with the gun rights group. Jawboning was wrong then, and it’s wrong now.


  • Road testing the new TEF data dashboard


    This blog was kindly authored by Professor Janice Kay CBE, Director, Higher Futures ([email protected])

Overall verdict: Compared with the TEF 2023 Data Dashboard, the latest one handles more like the dash of a modern EV. Go steadily at first and have a play, safe in the knowledge that you aren’t going to unnecessarily damage the respectable vehicle you are looking at in the 360-degree camera. The new TEF Data Dashboard is a very powerful instrument, and at first look it is clear and intuitive. But go more deeply into the data, using the Filters for example, and you will need to progress from no experience needed to more expert skill.

Fundamental to maintaining and improving the student experience is data. Data, in the form of statistically benchmarked indicators, inform understanding and, in the right hands, direct strategic delivery. Universities are keen on big institutional interventions, often running several at the same time, sometimes without the clarity gained from what the data are telling them. Often, these aren’t evaluated well, don’t work or run into the sand. Staff become cynical; students are either unaware or confused. Benchmarked data help prioritisation and selection for effective delivery. Reliable Data Dashboards are essential.

And, therefore, for those who love data and understanding competitor positions, the new TEF Data Dashboard, launched this week, is essential to integrated quality and improvement.

I tested it in TEF Panel member mode and looked at the data for a variety of Providers whose indicators and performance I had known in TEF 2023. This included universities (low to high tariff institutions), colleges (FEC, private providers) and specialists. I thought about it as I would if I were assessing the TEF performance of a higher education provider across Student Experience and Outcomes. And from the perspective of a provider wishing to understand their data over the Time Series.

    For both functions, the improved power and handling are very welcome. Start as you would have done in the 2023 dashboard by choosing either Experience or Outcomes and you are presented with a series of deceptively simple tabs. The Overall Experience tab presents performance across basic sections of the National Student Survey by Measure (e.g. Teaching on My Course, Assessment and Feedback) and Mode (e.g. Full Time, Part-Time, Degree Apprenticeship).

    Gone are the complicated illustrations of overall distribution and variability, showing central tendencies alongside spread of results. Instead, data are simply numerical and colour coded. They reference statistical confidence of whether a result is materially above, broadly in line or materially below, benchmark, and it’s extremely easy to evaluate how an institution is doing overall. No judgment needed, the Dashboard does it for you. It will be extremely helpful for providers, reviewers and student representatives alike.

    It gets better. To really understand and to be able to improve performance, one of the most crucial elements is to know how consistent the overall findings are, by measure (NSS) and mode (FT, PT etc) and Time. Unless an institution can drill down to its Split benchmarked data – how provider subjects and student groups are doing over time – it’s impossible to get a grip on understanding what’s going well and what isn’t and to work out how to improve performance.

    The Split Consistency tab provides you with that information at a glance. Still welcome is the Partnership Split which gives clear guidance about performance of partnership students, with franchised degrees, for example.

The Split overview display focuses on the performance of the Splits, making it easier to see whether and by how much data are inconsistent. Take Teaching on my Course for a particular provider as an example. Imagine it is overall Broadly in Line with benchmark (marked with a black circle). How consistent are the subgroups with this performance? The display gives you an immediate answer: splits consistent with the overall pattern (broadly in line, in this case) appear as blank cells, while inconsistent splits appear in other cells, above or below benchmark. It is therefore easy to see whether performance of full-time students across different subgroups is consistent or inconsistent for Teaching on my Course.

Of course, this was more than possible to do in previous dashboards, but it required manual search and some degree of judgement – was there an inconsistent pattern, and did it appear to be material? The Split Consistency tab does the work for you.

The next tab gives you the Detailed View, through which you can explore overall NSS data and Splits in even more depth. It is at this point that the Dashboard requires a bit more expertise to use. Filters are available to select and search in more detail, allowing drill down to understand inconsistent performance (e.g. across full-time and part-time, or degree apprenticeships, by subject and by student group).

    Back to the Overview page and Outcomes. You are now in the territory of Continuation, Completion and Progression, benchmarked and presented across modes of full-time, part-time, degree apprenticeships. Again Indicators (e.g. % Positive) are presented much more clearly than in the previous dashboard, and statistical confidence about materiality of difference against benchmark is given in percentage terms and colour coded. The data are also presented in a Split Consistency tab and in Detailed View, including partnership information. Whether performance is probably below B3 Threshold is usefully colour coded, and information includes that about Graduate Outcomes quintile.

    The B3 Thresholds tab will be invaluable for Provider planners, giving an immediate view of whether performance is in line with, above or below B3 thresholds, colour coded, for Continuation, Completion and Progression. Data are there by mode, level and splits: time series, taught by provider, course type, subject and student groups.

One useful element of the 2023 TEF Data Dashboard was the ability to search (albeit manually) whether performance of an individual student group or an individual subject area is materially inconsistent across different categories of the NSS or over time: Is the performance of your Business course materially below benchmark across the various sections of the NSS, or over time, or for particular student groups? This information can be found through filtering in Detailed View – easy-peasy when you construct the right filters but requiring careful thought.

Interpreting data does require planning analytics resource and expertise. Carried out well, the new dashboard will be invaluable in helping to better understand and crystallise the areas that are doing well and those that need real attention to improve.

    I was concerned at the disappearance at the turn of the year of the 2024 TEF Data Dashboard. No longer would you be able to look across 2023 and 2024 to chart performance, assess trajectory and construct your narrative. However, the latest TEF Data Dashboard gives you the right time series to track whether your metrics journey is improving or worsening, to tell your story, so 2024 is arguably no longer needed.

A minor gripe: Perhaps replace the red/green colour coding with a yellow/blue option for those who are red/green insensitive? And, substantively, it would be useful to have an Overview tab that summates Experience and Outcomes.

    In conclusion: the new TEF Data Dashboard is a great innovation. It’s easy to use, even fun (for geeks like me), absorbing and intuitive in parts. It’s fully functional, ensuring initial use is not hampered by lack of expertise. But using Filters and Splits will require some practice and training. This data-driven improvement Dashboard is a great rollout.

    This blog was originally published on 23rd February and has since been updated.


  • Open the Black Box of Faculty Salary Models (opinion)


As a student, I always pictured myself teaching, writing on chalkboards, mentoring students and having the freedom to pursue intellectual curiosity. I imagined this career long before I knew what a salary model was and before salary ever entered the picture. Many faculty share this story: we did not choose academia because it promised financial rewards. We chose it because we believe in the value of higher education and in the opportunity to make meaningful contributions to knowledge production.

    A recent report from the American Association of University Professors found that, despite a 3.8 percent increase in full-time faculty salaries in fall 2024, inflation-adjusted compensation remained 6.2 percent below pre-pandemic levels. The same report reveals that growth in salaries for university presidents has outpaced the growth in faculty salaries, with the median salaries of university presidents ranging from $268,000 at public associate-granting institutions to more than $900,000 at private doctoral universities. These findings, taken together, underscore a broader, systemic concern: While institutions claim to invest in people, many faculty, having seen their wages decline in real terms, feel left behind.

    As a statistician who served for more than a decade on the faculty compensation committee and the priorities and budget committee at my institution, I have seen the evolution of salary models from the inside. A rule of thumb in the world of statistical modeling is that if all elements of a model are not visible and you cannot reproduce a model, you cannot trust it. As an expert in modeling, I can say without a doubt that faculty salary models fail the basic tests of clarity and reproducibility.

As faculty try to understand which critical features determine the numbers on their paychecks, we realize that salary models are often unable to provide clear answers. They evolved piecemeal over decades, are shaped by constraints and ultimately represent an accumulation of ad hoc decisions, resulting in inequitable and complex systems that only a few people can fully understand. Some senior faculty may recall fragments of past rationale, but there are no clear explanations, and newer faculty inherit a system shaped by decisions and compromises made long ago. Over time, we see the final number but not the logic behind it, and faculty salary models feel like black boxes.

    The most pressing problems regarding salary models are that they are opaque, overly complex, poorly researched and disconnected from institutional values. It is no wonder then that each year, especially as colleges and universities start counting their applicants and estimating the yield for incoming students, conversations about faculty salaries spark a mix of confusion and concern at a national level.

    A salary model need not be simple, but it should be understandable, including by:

• Having clear definitions and an explicit role for each factor, such as rank, years of service, years in a rank, discipline and merit.
    • Having explicit information about how rank, years of service, years in a rank, discipline, merit and market factors (such as peer benchmarking, geographical cost of labor, retention pressures, inflation and demand for a discipline) interact.
    • Having transparent formulas to reproduce salary calculations.
    • Having a principled design that is equity-centered and values retention, competitiveness and compression adjustments.
    • Having predictability so the faculty can anticipate how future performance will impact compensation.
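As a toy illustration of what transparent, reproducible formulas could look like, here is a minimal sketch in Python. Every rank, weight and dollar figure below is a hypothetical assumption chosen for illustration only — it is not any institution’s actual model — but it shows the key property the bullet points call for: each term is visible, so anyone can recompute the result line by line.

```python
# Hypothetical transparent salary model. All names, weights and
# dollar figures are illustrative assumptions, not real data.

BASE_BY_RANK = {          # illustrative base salaries by rank
    "assistant": 78_000,
    "associate": 90_000,
    "full": 110_000,
}
DISCIPLINE_MARKET = {     # illustrative market multipliers by field
    "statistics": 1.10,
    "english": 1.00,
}

def model_salary(rank, years_in_rank, discipline, merit_score):
    """Return a salary where every component is explicit and
    reproducible: base + service step + merit, scaled by market."""
    base = BASE_BY_RANK[rank]
    service = 1_200 * years_in_rank          # per-year-in-rank step
    merit = 2_000 * merit_score              # merit_score in 0..3
    market = DISCIPLINE_MARKET[discipline]   # peer-benchmark factor
    return round((base + service + merit) * market, 2)

print(model_salary("associate", 6, "statistics", 2))
```

Because the weights are published rather than hidden, a faculty member can check their own number, and governance debates can focus on whether the weights themselves reflect institutional values.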

    Once all elements of the salary model are defined, discussions can shift from suspicion of hidden decision-making to questions that highlight institutional values. What does a balance in teaching, research and service look like? How can we identify and address disparities across gender, race, rank or discipline? Should merit adjustments be part of the model? How should market factors be adjusted for? What trade-offs are we willing to make as an institution under fiscal constraints? These are governance questions, not modeling mysteries.

To move this conversation forward responsibly, we must also acknowledge the administrative perspective. Transparency is often framed as a demand placed on administrators—a demand they are often perceived as not having met, or as not wanting to meet. Having served alongside administrators in budget deliberations, I understand there is a constant tug-of-war between finite resources and competing priorities. There are many valid budgetary constraints, such as uncertain enrollments, inflation, deferred maintenance and the costs of maintaining academic excellence and student support. By clearly sharing the competing priorities and long-term planning considerations, the administration can invite faculty into a constructive dialogue so both sides can work together to find sustainable and equitable solutions.

    Transparency is a powerful way to restore a sense of alignment between faculty work and institutional mission, and to build a community committed to clarity, equity and shared purpose. Faculty must articulate what values they want in a model, and administration must share data, assumptions and constraints. Predictability helps faculty plan their careers; understanding how salary evolves can make them feel more secure. If faculty feel secure and confident in how their pay is determined and if administrations openly communicate the financial constraints they face, the conversation moves from “How was this number generated?” to “Is this the model we want for our community?”

    As higher education faces financial pressures and intense public scrutiny, this is a call to open the black box of faculty salary models. This begins with a commitment to openness, but it also requires a culture shift. We must stop treating faculty salary structures as overly technical and recognize them as central to equity, morale and long-term institutional sustainability. By doing this, we not only build confidence in the process but also honor the intellectual and professional contributions of faculty members. In this sense, a transparent salary model becomes a living document that reflects institutional priorities, making compensation more than a number—it can become a statement of shared values.

    Priya Kohli is a professor of statistics and chair of the Department of Mathematics and Statistics at Connecticut College.


  • Racialization of Calif. Gov. Gavin Newsom’s 960 SAT Score


    “I’m like you; I’m no better than you,” California governor Gavin Newsom told Atlanta mayor Andre Dickens and people in the audience at a recent event on his book tour. “I’m a 960 SAT guy. And, you know, and I’m not trying to offend anyone, trying to act all there if you got 940.” It sounded like many people in the room laughed. Newsom went on to talk about his inability to read a prepared speech.

    Rebukes of his remarks were swift and strong, especially on conservative news channels and social media.

    Tim Scott of South Carolina, our nation’s only Black Republican U.S. senator, wrote on X, “Black Americans aren’t your low bar. We’ve built empires, created movements, outworked, outhustled and outsmarted people like you. Stop using your mediocre academics as a way to patronize communities. Its [sic] ridiculous!”

    Rapper Nicki Minaj, like Scott, deemed the comments racially problematic. “His way of bonding with black ppl is to tell them how stupid he is & that he can’t read,” the 12-time Grammy-nominated artist and recent high-profile MAGA ambassador wrote on social media.

    “Take it from someone who was actually in the chair asking the questions: Context matters more than a headline,” Dickens noted in an Instagram post. “The conversation around his new book included him speaking about his own academic struggles, including not doing well on the SAT. That wasn’t an attack on anyone. It was a moment of vulnerability about his own journey.”

    This was not the governor’s first time disclosing these personal details. He has done so in settings composed predominantly of white attendees, as well as in a March 2025 podcast interview with Turning Point USA co-founder Charlie Kirk.

    Inside Higher Ed has not authenticated a five-second clip from the Atlanta book tour event, but it shows that the crowd was not entirely, or perhaps even overwhelmingly, Black. Newsom’s critics likely presumed it was because the event was held in Atlanta, a city that is 46 percent Black, according to U.S. Census data. Without confirmation of the demographic composition of the audience, conservative talk show host Sean Hannity declared that Newsom “thinks a 960 SAT makes him ‘like’ Black Americans.”

    The governor did not appreciate the Fox News host’s apparent double standard. “You didn’t give a shit about the President of the United States of America posting an ape video of President Obama or calling African nations shitholes—but you’re going to call me racist for talking about my lifelong struggle with dyslexia,” Newsom posted on X. “Spare me your fake fucking outrage, Sean.”

    No one, including white people across political parties, should give anybody a pass for associating Black Americans with low IQs, low SAT scores or any racist claim of intellectual inferiority. Nevertheless, in this situation involving Newsom, there are at least three noteworthy realities. First is that outrage about his comments emerged in the absence of actual data about who was in the audience.

    Second, let’s imagine for a moment that the room was indeed predominantly Black: Would their mayor, a Black man, have allowed a white Californian to make such obviously racist remarks without calling him on it or inviting him to clarify what he meant? Dickens’s laugh in response to Newsom’s statements appeared neither awkward nor cringe. While much attention has been devoted to what Scott and Minaj had to say (and, to a lesser extent, a Fox News interview with Corrin Rankin, the California GOP chair, who is Black), there has not been an avalanche of outrage expressed by Black people who were actually at the event. On this, their voices matter most.

    A long-standing critique of Democrats pandering to Black people is a third noteworthy dimension of this story. In an April 2016 Breakfast Club interview, Hillary Clinton was asked what is something that she always carries. “Hot sauce,” the then–presidential candidate replied within a split second; she required no time to contemplate this response. Some people interpreted this as a cheap attempt to appeal culturally to Black people. But, as it turns out, sources confirmed that the former secretary of state does, in fact, carry Ninja Squirrel hot sauce in her bag. Because Newsom talks so openly and repeatedly about his disability and relatively low SAT performance across a multitude of audiences, he was not seeking to connect with Black Georgians in particular over their presumably low scores on a standardized test.

    The final two points are perhaps most important. According to the College Board, makers of the SAT, the average score is 1050. U.S. Department of Education data shows it is around 908 for Black students who took the test as high school seniors. There are long-standing racial differences in performance on this exam, as well as on the ACT, GRE, LSAT, MCAT, GMAT and other standardized tests. Wealth disparities exacerbate these gaps.

    This surely is not something for a governor or anyone else to recklessly leverage in attempts to connect with Black Americans. It is not a badge of honor for most of us, because we know how useless such exams are in measuring our potential, confirming our intelligence or predicting our futures (insists a very smart and extraordinarily successful Black academician—call him a Resident Scholar—whose GRE score never reached 1000 after four attempts).

    Finally, it is plausible that Newsom wanted Georgians to understand that a dyslexic person with a 960 SAT score could become governor of our nation’s most populous state and stand a chance of being elected U.S. president in the future. If that was indeed his goal, then Newsom is right about one of higher education’s most powerful gatekeeping tools: The SAT does not determine long-term success.

    Shaun Harper is University Professor and Provost Professor of Education, Business and Public Policy at the University of Southern California, where he holds the Clifford and Betty Allen Chair in Urban Leadership. His most recent book is titled Let’s Talk About DEI: Productive Disagreements About America’s Most Polarizing Topics.



    Source link

  • The Odd Couple: Is the Presidency Dangerous?

    The Odd Couple: Is the Presidency Dangerous?

    RST: Gordon, after what happened on the campus of The Ohio State University a couple of weeks ago, I am concerned about your safety. And maybe mine. While I may not always agree with you, I will defend to the death your right to say it. But I’m not prepared to put my body on the line to protect you. Can we agree that there will be no more knocking cameras out of hands or assault and battery?

    EGG: I will try to abstain from having a fracas after a class I teach, but I cannot guarantee that some of my acolytes will not become a bit pugilistic!

    RST: If we can’t have civil discourse with those who disagree, we are well and truly screwed.

    EGG: Rachel, I feel sorry for the young faculty member.

    RST: Sorry for him how?

    EGG: Of course, he should not have shoved the guy. But this I know: They were trying to cause a ruckus, and that is not journalism.

    RST: Well, I sure don’t feel sorry for him. There is no excuse for getting physical with strangers anywhere but in the gym or on the dance floor. The video of that event, which I watched nine times, is chilling. And, I have to say, the paparazzi “journalists” don’t come out looking so great, either. They gleefully chased you down the stairs to follow you to your car. Gordon, you are an octogenarian who, while you may have more fortitude than most academics, are not such a physical specimen that you will always end up on your feet after a postclass “fracas.” I’ve become fond of you and was seriously worried for your health.

    EGG: Thanks, Rachel. And I appreciate that you called me and told me to stop being such a public pain. And you really were irritated that I was driving myself.

    RST: Gordon! So irritated! We were on the phone for an hour while you were speeding down I-70! I heard your car beeping at you. So I beeped louder.

    EGG: You are afraid I will keel over before we get this partnership in full bloom and can say all we want about the future of higher education. But, truthfully, I appreciate the concern. And it goes to the bigger point that university presidents are often like piñatas. The president of Ohio State asked me after that event whether or not I needed public safety to support me. The truth is that I know a number of presidents who find it necessary or who feel threatened.

    RST: We were going to use this next column to talk about the resistance to change. We will get there. But before we address questions about policy or politics, you’ve just hit on a hidden truth about leadership on campus: The job itself might be dangerous. Not just for reputations, but for bodies, marriages and mental health. But because presidents are public figures, people often forget that they’re also people. We see plenty of that on social media.

    EGG: I had a rule that I would never read social media. If I would have done so, I probably would never have gotten out of bed. My goal was always to keep the people who disliked me away from the people who hated me.

    RST: While I never got the memo, I know it is somewhere in the Faculty Handbook that we must hate administrators. Why did people hate you?

    EGG: In the past few years, I have had to face the reality that leadership is a combat sport. If you make decisions that are in fact in the best interest of the university, it often gores the oxen of embedded interest groups. When I decided to sell parking at Ohio State, it was viewed as “corporatization” of the university. Or, obviously, when I eliminated programs and faculty at West Virginia University, it was as if I had declared war on the academic order. In today’s world, if you are a president who can rise above individual interests and do what is right for the university and its long-term health, you need to understand that is very unpopular. The reality is in today’s world your friends come and go, but your enemies accumulate.

    RST: It’s interesting, because I don’t know a single president who isn’t looking for other revenue streams or who isn’t thinking about making cuts in programs and faculty. These are times when there are no easy solutions, and decisions—even in the past year—are more painful than ever. Your granddaughters will sing to you that haters gonna hate. Especially when everyone is scared for their jobs.

    But in a piece about Jonathan Holloway and Ana Mari Cauce, Len Gutkin in the Chron observed, “I was struck, shocked really, by the coincidence that both of the former college presidents interviewed last week by my colleague … had either been threatened with or had suffered physical harm while on the job.” Well, I was struck that this was a shock to someone who covers higher ed. As soon as I started having conversations with presidents, all I heard about was how scary the job can be.

    EGG: Actually, I am glad that we are having this discussion. A university president in today’s world needs to be Janus-faced. To the people you serve in the public, you want to make it as positive as possible. But in your personal time, you find it very difficult because of all of the pressures and the physical strain. I do not want to make this into a pity party, because I lived in big houses, had great support, made good salaries—

    RST: [cough]

    EGG: —and had a very energizing life. But there is a personal cost.

    RST: I’ve been hearing from presidents about death threats since I started working on The Sandbox, which launched only a few months after Oct. 7, 2023, when things really changed. I’ve seen copies of horrific emails and photos of things painted on the walls of campus buildings that make me shiver. I know a number of men who are much bigger than you and women who are even smaller than me who have had to have security details. A former president who went through hell told me it’s a “life-shortening job.” Clearly, a bunny like you has been able to take a licking and keep on ticking, but can you see why they may have said that? Are presidents just unwilling to talk about the personal toll because it comes across as whining about privilege?

    EGG: People rarely see behind the curtain. You are pushing me to speak frankly now because the presidency, particularly at this moment, is so difficult. And you are correct that presidents do not want to be seen as whining or vulnerable. There is nothing worse than people sensing blood in the water.

    RST: Have you gotten death threats?

    EGG: Yes, I have received death threats and so much hate mail. I have consistently refused to have security. Not that I am brave, but I so value my time with people unencumbered. When I would go out to the bars and parties, I would take a couple of students with me. Unfortunately they were generally as small as me, so we were not very formidable. But what I disliked most was the chattering class, which exists in universities to an unhealthy degree. The nice shunning. It affects you in such ways that you start to hibernate and lose confidence. Universities can be among the most toxic institutions.

    RST: You’ve been dining out on the same quippy stories for much of your career. I never want to hear again that your goal was to make as much money as the football coach. But I do want our readers to know about some of the stuff we talk about—and the things other presidents tell me in confidence and that they write about anonymously in The Sandbox. You said you were willing to get real. I mean, everyone goes through stuff in life. But you have always been hypervisible. You were appointed president at age 36. Ten years later, your wife died after a long illness.

    EGG: Being in public life with your spouse undergoing cancer treatment, which included long stints in the hospital and hospice care, was very difficult. I lived on fumes for three or four years while Elizabeth was receiving treatment. I had to speak to an alumni group in Dayton the night before she died of cancer [in 1991]. I should have been home at her side, which is still something that haunts me and is unforgivable. After she died, I felt both sadness and relief. At the same time, I had a 15-year-old daughter, Rebekah, at home, who was undergoing serious personal challenges, and trying to bolster her was terribly draining and exhausting.

    RST: I am so sorry, Gordon. I can only imagine what that must have been like.

    EGG: The pain of loss was profound.

    RST: How did you handle that and still manage to do your job?

    EGG: Honestly I am not certain. But I always had Rebekah. We adopted her when she was four days old. Her mother and she were constant companions. After Elizabeth died, I made the decision that Rebekah would go and be with me everywhere. We became incredibly close.

    RST: During your time at Vanderbilt, your second wife became a, um, media focus. You went through a public and horribly messy divorce. And then a year later, when you were back at Ohio State, there was an accident. Rebekah’s husband died and she sustained terrible injuries. That must have been a horrific time. Death, divorce and starting a new job are all huge stressors in life. You won the Triple Crown.

    EGG: At the height of Elizabeth’s illness, I told her I thought I should resign. She was adamant that I not do so, because she felt very strongly that I would never forgive her for being the cause of my resignation. And so the thought of losing Rebekah was truly more pain than I could bear. She was and is my best friend. I spent six months with her in hospitals and rehabilitation. Not one morning would she wake up wherever she was without me being there to tell her how much I loved her. Friends within and outside the university rallied to our cause, and I was able to continue. I believe if I would have abandoned the presidency in any of those instances, I may have made a better life, for a moment, for Elizabeth and Rebekah, but not for me and ultimately for our family. But it is damn hard and really lonely.

    RST: I always ask presidents, “Who do you really talk to?” (Do not go all grammar nerd on me, Gordon, and say it should be “whom.” I’m the English professor. I know what’s correct and I hate “whom.”) Many of them say their spouse, or no one.

    EGG: There are few people in whom you can confide. Sometimes the loneliness is unbearable. I did get myself an executive coach who has been with me for 35 years, and I recommend every president find such a person. The question is, do you share these challenges publicly?

    RST: The goal of The Sandbox is to make the hidden parts of the job visible and without fear of reprisal. Last week, after a current president wrote about his mental health struggles, I got a ton of email from others thanking me for giving him the space to be honest, though he said he could never admit to any of what he wrote with his name attached.

    EGG: I think presidents need to be more public and let people see them as human. Easy to say at 82, but I think if I had not always been the public happy warrior, I may have been more effective.

    RST: How so?

    EGG: I have had so many people tell me that I made the presidency look easy.

    RST: Well, it sure was a lot easier in the old days, like before COVID, before George Floyd, before Gaza and before the giant shit show that started in January 2025. Presidents who have been retired for more than a few years need to stop telling those still in the job what to do. And that includes you, buddy. I just warned you that when we Zoom with presidents from different institutions, you’re not allowed to hand out leadership bromides and vague advice (as you did the other day). And while we have this platform to bat ideas around, I’m deploying you in ways invisible to the public so that you can actually help those who are still presidenting. You say you want to be useful. I’m holding you to that.

    EGG: Aye, aye, boss. I am serious about that and want to pay it forward.

    RST: Fortunately, we have me to keep you honest. You also made the presidency look fun.

    EGG: I made it look fun because I had a good time. I get so irritated with presidents who complain continuously about the difficulty of their work. If it is so damn hard and so debilitating, then quit! For me, fun was always part of the equation. So much so that I was often criticized, because many people thought it was unpresidential. I think in retrospect it would have been better to humanize my work. The unforgiving nature of the job is overwhelming. As I have left institutions, the sudden invective is debilitating. People immediately stop waving at you with all their fingers, and every problem at the university is suddenly because of you.

    RST: I promise never to raise a middle finger at you. Unless you follow through on your threat to mail me stale bow-tie cookies.

    EGG: In my work, you can measure the true friends that you have because they can fit in a telephone booth. As I have moved on I have often found that your best “friends” were the first to throw you under the bus. That is the reality of the human condition. That said, the friends I have in the telephone booth are truly special.

    RST: I’m small enough to squeeze into that booth, I hope. Since you came to me and I have enough current presidents in my circle to do my job (getting them to write anonymously), I need nothing from you and am pleased to offer you a friendship of equals. As long as you do everything I say and respond to me immediately. It’s kind of like being an employee, Grasshopper. This is a chance for you to build a new skill set.

    EGG: What is it about you that makes people trust you and in my instance publicly open my kimono? This was a cathartic discussion. Thank you, boss.

    RST: Ugh. Could have done without that visual image. I know it’s hard to drop the optimistic pose but appreciate that you’re willing to get real. Proof that old dogs can learn new shit.

    EGG: I prefer new tricks. Now on to resistance to change.

    Rachel Toor is a contributing editor at Inside Higher Ed and the co-founder of The Sandbox. She is also a professor of creative writing. E. Gordon Gee has served as a university president for 45 years at five different universities—two of them twice. He retired from the presidency July 15, 2025.

    Source link

  • How to: assess candidates for executive and non-executive roles in universities: assess what needs assessing

    How to: assess candidates for executive and non-executive roles in universities: assess what needs assessing

    Author:
    Julia Roberts

    This blog was kindly authored by Julia Roberts, Founder and Principal Consultant at Julia Roberts Advisory.

    It is the final blog in our four-part ‘How To’ series that focuses on recruitment in higher education leadership roles. The first blog, on working with executive search, can be found here. The second, on recruiting non-executives, can be found here. The third blog, on writing job descriptions and person specifications, can be found here.

    As we close this series, one truth has become clear across every piece: the quality of an appointment is shaped by the clarity you establish at the start and your discipline in holding to that clarity all the way through. Universities often do the hard thinking when rewriting the job description and person specification, but somewhere in the selection process, subjectivity creeps in. A confident performance feels persuasive. A familiar background feels comfortable. A well-told story can overshadow weaker evidence. And without noticing, the panel drifts away from the criteria it committed to.

    This drift is subtle but consequential. When panels abandon clarity in favour of instinct, they appoint the candidate who performs best in the room, not the one who is best aligned to the work. This is why discipline matters, and it is why assessment must be anchored in a simple principle:

    Assess what needs assessing. Do not assess what is easiest, most familiar or most comfortable. And do not assess subjectively when the role requires objective evidence.

    What can be measured can be assessed. What can be evidenced can be evaluated. What has been defined clearly can be held to consistently.

    Executive roles: evidence of leadership through others

    For executive appointments, the temptation is to reward the strongest individual performer. But in universities, leadership is not about personal efficiency. It is about making others more effective. That difference is profound.

    Panels must look for evidence that the person can create clarity, enable others, build capability, resolve conflict, make sound judgements with imperfect information, strengthen teams across boundaries and understand digital risk, including AI and cyber resilience.

    None of this is subjective. Candidates should offer concrete examples of what they have done, how they did it and the impact it achieved. If they cannot provide evidence, you cannot reliably assess them.

    Non-executive roles: contribution over comfort

    Non-executive appointments require stewardship, clarity of thought and an ability to interrogate risk. Panels should look for constructive challenge, understanding of public accountability, awareness of digital and AI vulnerabilities and a mindset of contribution, not ego.

    Again, none of this is subjective. You are assessing cognitive skill, judgement and value add.

    Holding your clarity from start to finish

    The hardest part of assessment is resisting the subtle pull of subjectivity during discussion. When conversations drift, return to your anchor:

    What did we say this role needs?

    What outcomes must this candidate deliver?

    What evidence have we heard?

    Are we assessing what needs assessing?

    Key Takeaways

    1. Clarity at the outset only matters if you hold to it at the end.
    2. Assess what needs assessing.
    3. Do not be subjective. What can be measured can be assessed.
    4. Executive roles require evidence of enabling leadership.
    5. Non-executive roles require stewardship and strategic challenge.
    6. Digital and AI awareness are non-negotiable.
    7. Value cultural contribution, not cultural fit.
    8. Hold the line when discussion drifts.

    Source link

  • Wires within wires – the hidden complexity of managing higher education assessment

    Wires within wires – the hidden complexity of managing higher education assessment

    Back in the day, assessment was a relatively self-contained process: students produce a thing, academics mark the thing, results get recorded in a spreadsheet and then published. These days, that’s an outdated mental model – and not just because of AI.

    A recent conversation with members of the Academic Registrars’ Council Assessment Practitioner Group offers a guided tour of an extraordinarily intricate machine, one that most students – and probably quite a few staff, too – barely know is running. Exploring the mechanics of administering assessment in contemporary higher education sheds light on what on the face of it might look like a “process” issue, but in reality takes in strategy, policy, pedagogy, technology, and institutional culture.

    Digital first

    The starting point is always disciplinary variation: unless you are literally a single-programme institution, it is impossible to design a singular institutional system for managing assessment within a defined timeframe. Professional bodies have specific expectations and requirements, and different disciplines have different assessment cultures. Any assessment management process already has to accommodate significant disciplinary and programme differences in timing and mode of assessment.

    By now, the journey towards digitisation of assessment is all but complete, unless you’re talking about a laboratory or clinical practical exam. Even traditional invigilated exams, where these exist, typically require digitisation at some point in their journey, or may be undertaken on locked-down devices. The end-to-end assessment “process” – from the moment an academic team designs an assessment, through submission, marking, moderation, ratification, and the eventual release of results into the student record system – has as a consequence become remarkably complicated, and involves a lot more people than it used to. Digital assessment – in both senses of assessments as digital rather than physical artefacts, and of the assessment management process being conducted primarily by digital means – requires a much greater degree of central input, via IT teams, academic registry and learning technologists.

    Course administration teams are now routinely engaging with VLE systems that might previously have been managed by dedicated learning technology units. IT teams are supporting students through digital processes that touch every stage of the assessment lifecycle. The coordination is more complex, the linkages through systems – VLEs, student record systems, integration platforms – are more numerous, and the number of colleagues who need to understand and operate within those systems has grown. Technology offers the prospect of streamlining assessment and ensuring consistency and accuracy in marking and moderation. But the number of moving parts required to achieve that streamlining effect is significant.

    Enter AI

    AI has obviously thrown a major spanner into efforts to create a seamless digital “flow” for assessment. Many institutions had shifted away from in-person exams during the Covid-19 pandemic in favour of other forms of assessment. This shift tallied closely with ongoing efforts to develop more engaging, authentic forms of assessment, building in more choice for students, in more diverse modes of assessment, and ideally, reducing the overall number of assessments.

    The immediate pressure to secure academic standards in the wake of the advent of generative AI pushed some back into the exam hall. Once you’re there, the logic of digital means it’s much more efficient (in one sense) if exams are undertaken digitally. But digital examinations also create additional complexity, requiring supported devices configured in a particular way, locked-down browsers, technical support on the day, and invigilators who can troubleshoot not just exam room behaviour but also password lockouts and software glitches.

    Outside the realm of the locked-down assessment, there’s a recognition that a diet of exams alone isn’t going to serve student learning well, and that as AI becomes ever more deeply embedded into knowledge work, using AI judiciously and strategically will itself become part of the assessment and associated learning outcomes. There’s an open question about how long it makes sense to adopt a policy of “declare” – that is, requiring students to disclose their AI use. Thinking and practice are evolving rapidly; what was viewed as problematic yesterday might begin to feel OK by tomorrow. At some point, the argument goes, AI becomes part of the fabric of how work is done. But that “at some point” is doing a lot of heavy lifting, and given the pace of technological change, higher education institutions are not in a position to declare a collective consensus on where the line falls.

    Being reasonable

    Less high-profile, but in some ways much more critical, is the increasing number of students who require reasonable adjustments in their assessments, or who are putting in requests for extenuating circumstances. This reflects the degree to which students are themselves juggling the complexities of disability, periods of ill health and responsibilities outside the classroom. “Inclusive by design” is the aspiration for most curricula, but there will always be exceptions, and collective understanding of how students’ needs manifest in assessment settings is also changing rapidly, leading to a greater number of requests for flexibility than can easily be integrated into existing processes.

    One member of the network observed that policies are typically written in a way that does not account for the likelihood of scale – they bake in an assumption that flexibility in assessment processes will be the exception rather than a norm. What would be a minor and reasonable administrative burden in the interests of a level playing field for a minority of students quickly becomes immense when the numbers increase. There’s a growing sense that something needs to change – not just in how these claims are processed, but in the underlying policy assumptions.

    The overarching purpose of assessment regulations is to maintain and safeguard academic standards, but some of the traditions embedded in those regulations may be doing less to uphold standards than to create hurdles that students must clear, without any real benefit to academic integrity. As one member of the network suggested, seeing the experience from the student’s perspective helps to frame institutional policy as “supporting students through assessment rather than punishing students through assessment.” If referral marks were not capped, for example, one network member asked, would there still be a need for the current extenuating circumstances infrastructure? It’s a thought experiment, not a policy recommendation, and there are arguments on both sides. But it illustrates a broader willingness to ask “first principles” questions about whether there are ways of being more supportive of students while still protecting standards.

    Regulation may be a barrier to experimentation for some. When external regulation takes a hardline approach to standards, it can make institutions more cautious about the kind of innovative policy rethinking that could serve students better. Navigating that territory requires a careful balance between doing the right thing educationally and managing the risks of attracting unwelcome regulatory attention.

    Efficiency is only one of the watchwords

    As the sector deals with the implications of a shrinking resource base, it’s not surprising that academic registrars report feeling a pressure to streamline, seek efficiencies and demonstrate a process with as few administrative overheads as possible. While nobody disagrees in principle with the need for efficiency, it simply is not possible to talk about assessment without talking about systems integration, institutional policy and strategy, the student experience, the onward march of technology, the demands of professional bodies, and a funding model that leaves very little room for investment. Each of these factors connects to all the others.

    In a policy landscape where there is much, sometimes glib, talk of efficiency and transformation, it is worth keeping in mind that the people who actually run the institutional processes are not dealing with lumbering bureaucracy. Instead, they are dealing with a dense and high stakes challenge that touches every part of the institution, and that is shaped by every part of the institution’s external environment. Managing it well is not about streamlining it into simplicity. It is about building the institutional capacity – in people, systems, policy, and culture – to hold that complexity intelligently.

    This article is published as part of a partnership with UNIwise. Debbie would like to thank the members of the Academic Registrars’ Council Assessment Practitioner Group for their insight in developing this article. To inquire about joining the network contact group chair Rebecca Di Pancrazio, academic registrar at the University of Portsmouth.

    Source link

  • Universities to reveal VC pay, consultant spend – Campus Review

    Universities to reveal VC pay, consultant spend – Campus Review

    The federal government will cut unnecessary red tape in higher education in return for universities revealing to the public how much vice-chancellors and consultants are paid.


    Source link

  • Sector must back First Nations ATEC role – Campus Review

    Sector must back First Nations ATEC role – Campus Review

    An Indigenous higher education leader has warned the sector must back the First Nations commissioner of the proposed Australian Tertiary Education Commission (ATEC) if Indigenous students and staff are to succeed.


    Source link