Positionality statements are increasingly mandated by some journals and publishers.
These statements accompany empirical work and invite authors to articulate their social, identity, methodological, and epistemological locations, providing much-needed context (or “position”) for their research. The practice originates in reflexive qualitative research and is becoming more common.
We would argue that, despite criticism, positionality statements are relevant to all research – whether qualitative, quantitative, or both – because all research is ultimately shaped, coloured, and contextualised by the people who conduct it.
The value of reflexivity
Reflexivity is the process by which researchers explicitly and thoughtfully consider how their background, epistemological and ethical positions, worldview, politics, and personal experiences contribute to, shape, and colour their research process. Reflexivity is increasingly recognised as central to responsible and rigorous research practice because, in this context, acknowledging one’s subjectivity is seen as a benefit – even a toolkit and a resource – rather than a detriment.
While reflexivity has a long history in qualitative traditions, recent shifts in higher education have encouraged reflexive engagement within quantitative research as well (for example in some work we’ve done, such as this beginner’s guide). This is a promising and important way forward.
Quantitative approaches and researchers do not exist outside of social context. Therefore, choices about research questions, data sources, analytical techniques, and interpretation are all influenced by researchers’ epistemological commitments, their lived experiences, and subjectivities. After all, every researcher has a story (often a personal one) for how they came to (and through) their research topic.
Positioning positionality
One emerging mechanism for expressing and documenting reflexivity is the positionality statement. Positionality statements allow researchers to explicitly reflect on and communicate how their subjectivities, identities, values, experiences, and epistemological positioning shape their research questions, methodological decisions, and interpretation of their findings. We would argue that greater reflexivity and transparency are needed across all methodological traditions, including (and, in some cases, especially) in quantitative research, where assumptions of neutrality or objectivity can obscure the role of the researcher in knowledge production.
The literature, which we are happy to see is growing rapidly, currently highlights three reasons why positionality statements are useful for higher education researchers.
The first concerns equality, diversity, and inclusion. By documenting the process of reflexivity, positionality statements encourage researchers to confront how their work may uphold or challenge existing inequalities and to reflect on whose perspectives are centred or marginalised through their research choices. This logic applies, in theory, regardless of whether a study uses interviews, surveys, experiments, or administrative data because decisions about samples, categories, and comparisons are never socially neutral.
The second reason relates to rigour and transparency. By articulating one’s positionality explicitly, researchers make visible aspects of the research process that are often left implicit (or on the editing room floor). This transparency has been proposed as a way of strengthening rigour, allowing readers to better understand how knowledge claims are produced and how alternative assumptions might lead to different conclusions. Far from weakening research, we argue that this kind of openness supports more robust, accountable, and transparent scholarship.
And the third reason concerns integrity and ethics. Researchers have argued that positionality statements can support more ethical engagement with research by encouraging ongoing reflection on power, responsibility, and potential harms. Ethics is not confined to formal approval procedures – instead, it should be embedded in everyday research practices, including how findings are framed and communicated. Thinking reflexively can surface some of these considerations.
Positionality statements in quantitative work
Despite their growing presence, positionality statements in quantitative research remain inconsistently applied and under-theorised. This is concerning, given that some journals now encourage or mandate their inclusion.
It is increasingly important, therefore, that positionality statements are done well and researchers are supported adequately. As others have noted, positionality statements are often reduced to brief descriptions or checklists of a researcher’s social identity or background, without deeper engagement with how those positionalities influence the research process. When this happens, positionality risks becoming tokenistic rather than a meaningful reflexive practice.
Towards reflexive (quantitative) research
There is growing interest in better understanding how researchers currently use positionality statements and what functions these statements serve within published research.
Between us, for example, we are currently running metascience projects that examine how and why positionality statements are used, particularly within quantitative higher education research. This work builds on broader conceptual frameworks for understanding positionality in psychology and related fields, and reflects a wider movement towards more reflexive quantitative methods.
What this emerging body of work makes clear is that positionality statements are not about performative personal disclosure for its own sake. They are also, importantly, not about questioning the legitimacy of particular methods or ranking researchers according to identity.
Instead, positionality statements (if done well) are about situating knowledge claims and acknowledging that research is produced by people working within specific social, institutional, and epistemic contexts. Recognising that positionality statements are relevant to all research supports a more transparent, rigorous, and ethically engaged research culture across disciplines. In this sense, positionality statements are not an optional add-on. If we are clever, they could be an integral part of what it means to do rigorous, thoughtful, ethical research.
With the 2026 Senedd elections approaching, a new HEPI and London Economics report sets out what different higher education funding reforms in Wales could mean in practice.
Assessing potential manifesto commitments on higher education funding in Wales models the financial and distributional impact of three prominent policy options linked to the parties currently leading the polls, offering rare clarity in a debate where formal manifesto commitments have yet to appear.
The analysis examines the consequences of keeping Plan 2 loans, reducing maintenance grants for Welsh students studying elsewhere in the UK and abolishing loan interest while extending repayment terms to 45 years. The modelling shows starkly different outcomes. Cutting maintenance grants would reduce Exchequer costs but leave affected students worse off upfront, while eliminating interest rates would significantly increase public spending while lowering repayments for many graduates. The report also highlights how fiscal constraints imposed by the UK Treasury may sharply limit what any incoming Welsh Government can realistically deliver.
At a time when Wales’s funding settlement is under mounting pressure, this evidence-based assessment offers a clear picture of who would gain, who would lose and how sustainable each option would be. For anyone seeking to understand the real trade-offs facing policymakers – and the likely direction of travel after May’s election – this is essential reading.
In a statement to the Crimson, Harvard’s student newspaper, Summers said the decision was “difficult” and that he was “grateful to the thousands of students and colleagues I have been privileged to teach and work with since coming to Harvard as a graduate student 50 years ago.”
“Free of formal responsibility, as President Emeritus and a retired professor, I look forward in time to engaging in research, analysis, and commentary on a range of global economic issues,” he said.
Summers corresponded with Epstein for years after his 2008 conviction, at one point seeking advice about how to pursue a younger colleague and calling Epstein a “very good wingman.” Over the last several months, Summers has also stepped down from his teaching role at Harvard and resigned from the OpenAI Board of Directors. The New York Times declined to renew his contract with the Opinion section, the Center for American Progress ended his fellowship and Summers stepped away from an advising role at the policy research center Budget Lab at Yale University. In past public remarks, Summers has said he is “deeply ashamed” of his actions and takes responsibility for continuing to communicate with Epstein after he was convicted of soliciting sex from a minor in 2008. Summers has not been implicated in any of Epstein’s crimes.
Also Wednesday, Harvard placed mathematics professor Martin Nowak on paid administrative leave while the university investigates his ties to Epstein, the Crimson reported. The university previously sanctioned Nowak in 2021 for facilitating Epstein’s presence at Harvard. The sanctions were lifted in 2023.
Richard Axel, a professor of pathology and biochemistry at Columbia University, announced Tuesday he would step down from his role as co-director of the Zuckerman Mind Brain Behavior Institute to “focus on research and teaching in my lab.” He will also resign from his role as an investigator at Howard Hughes Medical Institute.
Axel first got to know Epstein in the 1980s, The New York Times reported. In a 2007 New York magazine profile about Epstein, Axel described him as “extremely smart and probing” and said, “He has the ability to make connections that other minds can’t make.” Axel also had dinner with Epstein and helped the children of Epstein’s associates try to gain admission to Columbia. Axel has not been implicated in any criminal activity.
“My past association with Jeffrey Epstein was a serious error in judgment, which I deeply regret,” Axel wrote in a statement. “I apologize for compromising the trust of my friends, students, and colleagues. I recognize the problems this has caused, and I will work to restore this trust. What has emerged about Epstein’s appalling conduct, the harm that he has caused to so many people, makes my association with him all the more painful and inexcusable.”
Also in recent weeks, Bard College announced it had opened an external investigation into the communication between Epstein and college president Leon Botstein. The college is also delaying a New York gala celebrating Botstein.
Every admissions CRM contains a neglected segment: inquiries who downloaded a viewbook, began an application, attended a webinar, or requested program information, then disengaged. Over time, these records accumulate and are often written off as lost opportunities.
The typical response is to increase acquisition activity, launch another broad campaign, or invest further in top-of-funnel media. Yet in today’s enrollment funnel, acquisition costs continue to rise. Reactivation is often the more efficient growth lever.
A well-structured student lead re-engagement campaign enables institutions to recover dormant interest without damaging sender reputation, improve ROI across the recruitment funnel, and convert marketing-qualified leads without increasing ad spend. When grounded in behavioral segmentation and lead scoring, reactivation efforts can shorten time to application and strengthen overall CRM performance.
The difference between spam and strategy is not volume. It is segmentation, timing, and message design. This article outlines how to build a reactivation campaign rooted in behavioral data, admissions CRM logic, and practical enrollment outcomes.
Do you need help strengthening your lead nurturing?
Our expert digital marketing services can help you attract and enroll more students!
What Is a Re-Engagement Campaign in Higher Education?
A re-engagement campaign is a structured marketing automation workflow designed to reconnect with prospective students who previously showed interest but have since become inactive within the admissions CRM.
In higher education recruitment, inactivity may include no email engagement for 60 to 120 days, stalled application progress after initial inquiry, event attendance without subsequent action, partial application abandonment, or incomplete program inquiry submissions.
These signals indicate friction, shifting priorities, or unresolved objections, not necessarily a loss of interest.
A student lead re-engagement campaign is not a generic “we miss you” message. It is a targeted intervention triggered by behavioral data and aligned with the prospect’s position in the enrollment funnel. Messaging, timing, and offer design are tailored to the specific barrier identified, whether informational, financial, academic, or motivational.
When executed strategically, re-engagement strengthens lead nurturing for higher education and improves conversion efficiency without expanding acquisition spend.
Why Re-Engagement Matters More in 2026
Institutions across North America are operating in a more complex recruitment environment. Paid media costs continue to rise. Privacy restrictions limit granular tracking. Decision cycles are longer. Prospective students compare more programs, more formats, and more institutions before committing.
Capturing an inquiry is no longer the finish line. It is the starting point.
Example: The University of Toronto pairs program-specific landing pages for program discovery with a formal inquiry capture pathway: the “Connect with U of T” form registers prospects for personalized content and explicitly allows them to revise submitted information and unsubscribe.
However, inquiry volume alone does not ensure progression through the enrollment funnel. Without structured follow-up, early engagement dissipates.
Example: The University of British Columbia uses a stage-based application guidance architecture that organizes admissions content by applicant phase and clearly separates steps, requirements, and timelines within the enrollment process. Its Applying to UBC hub structures progression logic so prospects understand exactly where they are and what comes next. For a re-engagement strategy, this model makes it easier for admissions teams to identify where a student stalled, trigger stage-specific reactivation messaging, and reduce friction by clarifying next steps rather than sending generic reminders.
This reflects a core reality: engagement is cyclical. Prospects move forward, pause, reassess finances or program fit, explore alternatives, and often return.
A strong student lead re-engagement campaign acknowledges this behavior. Instead of assuming silence equals disinterest, it treats inactivity as a predictable phase within the enrollment journey and plans targeted interventions accordingly.
Step 1: Segment Before You Send
The most common mistake in student lead re-engagement campaigns is batch-and-blast outreach. Dormant leads are not a single audience. They paused at different points, for different reasons.
Instead, build behavioral segmentation rules based on:
1. Funnel Stage
Inquiry only
Application started
Application submitted
Offer received
Deferred
Each stage requires different messaging. An inquiry-stage prospect needs program clarity and reassurance. A deferred applicant may need deadline reminders or financial guidance.
Example: McGill University structures its admissions experience around clearly segmented applicant and admitted student pathways, differentiating communications and content based on enrollment status. Through its Undergraduate Admissions Apply hub, McGill separates application instructions from admitted student next steps while clearly outlining documentation requirements and procedural expectations. For a re-engagement strategy, this model enables precise funnel-stage segmentation within the CRM, prevents misaligned messaging between applicants and admitted students, and supports targeted reactivation campaigns based on a prospect’s exact enrollment phase.
Your admissions CRM should reflect comparable segmentation logic. Reactivation must align with where progression stopped.
2. Engagement History
Use behavioral segmentation criteria such as:
Pages visited
Program categories viewed
Email clicks
Event attendance
Downloaded assets
Example: Western University structures its recruitment ecosystem around faculty-based academic segmentation and regionally informed audience pathways, organizing content by academic area while personalizing messaging based on geography and prospect type. Through its admissions hub, Western aligns program content with declared academic interests and provides differentiated entry points for domestic and international students. For a re-engagement strategy, this structure enables behavior-based follow-up that references specific faculty interest, improves email relevance by aligning with prior browsing patterns, and strengthens mid-funnel reactivation precision by grounding outreach in demonstrated intent.
Lead scoring enables prioritization rather than reactive mass outreach.
Example scoring logic:
+10 for campus visit registration
+5 for webinar attendance
-10 for 90 days of inactivity
+15 for application start
When a lead’s score drops below a defined threshold, trigger a reactivation campaign. Lead scoring aligns admissions automation with behavioral signals, not arbitrary calendar timelines.
Step 2: Design Content That Restarts the Conversation
The objective of a student lead re-engagement campaign is not to increase pressure. It is to remove friction. When prospects disengage, it is usually because a concern remains unresolved, whether academic, financial, logistical, or personal. Effective reactivation content addresses that barrier directly.
Strong re-engagement offers are specific, practical, and stage-aware.
1. Program Fit Tools
Many dormant leads are still evaluating whether a program aligns with their goals. Offer tangible evaluation resources such as career pathway guides, program comparison charts, sample course breakdowns, or graduate outcome reports.
Example: The University of Waterloo integrates employment outcomes and career data directly into its program ecosystem, embedding co-op statistics, employer data, and graduate employment metrics within academic program pages. By explicitly connecting fields of study to career pathways and measurable post-graduation results, Waterloo emphasizes return on investment and long-term outcomes as part of the recruitment narrative. For the re-engagement strategy, this approach supports ROI-focused reactivation messaging, addresses mid-funnel evaluation concerns related to employability, and provides strong content assets for prospects who stalled during program comparison.
A re-engagement email that references verified employment outcomes or alumni pathways reframes the conversation around long-term return on investment. This is significantly more persuasive than a generic reminder to “complete your application.”
2. Application Clarity Content
For partially completed applicants, uncertainty about next steps often causes delay. Provide content that explains what happens after submission, outlines key dates, clarifies document requirements, or details how decisions are communicated.
Example: The University of Alberta emphasizes step-by-step admissions process transparency through structured application checklists, clearly defined document requirements, published deadlines, and detailed explanations of decision timelines. By outlining what applicants need to submit and what happens after submission, the institution reduces ambiguity within the enrollment process. For a re-engagement strategy, this clarity helps reduce application-stage uncertainty, supports abandoned application workflows, and enables messaging focused on “here’s exactly what to expect next,” which can restore momentum for stalled prospects.
Mirroring this clarity within your admissions CRM reduces ambiguity. A reactivation message that says, “Here’s exactly what to expect next,” lowers perceived complexity and encourages progression.
3. Financial Planning Support
Cost is one of the most common application stall points. Offer scholarship breakdowns, tuition estimators, financial planning webinars, or FAQs about payment schedules.
Example: The University of Calgary centralizes financial planning and scholarship information within its undergraduate admissions ecosystem, consolidating tuition details, award opportunities, funding tools, and cost guidance. By presenting clear breakdowns of expenses alongside financial planning resources, the institution reduces ambiguity around affordability and cost structure. For the re-engagement strategy, this approach directly addresses cost-related application stalls, supports funding-focused reactivation campaigns, and provides targeted content for leads who disengaged during financial consideration stages.
Re-engagement campaigns that highlight funding tools acknowledge a core decision factor rather than avoiding it.
4. Student Stories and Social Proof
Prospects often disengage when they struggle to picture themselves at an institution. Peer narratives reduce that distance.
Including relatable student stories in reactivation emails reconnects emotion with intent. Social proof reinforces belonging, which frequently precedes action.
Re-engagement succeeds when content answers the question that caused silence.
Step 3: Use Preference Centers Instead of Unsubscribes
One overlooked tactic in higher education marketing automation is the strategic use of a preference center. When leads disengage, the default response is often an unsubscribe link. A smarter approach offers flexibility before exit.
Instead of forcing a binary decision, allow prospects to adjust how they hear from you. This can include switching intended programs, selecting a different intake term, reducing email frequency, or choosing specific content types such as scholarships, events, or program updates.
Example: The University of Colorado Denver’s International Admissions “subscription preferences” workflow explicitly lets users “subscribe,” “update my information,” and “change my subscription preference,” stating that they can “select your email preferences.” This is a clear example of consent-based preference management for prospective students.
Offering preference updates protects sender reputation, reduces spam complaints, and preserves long-term lead value. A dormant prospect may not be uninterested. They may simply need different timing or information.
Include a clear “Update email preferences” call to action within your re-engagement campaign. Flexibility sustains engagement more effectively than pressure.
Step 4: Write Subject Lines That Signal Relevance
Subject lines in student lead re-engagement campaigns should signal relevance, not desperation. When a prospect has paused, urgency alone rarely restores momentum. Clarity and contextual alignment perform better.
Effective subject lines acknowledge prior interest and reopen the conversation without assuming commitment.
Avoid:
Excessive urgency
Multiple exclamation points
Aggressive countdowns
Overly forceful language can trigger unsubscribes or spam complaints, particularly among leads who are undecided rather than disengaged.
Remember, the objective is conversation, not pressure. Re-engagement succeeds when the subject line reflects timing, context, and the prospect’s stage within the enrollment funnel.
Step 5: Build a Structured Reactivation Campaign Workflow
A strong student lead re-engagement campaign follows a defined progression rather than a single reminder email. Structure reinforces intent and prevents overcommunication.
A typical workflow includes:
Email 1: Contextual Reminder Reference prior engagement, such as a downloaded guide, event attendance, or started application. Reestablish relevance.
Email 2: Value Add Provide a useful resource aligned to the prospect’s stage, such as a funding guide, program comparison tool, or admissions walkthrough. Avoid deadline pressure.
Email 3: Preference Confirmation Offer the opportunity to update email preferences, change program interest, or select a different intake.
Email 4: Soft Close Ask directly, “Would you like to stay connected?”
If there is no response, suppress the lead from active campaigns to protect deliverability, but retain them for future intake cycles and strategic reactivation windows.
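A minimal state sketch of this four-email progression with suppression after non-response (the sequence labels and day offsets are illustrative, not prescriptive):

```python
# The four touches described above, with illustrative day offsets from trigger.
SEQUENCE = [
    ("contextual_reminder", 0),
    ("value_add", 4),
    ("preference_confirmation", 9),
    ("soft_close", 14),
]

def next_touch(emails_sent: int, responded: bool) -> str:
    """Decide the next action given how many emails were sent and any response."""
    if responded:
        return "route_to_admissions"          # hand back to active nurturing
    if emails_sent >= len(SEQUENCE):
        return "suppress_until_next_intake"   # protect deliverability
    return SEQUENCE[emails_sent][0]
```

Encoding the sequence as data rather than branching logic makes it easy to add or reorder touches without rewriting the decision function.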
Timing Guidelines for Admissions Automation
Re-engagement timing should reflect funnel velocity, not arbitrary calendar triggers.
60 days of inactivity for inquiry-stage leads
30 days of inactivity for application-stage prospects
14 days of inactivity for deposit-stage students
Later funnel stages require shorter intervention windows because decision urgency increases as prospects approach commitment.
These intervals must align with your institution’s enrollment cycle. Use admissions CRM reporting to calculate the median number of days between inquiry and application, and between application and deposit. Reactivation triggers should be calibrated to actual progression data, ensuring interventions occur before momentum is lost.
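A short sketch of that calibration, assuming you can export per-lead transition durations (in days) from the CRM. The 0.75 fraction, i.e. firing at three-quarters of the median transition time, is an illustrative choice rather than a recommended constant:

```python
import statistics

def median_days(durations: list[int]) -> float:
    """Median days between two funnel stages, from exported CRM durations."""
    return statistics.median(durations)

def reactivation_trigger(durations: list[int], fraction: float = 0.75) -> int:
    """Fire before the typical stall point: a fraction of the median transition time.

    The default fraction is an assumption for this sketch, not a benchmark.
    """
    return round(median_days(durations) * fraction)
```

For example, if the median inquiry-to-application gap in your data is 50 days, the trigger fires at day 38, before the typical lead has fully gone cold.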
Behavioral Segmentation in Action
Behavioral segmentation becomes meaningful when applied to real admissions CRM scenarios rather than abstract categories.
Consider Lead A. This student downloaded an Engineering brochure, attended a virtual lab tour, and then showed no activity for 75 days. The behavioral signals indicate strong STEM interest and high academic intent, followed by a mid-funnel stall. The friction point is likely evaluation rather than awareness.
A suitable reactivation offer would include engineering alumni career outcome data, internship and co-op placement statistics, and a program comparison sheet outlining specializations. The objective is to reinforce return on investment and clarify differentiation.
Now consider Lead B. This prospect started an application but abandoned it at the financial section. The segment reflects application-stage friction, likely related to cost or funding uncertainty.
An effective reactivation offer would include a funding webinar invitation, a financial aid checklist, and direct contact details for an admissions advisor. Different behavioral segments require distinct messaging strategies. This precision is the foundation of effective student lead re-engagement campaigns.
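The Lead A / Lead B logic above can be expressed as a simple mapping from diagnosed friction point to content bundle. Every key and asset name here is a hypothetical placeholder:

```python
# Hypothetical mapping from diagnosed friction point to reactivation content.
OFFER_BUNDLES = {
    "mid_funnel_evaluation": [      # e.g. Lead A: strong interest, stalled comparing
        "alumni_outcomes_report",
        "coop_placement_stats",
        "program_comparison_sheet",
    ],
    "application_financial": [      # e.g. Lead B: abandoned at the financial section
        "funding_webinar_invite",
        "financial_aid_checklist",
        "advisor_contact_card",
    ],
}

def reactivation_offer(friction: str) -> list[str]:
    """Select the content bundle for a diagnosed friction point."""
    return OFFER_BUNDLES.get(friction, ["general_checkin"])
```

The fallback bundle matters: a lead whose friction point cannot be diagnosed still gets a low-pressure touch rather than nothing.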
Protecting Sender Reputation
Protecting sender reputation is central to any student lead re-engagement campaign. Large institutions such as UBC and McGill sustain strong email deliverability by enforcing disciplined segmentation, authentication standards, and database hygiene practices.
Technical safeguards should include properly configured SPF, DKIM, and DMARC authentication protocols to protect domain integrity and prevent spoofing. When launching reactivation workflows after long periods of inactivity, domain warming strategies should be used to gradually reintroduce segmented audiences rather than sending to large dormant lists at once.
Best practice begins with removing chronically inactive contacts rather than continuing to send to disengaged records. Engagement-based suppression logic, such as excluding contacts with 12 months of inactivity or repeated non-opens, protects inbox placement and reduces spam complaint risk. Admissions teams should monitor bounce rates, unsubscribe trends, spam complaint signals, and engagement decay over time to identify emerging deliverability risks.
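The engagement-based suppression rule described above could be sketched as follows. The 12-month inactivity window comes from the text; the consecutive non-open threshold is an assumed illustrative value:

```python
from datetime import date, timedelta

def should_suppress(last_engaged: date, consecutive_non_opens: int,
                    today: date, max_non_opens: int = 8) -> bool:
    """Suppress a contact after 12 months of inactivity or repeated non-opens.

    max_non_opens is an assumption for this sketch; tune it to your own
    deliverability data.
    """
    inactive = (today - last_engaged) > timedelta(days=365)
    return inactive or consecutive_non_opens >= max_non_opens
```

Running this filter before every reactivation send keeps chronically disengaged records out of the sequence, which is what protects inbox placement.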
Reactivation campaigns should strengthen database quality, not compromise it. When segmentation logic, authentication protocols, and inactivity thresholds are enforced consistently, re-engagement workflows protect domain health while preserving long-term recruitment opportunity. Deliverability is not separate from enrollment performance. It underpins it.
How This Fits into the Student Recruitment Funnel
The student recruitment funnel typically includes:
Awareness
Inquiry
Application
Offer
Enrollment
Student lead re-engagement campaigns operate most effectively between Inquiry and Application, and between Application and Deposit. These are the stages where intent exists but momentum has slowed.
At the inquiry stage, prospects may be evaluating fit, cost, or program structure. At the application stage, friction often relates to documentation, funding, or uncertainty about next steps. Structured reactivation workflows intervene before disengagement becomes permanent.
By reactivating stalled leads:
Marketing-qualified leads increase.
Admissions staff workload becomes more focused.
Institutions frequently observe improved cost efficiency per enrolled student when reactivation workflows are integrated with segmentation and scoring logic.
Rather than expanding top-of-funnel acquisition, institutions extract greater value from existing inquiry volume.
This approach is particularly important for institutions recruiting across multiple provinces and internationally, where decision cycles are longer, and comparison behavior is higher. Re-engagement provides controlled, stage-aware intervention without increasing acquisition spend.
Stop Buying What You Already Own
Every dormant lead represents prior investment. Paid media budget generated the inquiry. Content production supported the download or event registration. CRM infrastructure stores and tracks the records. Staff time nurtured the initial interaction. When that lead stalls, the sunk cost remains.
A well-structured student lead re-engagement campaign converts underutilized database volume into a renewed enrollment opportunity. Rather than defaulting to additional acquisition spend, institutions can reactivate intent that already exists within their admissions CRM.
Institutions that integrate behavioral segmentation, lead scoring, admissions automation, and structured preference management consistently outperform those relying on repetitive top-of-funnel activity. Precision outperforms volume.
Reactivation is not a secondary tactic. It is a core efficiency strategy within higher education marketing campaigns, particularly in a cost-sensitive, privacy-restricted environment.
Do you need help strengthening your lead nurturing?
Our expert digital marketing services can help you attract and enroll more students!
FAQs
Q: What are re-engagement campaigns?
A: A re-engagement campaign is a structured marketing automation workflow designed to reconnect with prospective students who previously showed interest but have since become inactive within the admissions CRM.
Q: How do you promote student engagement?
A: Through:
Behavioral segmentation
Lead scoring
Relevant content offers
Timely follow-ups
Preference flexibility
Engagement increases when communication reflects prior behaviour.
Q: What are the 4 C’s of student engagement?
A: Common frameworks define the 4 C’s as:
Communication
Connection
Community
Commitment
In recruitment marketing, this translates into consistent messaging, relational storytelling, peer proof, and clear next steps.
The United Kingdom isn’t just focused on age-gating and regulating what its citizens can see and do on the internet through its Online Safety Act. Now, officials are setting their sights on what people can stream, expanding their regulatory focus beyond local television channels and into the workings of non-UK companies like Netflix.
If the censorial headaches unleashed by the Online Safety Act’s attempts to crack down on “harmful content” on the web are any indicator, residents of the UK — as well as the streaming companies officials intend to regulate and their global audiences who watch them — have reason to worry.
This week, communications regulator Ofcom announced “enhanced” regulation for video-on-demand services with more than 500,000 UK-based users. Some of the requirements will address accessibility features such as subtitles, but there will also be a significant focus on the aired material itself: specifically “harmful or offensive material.” Platforms with user bases of this size will be subject to a forthcoming video-on-demand code modeled on rules already in place for broadcasters like the BBC.
“Similar to the Broadcasting Code, this will ensure that news is reported accurately and impartially and audiences are protected against harmful or offensive material,” a UK government press release explains. “Audiences will be able to complain to Ofcom if they see something concerning, and Ofcom will have powers to investigate, and take action, where they consider there has been a breach of the code.”
The Broadcasting Code, which will no doubt influence the upcoming rules for streaming services, explains that TV and radio stations must “provide adequate protection for members of the public from the inclusion in such services of harmful and/or offensive material.”
Programs containing “harmful” material including “offensive language, violence, sex, sexual violence, humiliation, distress, violation of human dignity, [and] discriminatory treatment or language” must be “justified by the context.” That context can include the time and service during which the material is aired but also much more nebulous concepts like “the degree of harm or offence likely to be caused by” it or “the effect of the material on viewers or listeners who may come across it unawares.”
This will very likely mean bad news for UK citizens who don’t want this kind of government overreach into the streaming services they watch and who can, in theory at least, decide for themselves what material is too offensive or harmful for their TVs and tablets. But it could be bad news for the rest of the world, too.
The risk of fines for ambiguously defined harmful content — and fines will no doubt be a penalty for breaches of the not-yet-written code — could very well pressure streamers like Amazon Prime and Netflix to make editorial decisions not to greenlight content that might be perceived as offensive by UK regulators, including shows and projects intended for a global audience. Companies might decide it isn’t worth the risk of either fines in the UK or the production cost of creating content that will be inaccessible to its UK-based audience.
And that can affect far more people than just UK citizens. As FIRE recently explained in its principles for defending speech on the global internet, the interconnectedness of today’s world means that regulations passed in one country can have significant “potential to bleed across borders,” including into the U.S.
We’ll keep an eye on these forthcoming regulations to see what it means for streaming companies — and, perhaps, what airs on our screens, too.
Why the Consideration Phase Is the Key to Higher Ed Enrollment ROI
Lead volume alone is not a reliable indicator of a higher education institution’s enrollment health. While inquiries reflect prospective students’ awareness of and initial interest in a program, they don’t offer much insight into whether the students feel supported, informed, or confident enough to move forward with applying to the program.
The real opportunity lies in how institutions engage with students during the consideration phase — when they are actively evaluating the program’s fit, feasibility, and alignment with their needs.
For adult and online learners in particular, this phase is anything but passive. These students often enter the enrollment funnel highly motivated, driven by career changes, job losses, personal goals, or life transitions. But without consistent engagement, that motivation can fade.
Unlike traditional undergraduates, adult learners typically don’t have counselors, parents, or peers guiding them through deadlines and decisions. In many cases, they’re navigating this process on their own, often while juggling work, family, and competing priorities.
Effective midfunnel marketing efforts provide these students with the clarity and reassurance they need to move from inquiry to application. When institutions focus on this phase intentionally, they can reduce prospects’ uncertainty, ease their decision-making process, and improve the institution’s overall enrollment efficiency.
The Risks of Ignoring the Midfunnel
Institutions that don’t maintain consistent engagement with prospective students throughout the consideration phase risk creating doubt in the students’ minds. Without sustained communication, students may disengage entirely or choose a different school that provides them with clearer direction and more reliable follow-up.
One of the most common reasons students stall during this phase is that they encounter friction. An institution’s admissions requirements may be unclear. Its application steps may be difficult to navigate. Its financial aid information — a major factor influencing students’ decision-making process — may be overwhelming or poorly explained. For adult learners, even small barriers can become stopping points when their time and attention are limited.
By neglecting the midfunnel phase, institutions also run the risk of losing track of potential students altogether. Without a proactive nurturing strategy, prospects can fall into the gap between inquiry and application. If institutions aren’t intentionally tracking and engaging with students during this window, qualified students may simply drift away — not because they lost interest, but because there wasn’t anyone there to guide them forward.
How Engagement Shapes the Consideration Phase
Effective engagement during the consideration phase depends on consistent, personalized communication across channels. Students have different preferences, which is why multichannel outreach across text messaging, email, and chat is essential. Some students prefer phone conversations, while others engage far more readily through texts or emails they can review on their own time.
Archer’s AI-Enabled Admissions approach makes this level of coordinated, cross-channel outreach more scalable, enabling institutions to deliver tailored messaging without overwhelming their admissions and nurturing teams.
A personalized approach is especially important for adult and online learners. These students aren’t just evaluating academic programs — they’re assessing whether an institution understands their goals and concerns. Thoughtful engagement with them helps remove any barriers they face by explaining the institution’s processes, clarifying its timelines, and proactively addressing the students’ questions about financial aid and transfer credits.
Behavioral data about prospective students enhances a university’s midfunnel marketing efforts by providing insight into what matters most to each student. Understanding where the students spend their time — what they click on, revisit, or ignore — helps identify their areas of hesitation. When used responsibly, this data enables timely, relevant follow-up with prospects that feels supportive rather than intrusive.
Engagement strategies should also adapt to students’ different profiles. Bachelor’s degree prospects and master’s degree prospects behave differently and require distinct outreach cadences. Monitoring students’ engagement patterns allows institutions to fine-tune their communication frequency and modality to accommodate each student’s habits and preferences.
Converting Intent Into Enrollment
The midfunnel phase is often defined as the consideration phase between inquiry and application, but a prospect’s deliberations rarely stop there. In reality, students continue making decisions throughout the enrollment journey as they evaluate a school’s financial aid offer, transfer credit outcomes, registration process, and time-to-completion expectations. Each step introduces new variables that can either reinforce students’ confidence or trigger their hesitation.
Successful enrollment marketing strategies support students throughout this entire decision-making arc. Providing students with clear guidance, continuous communication, and proactive check-ins helps them move from interest to intent to application and enrollment. The goal is not excessive hand-holding but rather supportive co-piloting, providing students with the direction they need to move forward with clarity.
This is particularly important for adult learners who haven’t been to school in years and may feel nervous or unsure about the process. When things don’t go as expected or they run into a challenge, they may start to question whether they can make it work. Regularly engaging with students at key milestone points — such as during registration and the first week of classes — helps reinforce their sense of preparedness.
Ultimately, success during the enrollment funnel process should be measured by conversion quality, not raw lead counts. Strong conversion rates indicate that students are well informed, supported, and confident — conditions that also contribute to better retention and degree completion outcomes.
Key Takeaways
The consideration phase is where enrollment momentum is built or lost, making sustained midfunnel engagement an essential step in improving conversion outcomes.
AI-enabled engagement tools allow for scalable, personalized communication with prospects that keeps institutions connected to them throughout their decision-making process.
Making the Consideration Phase Count
The consideration phase is where prospective students’ enrollment momentum is either strengthened or lost. Institutions that invest in sustained, personalized midfunnel marketing efforts can reduce the friction students face and help them move forward with confidence. By staying engaged with prospects at this critical phase, colleges can convert interest into intent — and intent into lasting enrollment outcomes.
Archer Education partners with accredited universities to improve their performance during the consideration phase through tech-enabled, personalized marketing strategies. Using techniques that range from multichannel engagement powered by Onward to data-informed enrollment management solutions, we help institutions stay connected with prospective students when it matters most.
To learn more about how Archer supports sustained student engagement and conversion growth, connect with our team today.
In the years leading up to the American Revolution, newspapers and pamphlets overflowed with essays signed “Publius,” “Brutus,” and “A Farmer.” Those arguments helped shape a nation, but the authors’ real names were nowhere to be found.
Americans have long relied on anonymous speech to challenge the powerful, protect dissenters, and keep the focus on ideas rather than identities. That tradition has endured into America’s digital age, even as anonymous speech has become more controversial.
FIRE is suing Attorney General Pamela Bondi and Secretary of Homeland Security Kristi Noem for strong-arming Facebook and Apple to censor groups and apps that use public information to report ICE activity. Whether on Facebook, in an app, on a website — or even through flyers, pamphlets, or word-of-mouth — Americans enjoy a fundamental First Amendment right to document and criticize law enforcement.
But some people have raised objections centered on the relationship between free speech and law enforcement. So let’s answer some common criticisms we’ve faced.
There’s no First Amendment exception for “doxxing.” First, “doxxing” is not a legal term with a stable, accepted definition. While people generally use it to mean publicly identifying someone, usually online, different people will have different understandings about what does or doesn’t count as “doxxing.” And right now, federal government officials are using “doxxing” in an aggressive and expansive way as an all-purpose verb that blurs the line between protected speech and unprotected conduct. They’re suggesting the former can somehow constitutionally be punished.
Second, the core content shared on our clients’ “Eyes Up” app and “Chicagoland” Facebook group involved observations, photos, and videos of government agents carrying out enforcement activity in public. Posting information about what law enforcement officers — public servants working in public — are doing and where they’re doing it isn’t “doxxing.” It’s speech protected by the First Amendment, especially when the person being documented is a law enforcement officer operating on public streets and sidewalks. What these platforms did was remind law enforcement that what’s done in public is public knowledge.
Shutting down speech under any vague rationale, much less “doxxing,” chills lawful speech and makes it harder to hold government officials and agents accountable when they violate the law.
Here are some scary-sounding words: conspiracy, harassment, targeting, intimidation, misinformation. These days, it’s trendy on both sides to cast speech you don’t like in these terms. If you were skeptical when the Biden administration labeled COVID-related posts as “misinformation” and pressured Meta and X to ban them, you should be equally skeptical now. The government can’t handwave away constitutional protection with ominous buzzwords.
To lose constitutional protection, speech must fit within one of the First Amendment’s narrow unprotected categories. And the government can prosecute people for physically interfering with ICE operations or assaulting an officer: physically blocking an officer from entering a government building, for example, or standing in front of an officer’s vehicle to prevent it from moving. Speech or expressive conduct could be grounds for charges only if it rises to the level of, say, constitutionally unprotected incitement or the obstructive conduct described here.
In 1969, the Supreme Court held in Brandenburg v. Ohio that prosecutions for incitement based on speech are constitutional only where advocacy “is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” That’s a high bar, and for good reason. Eyes Up and Chicagoland don’t come close to it.
In fact, Chicagoland moderators actively removed content that even suggested violence. Eyes Up didn’t provide real-time location data, and its moderators individually approved each video before posting. That’s not calling for violence at all, let alone imminent violence.
You’re right, Archduke Food Baby: The First Amendment doesn’t literally say “you can record law enforcement.” But as with other constitutional rights, the First Amendment’s meaning doesn’t just come from bare words on the page. It comes from the underlying intent to protect speech — and speakers from government overreach — as articulated by court decisions giving those words constitutional force. Those decisions are clear: Every federal appeals court to address the issue has recognized a First Amendment right to record government officials like police engaged in their duties in public.
It bears repeating: The government can prosecute people for physically interfering with ICE operations or assaulting an officer. But the government can’t ban lawful tools just because someone else could (or did) use that tool to commit a crime. If a person uses Google Maps to find an ICE facility and vandalize it, the government couldn’t just shut down Google Maps. When we’re talking about speech, the First Amendment doesn’t bend the knee — even if that makes law enforcement more difficult because officers have to take constitutional interests into account.
This case isn’t about blowing whistles or blocking roads. It’s about the government pressuring private companies into censoring protected speech on private platforms.
That said, the First Amendment protects sounds. While DarkTechObserver might think that blowing a whistle isn’t “criticism,” the First Amendment doesn’t let the government censor expression based on the content of that expression — or the method used to convey it, whether it’s a pen or a whistle. Of course, depending on the context, the government can place reasonable content- and viewpoint-neutral limits on how loud you can be in public. It just can’t base those decisions on the content of what you’re saying.
Besides, there’s a difference between physically preventing law enforcement from operating and simply being loud at a protest. Whistles themselves are not obstruction, and as we explain above, the government can’t ban a tool just because that tool can be used in potentially illegal ways.
These are fact-specific questions that are difficult to answer in the abstract. But one thing is certain: We don’t just take the government’s word for it that a line has been crossed. The First Amendment requires the government to prove that a whistle, a bullhorn, or even people standing in the road were actually intended to prevent officers from carrying out lawful law enforcement activity.
Yes. FIRE was very critical of the Biden administration’s jawboning efforts. Jawboning is just censorship by proxy: If it’s illegal for the government to censor certain speech directly, then using a middleman doesn’t change a thing. As nonpartisan free speech defenders, we have always called out censorship by both sides, and we always will.
In 2021, officials under President Biden pressured social media companies to take down COVID-related posts in the name of public health — a classic case of jawboning. FIRE filed an amicus brief in the ensuing case, Murthy v. Missouri, arguing that the Biden administration violated the First Amendment by attempting to interfere with private content moderation. The Court ultimately held that the plaintiffs lacked standing to sue because they couldn’t prove the government’s actions were the direct cause of the harm to their speech. But Justice Amy Coney Barrett, writing for the majority, suggested that the government’s actions would have been unconstitutional had the plaintiffs been able to show the government directly caused the speech restrictions.
FIRE also filed an amicus brief on the National Rifle Association’s behalf in the Supreme Court’s 2024 case NRA v. Vullo. The Vullo Court held that a New York state official violated the NRA’s constitutional rights by threatening regulatory enforcement against banks and insurance companies that did business with the gun rights group. Jawboning was wrong then, and it’s wrong now.
Overall verdict: Compared with the TEF 2023 Data Dashboard, the latest one handles more like the dash of a modern EV. Go steadily at first and have a play, safe in the knowledge that you aren’t going to unnecessarily damage the respectable vehicle you are looking at in the 360-degree camera. The new TEF Data Dashboard is a very powerful instrument and at first look it is clear and intuitive. But go deeper into the data, using the Filters for example, and you will need to move from no experience required to more expert skill.
Data are fundamental to maintaining and improving the student experience. Data, in the form of statistically benchmarked indicators, inform understanding and, in the right hands, direct strategic delivery. Universities are keen on big institutional interventions, often running several at the same time, sometimes without the clarity gained from what the data are telling them. Often, these aren’t evaluated well, don’t work or run into the sand. Staff become cynical; students remain either unaware or confused. Benchmarked data help prioritisation and selection for effective delivery. Reliable Data Dashboards are essential.
And, therefore, for those who love data and understanding competitor positions, the new TEF Data Dashboard, launched this week, is essential to integrated quality and improvement.
I tested it in TEF Panel member mode and looked at the data for a variety of Providers whose indicators and performance I had known in TEF 2023. This included universities (low to high tariff institutions), colleges (FEC, private providers) and specialists. I thought about it as I would if I were assessing the TEF performance of a higher education provider across Student Experience and Outcomes, and from the perspective of a provider wishing to understand its data over the Time Series.
For both functions, the improved power and handling are very welcome. Start as you would have done in the 2023 dashboard by choosing either Experience or Outcomes and you are presented with a series of deceptively simple tabs. The Overall Experience tab presents performance across basic sections of the National Student Survey by Measure (e.g. Teaching on My Course, Assessment and Feedback) and Mode (e.g. Full Time, Part-Time, Degree Apprenticeship).
Gone are the complicated illustrations of overall distribution and variability, showing central tendencies alongside spread of results. Instead, data are simply numerical and colour coded. They reference statistical confidence about whether a result is materially above, broadly in line with, or materially below benchmark, and it’s extremely easy to evaluate how an institution is doing overall. No judgment needed; the Dashboard does it for you. It will be extremely helpful for providers, reviewers and student representatives alike.
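For readers curious about what sits behind a "materially above / broadly in line / materially below" flag, a toy sketch helps. To be clear, this is a crude normal-approximation stand-in for illustration only; the OfS's actual benchmarking and materiality methodology is considerably more sophisticated.

```python
import math

# Toy illustration only: the real TEF/OfS methodology for materiality
# and statistical confidence is more sophisticated than this z-test.
def classify_against_benchmark(indicator, benchmark, n, z_crit=1.96):
    """Classify a proportion indicator against its benchmark using a
    rough two-sided z-test at roughly 95% confidence."""
    se = math.sqrt(benchmark * (1 - benchmark) / n)
    z = (indicator - benchmark) / se
    if z > z_crit:
        return "materially above"
    if z < -z_crit:
        return "materially below"
    return "broadly in line"
```

Under this toy rule, an 85 per cent positive result against an 80 per cent benchmark reads as materially above with 400 respondents, but the same gap with only 100 respondents reads as broadly in line, which is why the Dashboard's statistical confidence flags matter as much as the raw percentages.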
It gets better. To really understand and to be able to improve performance, one of the most crucial elements is to know how consistent the overall findings are, by measure (NSS) and mode (FT, PT etc) and Time. Unless an institution can drill down to its Split benchmarked data – how provider subjects and student groups are doing over time – it’s impossible to get a grip on understanding what’s going well and what isn’t and to work out how to improve performance.
The Split Consistency tab provides you with that information at a glance. Still welcome is the Partnership Split which gives clear guidance about performance of partnership students, with franchised degrees, for example.
The Split overview display focuses on the performance of the Splits, making it easier to see whether and by how much data are inconsistent. Take Teaching on my Course for a particular provider as an example. Imagine it is overall Broadly in Line with benchmark (marked with a black circle). How consistent are subgroups with this performance? The display gives you an immediate answer: splits consistent with the overall pattern (broadly in line, in this case) appear as blank cells, while inconsistent splits appear in other cells, above or below benchmark. It is therefore easy to see whether the performance of full-time students across different subgroups is consistent or inconsistent for Teaching on my Course.
Of course, this was more than possible to do in previous dashboards, but it required manual searching and some degree of judgement – was there an inconsistent pattern, and did it appear to be material? The Split Consistency tab does the work for you.
The next tab gives you the Detailed View, through which you can explore overall NSS data and Splits in even more depth. It is at this point that the Dashboard requires a bit more expertise. Filters are available to select and search in more detail, allowing drill-down to understand inconsistent performance (e.g. across full-time and part-time, or degree apprenticeships, by subject and by student group).
Back to the Overview page and Outcomes. You are now in the territory of Continuation, Completion and Progression, benchmarked and presented across the modes of full-time, part-time and degree apprenticeships. Again, Indicators (e.g. % Positive) are presented much more clearly than in the previous dashboard, and statistical confidence about the materiality of difference against benchmark is given in percentage terms and colour coded. The data are also presented in a Split Consistency tab and in Detailed View, including partnership information. Whether performance is probably below the B3 Threshold is usefully colour coded, and the information includes the Graduate Outcomes quintile.
The B3 Thresholds tab will be invaluable for Provider planners, giving an immediate view of whether performance is in line with, above or below B3 thresholds, colour coded, for Continuation, Completion and Progression. Data are there by mode, level and splits: time series, taught by provider, course type, subject and student groups.
One useful element of the 2023 TEF Data Dashboard was the ability to search (albeit manually) whether the performance of an individual student group or an individual subject area is materially inconsistent across different categories of the NSS or over time: is the performance of your Business course materially below benchmark across the various sections of the NSS, over time, or for particular student groups? This information can be found through filtering in Detailed View – easy-peasy when you construct the right filters, but requiring careful thought.
Interpreting the data does require planned analytics resource and expertise. Used well, the new dashboard will be invaluable in helping to better understand and crystallise the areas that are doing well and those that need real attention to improve.
I was concerned when the 2024 TEF Data Dashboard disappeared at the turn of the year. No longer would you be able to look across 2023 and 2024 to chart performance, assess trajectory and construct your narrative. However, the latest TEF Data Dashboard gives you the right time series to track whether your metrics journey is improving or worsening, and to tell your story, so 2024 is arguably no longer needed.
A minor gripe: perhaps replace the red/green colour coding with a yellow/blue option for those who are red/green colour blind? And, more substantively, it would be useful to have an Overview tab that brings together Experience and Outcomes.
In conclusion: the new TEF Data Dashboard is a great innovation. It’s easy to use, even fun (for geeks like me), absorbing and intuitive in parts. It’s fully functional, ensuring initial use is not hampered by lack of expertise. But using Filters and Splits will require some practice and training. This data-driven improvement Dashboard is a great step forward.
This blog was originally published on 23rd February and has since been updated.
As a student, I always pictured myself teaching: writing on chalkboards, mentoring students and having the freedom to pursue intellectual curiosity. I imagined this career long before salary ever entered the picture. Many faculty share this story; we did not choose academia because it promised financial rewards. We chose it because we believe in the value of higher education and value the opportunity to make meaningful contributions to knowledge production.
A recent report from the American Association of University Professors found that, despite a 3.8 percent increase in full-time faculty salaries in fall 2024, inflation-adjusted compensation remained 6.2 percent below pre-pandemic levels. The same report reveals that growth in salaries for university presidents has outpaced the growth in faculty salaries, with the median salaries of university presidents ranging from $268,000 at public associate-granting institutions to more than $900,000 at private doctoral universities. These findings, taken together, underscore a broader, systemic concern: While institutions claim to invest in people, many faculty, having seen their wages decline in real terms, feel left behind.
As a statistician who served for more than a decade on the faculty compensation committee and the priorities and budget committee at my institution, I have seen the evolution of salary models from the inside. A rule of thumb in statistical modeling is that if you cannot see all elements of a model and cannot reproduce it, you cannot trust it. As an expert in modeling, I can say without a doubt that faculty salary models fail these basic tests of clarity and reproducibility.
As faculty try to understand which critical features determine the numbers on their paychecks, we realize that salary models are often unable to provide clear answers because they evolved piecemeal over decades, are shaped by constraints and ultimately represent an accumulation of ad hoc decisions, resulting in inequitable and complex systems that only a few people can fully understand. Some senior faculty may recall fragments of past rationale, but there are no clear explanations. And the newer faculty inherit a system shaped by decisions and compromises made long ago. Over time, we see the final number but not the logic behind it, and faculty salary models feel like black boxes.
The most pressing problems regarding salary models are that they are opaque, overly complex, poorly researched and disconnected from institutional values. It is no wonder then that each year, especially as colleges and universities start counting their applicants and estimating the yield for incoming students, conversations about faculty salaries spark a mix of confusion and concern at a national level.
A salary model need not be simple, but it should be understandable, including by:
Having clear definitions and an explicit role for each factor, such as rank, years of service, years in a rank, discipline, merit, etc.
Having explicit information about how rank, years of service, years in a rank, discipline, merit and market factors (such as peer benchmarking, geographical cost of labor, retention pressures, inflation and demand for a discipline) interact.
Having transparent formulas to reproduce salary calculations.
Having a principled design that is equity-centered and values retention, competitiveness and compression adjustments.
Having predictability so the faculty can anticipate how future performance will impact compensation.
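To make the contrast with a black box concrete, here is what a reproducible model could look like in miniature. Every coefficient, increment, and market factor below is invented for illustration and is not any institution's actual formula.

```python
# Hypothetical transparent salary model: all coefficients, increments,
# and market factors are invented for illustration only.
RANK_INCREMENT = {"assistant": 0, "associate": 8000, "full": 18000}
MARKET_FACTOR = {"statistics": 1.10, "history": 1.00}  # peer benchmarking

def model_salary(base, rank, years_in_rank, discipline, merit_units,
                 step=800, merit_value=1500):
    """Reproducible salary: base plus rank increment plus seniority
    steps, scaled by a published market factor, plus merit awards."""
    core = base + RANK_INCREMENT[rank] + step * years_in_rank
    market = MARKET_FACTOR.get(discipline, 1.00)
    return round(core * market + merit_value * merit_units, 2)
```

Because every input and weight is published, any faculty member can recompute their own number, which is exactly the reproducibility test that, as argued above, current models fail.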
Once all elements of the salary model are defined, discussions can shift from suspicion of hidden decision-making to questions that highlight institutional values. What does a balance in teaching, research and service look like? How can we identify and address disparities across gender, race, rank or discipline? Should merit adjustments be part of the model? How should market factors be adjusted for? What trade-offs are we willing to make as an institution under fiscal constraints? These are governance questions, not modeling mysteries.
To move this conversation forward responsibly, we must also acknowledge the administrative perspective. Transparency is often framed as a demand placed on administrators—a demand that they are often perceived as not having met, or of not having a desire to meet. Having served alongside administrators in budget deliberations, I understand there is a constant tug-of-war between finite resources and competing priorities. There are many valid budgetary constraints, such as uncertain enrollments, inflation, deferred maintenance and the costs of maintaining academic excellence and student support. By clearly sharing the competing priorities and long-term planning considerations, the administration can invite faculty into a constructive dialogue so both sides can work together to find sustainable and equitable solutions.
Transparency is a powerful way to restore a sense of alignment between faculty work and institutional mission, and to build a community committed to clarity, equity and shared purpose. Faculty must articulate what values they want in a model, and administration must share data, assumptions and constraints. Predictability helps faculty plan their careers; understanding how salary evolves can make them feel more secure. If faculty feel secure and confident in how their pay is determined and if administrations openly communicate the financial constraints they face, the conversation moves from “How was this number generated?” to “Is this the model we want for our community?”
As higher education faces financial pressures and intense public scrutiny, this is a call to open the black box of faculty salary models. This begins with a commitment to openness, but it also requires a culture shift. We must stop treating faculty salary structures as overly technical and recognize them as central to equity, morale and long-term institutional sustainability. By doing this, we not only build confidence in the process but also honor the intellectual and professional contributions of faculty members. In this sense, a transparent salary model becomes a living document that reflects institutional priorities, making compensation more than a number—it can become a statement of shared values.
Priya Kohli is a professor of statistics and chair of the Department of Mathematics and Statistics at Connecticut College.