Category: Standards

  • The Office for Students steps on to shaky ground in an attempt to regulate academic standards


    The funny thing about the story about today’s intervention by the Office for Students is that it is not really about grade inflation, or degree algorithms.

    I mean, it is on one level: we get three investigation reports on providers related to registration condition B4, and an accompanying “lessons learned” report that focuses on degree algorithms.

    But the central question is about academic standards – how they are upheld, and what role an arm of the government has in upholding them.

    And it is about whether OfS has the ability to state that three providers are at “increased risk” of breaching a condition of registration on the scant evidence of grade inflation presented.

    And it is certainly about whether OfS is actually able to dictate (or even strongly hint at its revealed preferences on) the way degrees are awarded at individual providers, or the way academic standards are upheld.

    If you are looking for the rule book

    Paragraph 335N(b) of the OfS Regulatory Framework is the sum total of the advice it has offered before today to the sector on degree algorithms.

    The design of the calculations that convert a collection of module marks (each assessed carefully against criteria set out in the module handbook, and cross-checked by an academic from another university against a shared understanding of what should be expected of students) into a degree award at a given classification is a potential area of concern:

    where a provider has changed its degree classification algorithm, or other aspects of its academic regulations, such that students are likely to receive a higher classification than previous students without an increase in their level of achievement.

    These circumstances could potentially be a breach of condition of registration B4, which relates to “Assessment and Awards” – specifically condition B4.2(c), which requires that:

    academic regulations are designed to ensure that relevant awards are credible;

    Or B4.2(e), which requires that:

    relevant awards granted to students are credible at the point of being granted and when compared to those granted previously

    The current version of condition B4 came into force in May 2022.
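    To make the object of all this concrete, here is a deliberately simplified sketch of the kind of degree classification algorithm under discussion. The weights and band boundaries are hypothetical – real providers’ rules add credit weighting by level, discounting of the weakest modules, and borderline conventions – but the shape of the calculation is the same: module marks in, classification out.

```python
def classify(module_marks, weights=None):
    """Weighted mean of module marks, mapped to a UK classification band.

    Hypothetical example: the band boundaries (70/60/50/40) follow the
    common convention, but weighting, rounding and borderline rules
    vary from provider to provider.
    """
    if weights is None:
        weights = [1] * len(module_marks)
    mean = sum(m * w for m, w in zip(module_marks, weights)) / sum(weights)
    if mean >= 70:
        return "First"
    if mean >= 60:
        return "Upper second (2:1)"
    if mean >= 50:
        return "Lower second (2:2)"
    if mean >= 40:
        return "Third"
    return "Fail"
```

    The regulatory concern is that a change to the weights or boundaries in a calculation like this – say, weighting final-year marks more heavily, or discounting the weakest module – can raise the proportion of firsts and 2:1s awarded without any change in underlying student attainment.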

    In the mighty list of things that OfS needs to have regard to that we know and love (section 2 of the 2017 Higher Education and Research Act), we learn that OfS has to pay mind to “the need to protect the institutional autonomy of English higher education providers” – and, in the way it regulates that it should be:

    Transparent, accountable, proportionate, and consistent and […] targeted only at cases where action is needed

    Mutant algorithms

    With all this in mind, we look at the way the regulator has acted on this latest intervention on grade inflation.

    Historically the approach has been one of assessing “unexplained” (even once, horrifyingly, “unwarranted”) good honours (first or 2:1) degrees. There’s much more elsewhere on Wonkhe, but in essence OfS came up with its own algorithm – taking into account the degrees awarded in 2010-11 and the varying proportions of students in given subject areas, with given A levels, and of a given age – that starts from the position that non-traditional students shouldn’t be getting as many good grades as their peers (those arriving straight from school with three good A levels), and that if they do, this is potentially evidence of a problem.

    To quote from annex B (“statistical modelling”) of last year’s release:

    “We interact subject of study, entry qualifications and age with year of graduation to account for changes in awarding […] our model allows us to statistically predict the proportion of graduates awarded a first or an upper second class degree, or a first class degree, accounting for the effects of these explanatory variables.”

    When I wrote this up last year I plotted the impact each of these variables is expected to have – the fixed effect coefficient estimates show the increase (or decrease) in the likelihood of a person getting a first or upper second class degree.
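    As a rough illustration of what such a fixed-effects model does (the variable names and coefficient values below are invented for illustration, and are not taken from OfS’s model), each coefficient is an adjustment on the log-odds scale, and the logistic function turns the summed log-odds into a predicted probability of a “good” degree:

```python
import math

# Hypothetical fixed-effect coefficients on the log-odds scale.
# Names and values are invented; OfS's actual model interacts subject,
# entry qualifications and age with year of graduation.
coefs = {
    "intercept": 0.8,              # baseline log-odds of a first or 2:1
    "subject:law": -0.3,
    "entry:BBB": -0.5,
    "age:mature": -0.4,
    "entry:BBB x year:2023": 0.6,  # interaction: change in awarding over time
}

def predict_good_honours(features):
    """Sum the relevant coefficients and map the log-odds to a probability."""
    z = coefs["intercept"] + sum(coefs[f] for f in features)
    return 1 / (1 + math.exp(-z))

# e.g. a mature student with BBB entry grades graduating in 2023
p = predict_good_honours(["entry:BBB", "age:mature", "entry:BBB x year:2023"])
```

    The “unexplained” awards OfS worries about are, in effect, the gap between predicted probabilities like this and what providers actually awarded – which is why the choice of variables and interactions in the model matters so much.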


    One is tempted to wonder whether the bit of OfS that deals with this issue ever speaks to the bit that is determined to drive out awarding gaps based on socio-economic background (which, as we know, correlates very closely with A level results). This is certainly one way of explaining why – if you look at the raw numbers – the providers awarding the most first class and 2:1 degrees are Russell Group universities and small selective specialist providers.


    Based on this model (which for 2023-24 failed to accurately predict fully fifty per cent of the grades awarded) OfS selected – back in 2022(!) – three providers where it felt that the “unexplained” awards had risen surprisingly quickly over a single year.

    What OfS found (and didn’t find)

    Teesside University was not found to have ever been in breach of condition B4 – OfS was unable to identify statistically significant differences in the proportion of “good” honours awarded to a single cohort of students if it applied each of the three algorithms Teesside has used over the past decade or so. There has been – we can unequivocally say – no evidence of artificial grade inflation at Teesside University.

    St Mary’s University, Twickenham and the University of West London were found to have historically been in breach of condition B4. The St Mary’s issue related to an approach introduced in 2016-17 and replaced in 2021-22; at West London the offending practice was introduced in 2015-16 and replaced in 2021-22. In both cases the replacement was made because of an identified risk of grade inflation. And at each provider a small number of students may have had their final award calculated using the old approach since 2021-22, to avoid arbitrarily changing an approach that students had already been told about.

    To be clear – there is no evidence that either university has breached condition B4 (not least because condition B4 came into force after the offending algorithms had been replaced). In each instance the provider in question has made changes based on the evidence it has seen that an aspect of the algorithm is not having the desired effect, exactly the way in which assurance processes should (and generally do) work.

    Despite none of the providers in question currently being in breach of B4, all three are now judged to be at an increased risk of breaching condition B4.

    No evidence has been provided as to why these three particular institutions are at an “increased risk” of a breach while others who may use substantially identical approaches to calculating final degree awards (but have not been lucky enough to undergo an OfS inspection on grade inflation) are not. Each is required to conduct a “calibration exercise” – basically a review of their approach to awarding undergraduate degrees of the sort each has already completed (and made changes based on) in recent years.

    Vibes-based regulation

    Alongside these three combined investigation/regulatory decision publications comes a report on bachelors’ degree classification algorithms. This purports to set out the “lessons learned” from the three reports, but it actually sets up what amounts to a revision to condition B4.

    We recognise that we have not previously published our views relating to the use of algorithms in the awarding of degrees. We look forward to positive engagement with the sector about the contents of this report. Once the providers we have investigated have completed the actions they have agreed to undertake, we may update it to reflect the findings from those exercises.

    The important word here is “views”. OfS expresses some views on the design of degree algorithms, but it is not the first to do so and there are other equally valid views held by professional bodies, providers, and others – there is a live debate and a substantial academic literature on the topic. Academia is the natural home of this kind of exchange of views, and in the crucible of scholarly debate evidence and logical consistency are winning moves. Having looked at every algorithm he could find, Jim Dickinson covers the debates over algorithm characteristics elsewhere on the site.

    It does feel like these might be views expressed ahead of a change to condition B4 – a change that OfS does have the power to make, but one that would most likely (in terms of good regulatory practice, and given the sensitive nature of work related to academic standards, managed elsewhere in the UK by providers themselves) be subject to a full consultation. OfS is suggesting that it is likely to find certain practices incompatible with the current B4 requirements – which amounts to a de facto change in the rules, even if it has been done under the guise of guidance.

    Providers are reminded that (as they are already expected to do) they must monitor the accuracy and reliability of current and future degree algorithms – and there is a new reportable event: providers need to tell OfS if they change their algorithm in a way that may result in an increase in the proportion of “good” honours degrees awarded.

    And – this is the kicker – when they do make these changes, the external calibration they do cannot relate to external examiner judgements. The belief here is that external examiners only ever work at a module level, and don’t have a view over an entire course.

    There is even a caveat – a provider might ask a current or former external examiner to take an external look at their algorithm in a calibration exercise, but the provider shouldn’t rely solely on their views as a “fresh perspective” is needed. This links back to that rather confusing section of the recent white paper about “assessing the merits of the sector continuing to use the external examiner system” while apparently ignoring the bits about “building the evidence base” and “seeking employers’ views”.

    Academic judgement

    Historically, all this has been a matter for the sector – academic standards in the UK’s world-leading higher education sector have been set and maintained by academics. As long ago as 2019 the UK Standing Committee for Quality Assessment (now known as the Quality Council for UK Higher Education) published a Statement of Intent on fairness in degree classification.

    It is short, clear, and to the point, as was then the fashion in quality assurance circles. Right now we are concerned with paragraph b, which commits providers to protecting the value of their degrees by:

    reviewing and explaining how their process for calculating final classifications, fully reflect student attainment against learning criteria, protect the integrity of classification boundary conventions, and maintain comparability of qualifications in the sector and over time

    That’s pretty uncontroversial, as is the recommended implementation pathway in England: a published “degree outcomes statement” articulating the results of an internal institutional review.

    The idea was that these statements would show the kind of quantitative trends that OfS gets interested in, offer some assurance that institutional assessment processes meet the relevant reference points, reflect the expertise and experience of external examiners, and provide a clear and publicly accessible rationale for the degree algorithm. As Jim sets out elsewhere, in the main this has happened – though it hasn’t been an unqualified success.

    To be continued

    The release of this documentation prompts a number of questions, both on the specifics of what is being done and more widely on the way in which this approach does (or does not) constitute good regulatory practice.

    It is fair to ask, for instance, whether OfS has the power to decide that it has concerns about particular degree awarding practices, even where it is unable to point to evidence that these practices are currently having a significant impact on degrees awarded, and to promote a de facto change in interpretation of regulation that will discourage their use.

    Likewise, it seems problematic that OfS believes it has the power to declare that the three providers it investigated are at risk of breaching a condition of registration because they have an approach to awarding degrees that it has decided it doesn’t like.

    It is concerning that these three providers have been announced as being at higher risk of a breach when other providers with similar practices have not. It is worth asking whether this outcome meets the criteria for transparent, accountable, proportionate, and consistent regulatory practice – and whether it represents action being targeted only at cases where it is demonstrably needed.

    More widely, the power to determine or limit the role and purpose of external examiners in upholding academic standards has not historically been one held by a regulator acting on behalf of the government. The external examiner system is a “sector recognised standard” (in the traditional sense) and generally commands the confidence of registered higher education providers. And it is clearly a matter of institutional autonomy – remember that under HERA OfS needs to “have regard to” institutional autonomy over assessment, and it is difficult to square this intervention with that duty.

    And there is the worry about the value and impact of sector consultation – an issue picked up in the Industry and Regulators Committee review of OfS. Should a regulator really be initiating a “dialogue with the sector” when its preferences on the external examiner system are already so clearly stated? And it isn’t just the sector – a consultation needs to ensure that the views of employers (and other stakeholders, including professional bodies) are reflected in whatever becomes the final decision.

    Much of this may become clear over time – there is surely more to follow in the wider overhaul of assurance, quality, and standards regulation that was heralded in the post-16 white paper. A full consultation will help centre the views of employers, course leaders, graduates, and professional bodies – and the parallel work on bringing the OfS quality functions back into alignment with international standards will clearly also have an impact.


  • The advantages of supplementing curriculum


    Key points:

    Classroom teachers are handed a curriculum they must use when teaching. That curriculum is designed to bring uniformity, equity, and accountability into classrooms, and to ensure that every child has access to instruction aligned with state standards. It provides a roadmap for instruction, but anyone who has spent time in a classroom knows that no single curriculum can fully meet the needs of every student.

    In other words, even the most carefully designed curriculum cannot anticipate the individual needs of every learner or the nuances of every classroom. This is why supplementing curriculum is a vital action that skilled educators engage in. Supplementing curriculum does not mean that teachers are not teaching the required curriculum. In fact, it means they are doing even more to ensure student success.

    Students arrive with different strengths, challenges, and interests. Supplementing curriculum allows teachers to bridge the inevitable gaps among their students. For example, a math unit may assume fluency with multiplying and dividing fractions, but some students may not recall that skill, while others are ready to compute with mixed numbers. With supplementary resources, a teacher can provide both targeted remediation and enrichment opportunities. Without supplementing the curriculum, one group may fall behind while the other becomes disengaged.

    Supplementing curriculum can also help make learning relevant. Many curricula are written to be broad and standardized. Students are more likely to connect with lessons when they see themselves reflected in the content, so choosing a novel that reflects the student population can help students master the standard at hand.

    Inclusion is another critical reason to supplement. No classroom is made up of one single type of learner. Students with disabilities may need graphic organizers or audio versions of texts. English learners may benefit from bilingual presentations of material or visual aids. A curriculum may hit all the standards of a grade, but cannot anticipate the varying needs of students. When a teacher intentionally supplements the curriculum, every child has a pathway to success.

    Lastly, supplementing empowers teachers. Teaching is not about delivering a script; it is a profession built on expertise and creativity. When teachers supplement the prescribed curriculum, they demonstrate professional judgment and enhance the mandated framework. This leads to a classroom where learning is accessible, engaging, and responsive.

    A provided curriculum is the structure of a car, but supplementary resources are the wheels that let students move. When done intentionally, supplementing curriculum enables every student to be reached. In the end, the most successful classrooms are not those that simply follow a book, but those where teachers skillfully use supplementary materials to benefit all learners. Supplementing curriculum does not mean that a teacher is not using the curriculum – it simply means they are doing even more for their students.



  • What it really takes to lead successful grading reform


    This post originally appeared on the Otus blog and is republished here with permission.

    Grading reform is messy, but it’s worth it.

    That was the central message from Jessica Espinoza and Alice Opperman of Emerson Public Schools (NJ), who shared their decade-long journey implementing standards-based grading (SBG) during their session at ISTELive+ASCD 2025.

    What started as a deeply rooted effort to promote equity has grown into a districtwide, cross-curricular system that blends teacher voice, clarity for families, and support from the right tools.

    Here’s what they learned along the way, and why they’re still learning.

    Huge takeaways for school leaders considering a shift to SBG

    Clarity starts with fewer, better standards

    In the early stages of their grading reform, Emerson tried to be comprehensive; too comprehensive, perhaps. Their first report card included nearly every New Jersey Common Core standard, which quickly became overwhelming for both teachers and families. Over time, they shifted to focusing on broader, more meaningful standards that better reflected student learning.

    “So approximately 10 years ago, we started with a standard-based report card in grades K-6. Our report card at that time listed pretty much every standard we could think of. We realized that we really needed to narrow in on more umbrella standards or standards that really encapsulate the whole idea. We took away this larger report card with 50 different standards, and we went into something that was more streamlined. That really helped our teachers to focus their energy on what is really important for our students.” 
    –Jessica Espinoza, Principal, Emerson Public Schools (NJ)

    Lasting change doesn’t happen without teacher buy-in

    Grading reform can’t succeed unless educators believe in it. That’s why Emerson made intentional space for teacher voice throughout the process; through pilots, surveys, honest conversations, and, most importantly, time. The district embraced a long-term mindset, giving teachers flexibility to experiment, reflect, and gradually evolve their practices instead of expecting instant transformation.

    “We had some consultants sit with teams of teachers to work on these common scoring criteria. They were fully designed by teachers, and their colleagues had the chance to weigh in during the school year so that it didn’t feel quite so top-down…the teachers had such a voice in making them that it didn’t feel like we were taking their autonomy away.”
    –Alice Opperman, Director of Curriculum, Instruction & Technology, Emerson Public Schools (NJ)

    Progress means nothing if families can’t follow it

    Even with teachers aligned and systems in place, Emerson found that family understanding was key to making SBG truly work. While the district initially aimed to move away from traditional letter grades altogether, ongoing conversations with parents led to a reevaluation. By listening to families and adapting their approach, Emerson has found a middle ground, one that preserves the value of standards-based learning while making progress easier for families to understand.

    “Five years ago, I would have said, ‘We will be totally done with points. We will never see a letter grade again. It’s going to be so much better.’ But talking to parent after parent has led us to this compromised place where we are going to try it a little bit differently to give the parents what they need in order to understand us, but also keep that proficiency, competency, mastery information that we feel is so valuable as educators.” 
    –Alice Opperman, Director of Curriculum, Instruction & Technology, Emerson Public Schools (NJ)

    Still evolving, and that’s the point

    For Jessica and Alice, grading reform has never been about arriving at a perfect system (and certainly not achieving it overnight). It’s been about listening, learning, and improving year after year. Their message to other school leaders? There’s no one “right” way to do SBG, but there is a thoughtful, collaborative way forward.

    Emerson’s story shows that when you prioritize clarity, trust your teachers, and bring families into the conversation, the result isn’t just a better report card. 

    It’s a better learning experience for everyone involved.

    How the right grading solution supports Emerson’s SBG efforts

    Emerson put in the work, but sustaining grading reform at scale is nearly impossible without the right tools to support teachers, track progress, and communicate effectively with families.

    • Streamlined standards
      Focus on the standards that matter most by building custom, district-aligned grading scales. The right platform makes it easy to group standards, apply scoring criteria, and visualize mastery over time.
    • Transparent communication
      Share clear, standards-aligned feedback with families directly in a platform. Teachers can provide timely updates, rubric explanations, and progress reports, all in one place.
    • Flexible grading tools
      Support teacher autonomy with multiple assessment types and scoring options, including points, rubrics, and mastery levels, all aligned to district-defined standards.

    For more news on grading reform, visit eSN’s Educational Leadership hub.



  • What the experience of neurodivergent PhD students teaches us, and why it makes me angry


    by Inger Mewburn

    Recently, some colleagues and I released a paper about the experiences of neurodivergent PhD students. It’s a systematic review of the literature to date, which is currently under review, but available via pre-print here.

    Doing this paper was an exercise in mixed feelings. It was an absolute joy to work with my colleagues, who knew far more about this topic than me and taught me (finally!) how to do a proper systematic review using Covidence. Thanks to Dr Diana Tan, Dr Chris Edwards, Associate Professor Kate Simpson, Associate Professor Amanda A Webster and Professor Charlotte Brownlow (who got the band together in the first place).

    But reading each and every paper published about neurodivergent PhD students provoked strong feelings of rage and frustration. (These feelings only increased, with a tinge of fear added in, when I read of plans for the US health department to make a ‘list’ of autistic people?! Reading what is going on there is frankly terrifying – solidarity to all.) We all know what needs to be done to make research degrees more accessible. Make expectations explicit. Create flexible policies. Value diverse thinking styles. Implement Universal Design Principles… These suggestions appear in report after report, I’ve ranted on the blog here and here, yet real change remains frustratingly elusive. So why don’t these great ideas become reality? Here’s some thoughts on barriers that keep neurodivergent-friendly changes from taking hold.

    The myth of meritocracy

    Academia clings to the fiction that the current system rewards pure intellectual merit. Acknowledging the need for accessibility requires admitting that the playing field isn’t level. Many senior academics succeeded in the current system and genuinely believe “if I could do it, anyone can… if they work hard enough”. They are either 1) failing to recognise their neurotypical privilege, or 2) not acknowledging the cost of masking their own neurodivergence (I’ll get to this in a moment).

    I’ve talked to many academics about things we could do – like getting rid of the dissertation – but too many of us are secretly proud of our own trauma. The harshness of the PhD has been compared to a badge of honour that we wear proudly – and expect others to earn.

    Resource scarcity (real and perceived)

    Universities often respond to suggestions about increased accessibility measures with budget concerns. The vibe is often: “We’d love to offer more support, but who will pay for it?”. However, many accommodations (like flexible deadlines or allowing students to work remotely) cost little, or even nothing. Frequently, the real issue isn’t resources but priorities of the powerful. There’s no denying universities (in Australia, and elsewhere) are often cash strapped. The academic hunger games are real. However, in the fight for resources, power dynamics dictate who gets fed and who goes without.

    I wish we would just be honest about our choices – some people in universities still have huge travel budgets. The catering at some events is still pretty good. Some people seem to avoid every hiring freeze. There are consistent patterns in how resources are distributed. It’s the gaslighting that makes me angry. If we really want to, we can do most things. We have to want to do something about this.

    Administrative inertia

    Changing established processes in a university is like turning a battleship with a canoe paddle. Approval pathways are long and winding. For example, altering a single line in the research award rules at ANU requires approval from parliament (yes – the politicians actually have to get together and vote. Luckily we are not as dysfunctional in Australia as other places… yet). By the time a solution is implemented, the student who needed it has likely graduated – or dropped out. This creates a vicious cycle where the support staff, who see multiple generations of students suffer the same way, can get burned out and stop pushing for change.

    The individualisation of disability

    Universities tend to treat neurodivergence as an individual problem requiring individual accommodations rather than recognising systemic barriers. This puts the burden on students to disclose, request support, and advocate for themselves – precisely the executive function and communication challenges many neurodivergent students struggle with.

    It’s akin to building a university with only stairs, then offering individual students a piggyback ride instead of installing ramps. I’ve met plenty of people who simply get so exhausted they don’t bother applying for the accommodations they desperately need, and then end up dropping out anyway.

    Fear of lowering ‘standards’

    Perhaps the most insidious barrier is the mistaken belief that accommodations somehow “lower standards.” I’ve heard academics worrying that flexible deadlines will “give some students an unfair advantage” or that making expectations explicit somehow “spoon-feeds” students.

    The fear of “lowering standards” becomes even more puzzling when you look at how PhD requirements have inflated over time. Anyone who’s spent time in university archives knows that doctoral standards aren’t fixed – they’re constantly evolving. Pull a dissertation from the 1950s or 60s off the shelf and you’ll likely find something remarkably slim compared to today’s tomes. Many were essentially extended literature reviews with modest empirical components. Today, we expect multiple studies, theoretical innovations, methodological sophistication, and immediate publishability – all while completing within strict time limits on ever-shrinking funding.

    The standards haven’t just increased; they’ve multiplied. So when universities resist accommodations that might “compromise standards,” we should ask: which era’s standards are we protecting? Certainly not the ones under which most people supervising today had to meet. The irony is that by making the PhD more accessible to neurodivergent thinkers, we might actually be raising standards – allowing truly innovative minds to contribute rather than filtering them out through irrelevant barriers like arbitrary deadlines or neurotypical communication expectations. The real threat to academic standards isn’t accommodation – it’s the loss of brilliant, unconventional thinkers who could push knowledge boundaries in ways we haven’t yet imagined.

    Unexamined neurodiversity among supervisors

    Perhaps one of the most overlooked barriers is that many supervisors are themselves neurodivergent but don’t recognise it or acknowledge what’s going on with them! In fact, since starting this research, I’ve formed a private view that you almost can’t succeed in this profession without at least a little neurospicey.

    Academia tends to attract deep thinkers with intense focus on specific topics – traits often associated with autism (‘special interests’ anyone?). The contemporary university is constantly in crisis, which some people with ADHD can find provides the stimulation they need to get things done! Yet many supervisors have succeeded through decades of masking and compensating, often at great personal cost.

    The problem is not the neurodivergence or the supervisor – it’s how the unexamined neurodivergence becomes embedded in practice, underpinned by an expectation that their students should function exactly as they do, complete with the same struggles they’ve internalised as “normal.”

    I want to hold on to this idea for a moment, because maybe you recognise some of these supervisors:

    • The Hyperfocuser: Expects students to match their pattern of intense, extended work sessions. This supervisor regularly works through weekends on research “when inspiration strikes,” sending emails at 2am and expecting quick responses. They struggle to understand when students need breaks or maintain strict work boundaries, viewing it as “lack of passion.” Conveniently, they have ignored those couple of episodes of burn out, never considering their own work pattern might reflect ADHD or autistic hyper-focus, rather than superior work ethic.
    • The Process Pedant: Requires students to submit written work in highly specific formats with rigid attachment to particular reference styles, document formatting, and organisational structures. Gets disproportionately distressed by minor variations from their preferred system, focusing on these details over content, such that their feedback primarily addresses structural issues rather than ideas. I get more complaints about this than almost any other kind of supervision style – it’s so demoralising to be constantly corrected and not have someone genuinely engage with your work.
    • The Talker: Excels in spontaneous verbal feedback but rarely provides written comments. Expects students to take notes during rapid-fire conversational feedback, remembering all key points. They tend to tell you to do the same thing over and over, or forget what they have said and recommend something completely different next time. Can get mad when questioned over inconsistencies – suggesting you have a problem with listening. This supervisor never considers that their preference for verbal communication might reflect their own neurodivergent processing style, which isn’t universal. Couple this with a poor memory and the frustration of students reaches critical levels. (I confess, being a Talker is definitely my weakness as a supervisor – I warn my students in advance and make an effort to be open to criticism about it!).
    • The Context-Switching Avoider: Schedules all student meetings on a single day of the week, keeping other days “sacred” for uninterrupted research. Becomes noticeably agitated when asked to accommodate a meeting outside this structure, even for urgent matters. Instead of recognising their own need for predictable routines and difficulty with transitions (common in many forms of neurodivergence), they frame this as “proper time management” that students should always emulate. Students who have caring responsibilities suffer the most with this kind of inflexible relationship.
    • The Novelty-Chaser: Constantly introduces new theories, methodologies, or research directions in supervision meetings. Gets visibly excited about fresh perspectives and encourages students to incorporate them into already-developed projects. May send students a stream of articles or ideas completely tangential to their core research, expecting them to pivot accordingly. Never recognises that their difficulty maintaining focus on a single pathway to completion might reflect ADHD-related novelty-seeking. Students learn either 1) to chase butterflies and make little progress or 2) to nod politely at new suggestions while quietly continuing on their original track. The first kind of reaction can lead to a dangerous lack of progress; the second can lead to real friction because, from the supervisor’s point of view, the student ‘never listens’. No one is happy in these set-ups, believe me.
    • The Theoretical Purist: Has devoted their career to a particular theoretical framework or methodology and expects all their students to work strictly within these boundaries. Dismisses alternative approaches as “methodologically unsound” or “lacking theoretical rigour” without substantive engagement. Becomes noticeably uncomfortable when students bring in cross-disciplinary perspectives, responding with increasingly rigid defences of their preferred approach. Fails to recognise their intense attachment to specific knowledge systems and resistance to integrating new perspectives may reflect autistic patterns of specialised interests, or even difficulty with cognitive flexibility. Students learn to frame all their ideas within the supervisor’s preferred language, even when doing so limits their research potential.

    Now that I know what I am looking for, I see these supervisory dynamics ALL THE TIME. Add in whatever dash of neurospiciness is going on with you and all kinds of misunderstandings and hurt feelings result … Again – the problem is not the neurodivergence of any one person – it’s the lack of self-reflection, coupled with the power dynamics that can make things toxic.

    These barriers aren’t insurmountable, but honestly, after decades in this profession, I’m not holding my breath for institutional enlightenment. Universities move at the pace of bureaucracy after all.

    So what do we do? If you’re neurodivergent, find your people – that informal network who “get it” will save your sanity more than any official university policy. If you’re a supervisor, maybe take a good hard look at your own quirky work habits before deciding your student is “difficult.” And if you’re in university management, please, for the love of research, let’s work on not making neurodivergent students jump through flaming bureaucratic hoops to get basic support.

    The PhD doesn’t need to be a traumatic hazing ritual we inflict because “that’s how it was in my day.” It’s 2025. Time to admit that diverse brains make for better research. And for goodness sake, don’t put anyone on a damn list, ok?

    AI disclaimer: This post was developed with Claude from Anthropic because I’m so busy with the burning trash fire that is 2025 it would not have happened otherwise. I provided the concept, core ideas, detailed content, and personal viewpoint while Claude helped organise and refine the text. We iteratively revised the content together to ensure it maintained my voice and perspective. The final post represents my authentic thoughts and experiences, with Claude serving as an editorial assistant and sounding board.

    This blog was first published on Inger Mewburn’s legendary website The Thesis Whisperer on 1 May 2025. It is reproduced with permission here.

    Professor Inger Mewburn is the Director of Researcher Development at The Australian National University, where she oversees professional development workshops and programs for all ANU researchers. Aside from creating new posts on the Thesis Whisperer blog (www.thesiswhisperer.com), she writes scholarly papers and books about research education, with a special interest in post-PhD employability, research communications and neurodivergence.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

  • Becoming a professional services researcher in HE – making the train tracks converge

    Becoming a professional services researcher in HE – making the train tracks converge

    by Charlotte Verney

    This blog builds on my presentation at the BERA ECR Conference 2024: at crossroads of becoming. It represents my personal reflections of working in UK higher education (HE) professional services roles and simultaneously gaining research experience through a Masters and Professional Doctorate in Education (EdD).

    Professional service roles within UK HE include recognised professionals from other industries (eg human resources, finance, IT) and HE-specific roles such as academic quality, research support and student administration. Unlike academic staff, professional services staff are not typically required, or expected, to undertake research, yet many do. My own experience spans roles within six universities over 18 years delivering administration and policy that supports learning, teaching and students.

    Traversing two tracks

    In 2016, at an SRHE Newer Researchers event, I was asked to identify a metaphor to reflect my experience as a practitioner researcher. I chose the image of two train tracks, as I have often felt that I have been on two development tracks simultaneously – one building professional experience and expertise, the other developing research skills and experience. These tracks ran in parallel, but never at the same pace, occasionally meeting on a shared project or assignment, and then continuing on their separate routes. I use this metaphor to share my experiences, and three phases, of becoming a professional services researcher.

    Becoming research-informed: accelerating and expanding my professional track

    The first phase was filled with opportunities; on my professional track I gained a breadth of experience, a toolkit of management and leadership skills, a portfolio of successful projects and built a strong network through professional associations (eg AHEP). After three years, I started my research track with a masters in international higher education. Studying felt separate to my day job in academic quality and policy, but the assignments gave me opportunities to bring the tracks together, using research and theory to inform my practice – for example, exploring theoretical literature underpinning approaches to assessment whilst my institution was revising its own approach to assessing resits. I felt like a research-informed professional, and this positively impacted my professional work, accelerating and expanding my experience.

    Becoming a doctoral researcher: long distance, slow speed

    The second phase was more challenging. My doctoral journey was long, taking 9 years with two breaks. Like many part-time doctoral students, I struggled with balance and support, with unexpected personal and professional pressures, and I found it unsettling to simultaneously be an expert in my professional context yet a novice in research. I feared failure, and damaging my professional credibility as I found my voice in a research space.

    What kept me going, balancing the two tracks, was building my own research support network and my researcher identity. Some of the ways I did this were through Zoom calls with EdD peers for moral support, joining the Society for Research into Higher Education to find my place in the research field, and joining the editorial team of a practitioner journal to build my confidence in academic writing.

    Becoming a professional services researcher: making the tracks converge

    Having completed my doctorate in 2022, I’m now actively trying to bring my professional and research tracks together. Without a roadmap, I’ve started in my comfort zone, sharing my doctoral research in ‘safe’ policy and practitioner spaces, where I thought my findings could have the biggest impact. I collaborated with EdD peers to tackle the daunting task of publishing my first article. I’ve drawn on my existing professional networks (ARC, JISC, QAA) to establish new research initiatives related to my current practice in managing assessment. I’ve made connections with fellow professional services researchers along my journey, and have established an online network to bring us together.

    Key takeaways for professional services researchers

    Bringing my professional experience and research tracks together has not been without challenges, but I am really positive about my journey so far, and for the potential impact professional services researchers could have on policy and practice in higher education. If you are on your own journey of becoming a professional services researcher, my advice is:

    • Make time for activities that build your research identity
    • Find collaborators and a community
    • Use your professional experience and networks
    • It’s challenging, but rewarding, so keep going!

    Charlotte Verney is Head of Assessment at the University of Bristol. Charlotte is an early career researcher in higher education research and a leader within higher education professional services. Her primary research interests are in the changing nature of administrative work within universities, using research approaches to solve professional problems in higher education management, and using creative and collaborative approaches to research. Charlotte advocates for making the academic research space more inclusive for early career and professional services researchers. She is co-convenor of the SRHE Newer Researchers Network and has established an online network for higher education professional services staff engaged with research.


  • Gaps in sustainability literacy in non-STEM higher education programmes

    Gaps in sustainability literacy in non-STEM higher education programmes

    by Erika Kalocsányiová and Rania Hassan

    Promoting sustainability literacy in higher education is crucial for deepening students’ pro-environmental behaviour and mindset (Buckler & Creech, 2014; UNESCO, 1997), while also fostering social transformation by embedding sustainability at the core of the student experience. In 2022, our group received an SRHE Scoping Award to synthesise the literature on the development, teaching, and assessment of sustainability literacy in non-STEM higher education programmes. We conducted a multilingual systematic review of post-2010 publications from the European Higher Education Area (EHEA), with the results summarised in Kalocsányiová et al (2024).

    Out of 6,161 articles that we identified as potentially relevant, 92 studies met the inclusion criteria and are reviewed in the report. These studies involved a total of 11,790 participants and assessed 9,992 university programmes and courses. Our results suggest a significant growth in research interest in sustainability in non-STEM fields since 2017, with 75 studies published compared to just 17 in the preceding seven years. Our analysis also showed that Spain, the United Kingdom, Germany, Turkey, and Austria had the highest concentration of publications, with 25 EHEA countries represented in total.

    The 92 reviewed studies were characterised by high methodological diversity: nearly half employed quantitative methods (47%), followed by qualitative studies (40%) and mixed methods research (13%). Curriculum assessments using quantitative content analysis of degree and course descriptors were among the most common study types, followed by surveys and intervention or pilot studies. Curriculum assessments provided a systematic way to evaluate the presence or absence of sustainability concepts within curricula at both single HE institutions and in comparative frameworks. However, they often captured only surface-level indications of sustainability integration into undergraduate and postgraduate programmes, without providing evidence on actual implementation and/or the effectiveness of different initiatives. Qualitative methods, including descriptive case studies and interviews that focused on barriers, challenges, implementation strategies, and the acceptability of new sustainability literacy initiatives, made up 40% of the current research. Mixed methods studies accounted for 13% of the reviewed articles, often applying multiple assessment tools simultaneously, including quantitative sustainability competency assessment instruments combined with open-ended interviews or learning journals.

    In terms of disciplines, Economics, Business, and Administrative Studies held the largest share of reviewed studies (26%), followed by Education (23%). Multiple disciplines accounted for 22% of the reviewed publications, reflecting the interconnected nature of sustainability. Finance and Accounting contributed only 6%, indicating a need for further research. Similarly, Language and Linguistics, Mass Communication and Documentation, and Social Sciences collectively represented only 12% of the reviewed studies. Creative Arts and Design, with just 2%, was also a niche area. Although caution should be exercised when drawing conclusions from these results, they highlight the need for more research within the underrepresented disciplines. This in turn can help promote awareness among non-STEM students, stimulate ethical discussions on the cultural dimensions of sustainability, and encourage creative solutions through interdisciplinary dialogue.

    Regarding factors and themes explored, the studies focused primarily on the acquisition of sustainability knowledge and competencies (27%), curriculum assessment (23%), challenges and barriers to sustainability integration (10%), implementation and evaluation research (10%), changes in students’ mindset (9%), key competences in sustainability literacy (5%), and active student participation in Education for Sustainable Development (5%). In terms of studies discussing acquisition processes, key focus areas included the teaching of Sustainable Development Goals, awareness of macro-sustainability trends, and knowledge of local sustainability issues. Studies on sustainability competencies focused on systems thinking, critical thinking, problem-solving skills, ethical awareness, interdisciplinary knowledge, global awareness and citizenship, communication skills, and an action-oriented mindset. These competencies and knowledge, which are generally considered crucial for addressing the multifaceted challenges of sustainability (Wiek et al, 2011), were often introduced to non-STEM students through stand-alone lectures, workshops, or pilot studies involving new cross-disciplinary curricula.

    Our review also highlighted a broad range of pedagogical approaches adopted for sustainability teaching and learning within non-STEM disciplines. These covered case and project-based learning, experiential learning methods, problem-based learning, collaborative learning, reflection groups, pedagogical dialogue, flipped classroom approaches, game-based learning, and service learning. While there is strong research interest in the documentation and implementation of these pedagogical approaches, few studies have so far attempted to assess learning outcomes, particularly regarding discipline-specific sustainability expertise and real-world problem-solving skills.

    Many of the reviewed studies relied on single-method approaches, meaning valuable insights into sustainability-focused teaching and learning may have been missed. For instance, studies often failed to capture the complexities surrounding sustainability integration into non-STEM programmes, either by presenting positivist results that require further contextualisation or by offering rich context limited to a single course or study group, which cannot be generalised. The assessment tools currently used also seemed to lack consistency, making it difficult to compare outcomes across programmes and institutions to promote best practices. More robust evaluation designs, such as longitudinal studies, controlled intervention studies, and mixed methods approaches (Gopalan et al, 2020; Ponce & Pagán-Maldonado, 2015), are needed to explore and demonstrate the pedagogical effectiveness of various sustainability literacy initiatives in non-STEM disciplines and their impact on student outcomes and societal change.

    In summary, our review suggests good progress in integrating sustainability knowledge and competencies into some core non-STEM disciplines, while also highlighting gaps. Based on the results we have formulated some questions that may help steer future research:

    • Are there systemic barriers hindering the integration of sustainability themes, challenges and competencies into specific non-STEM fields?
    • Are certain disciplines receiving disproportionate research attention at the expense of others?
    • How do different pedagogical approaches compare in terms of effectiveness for fostering sustainability literacy in and across HE fields?
    • What new educational practices are emerging, and how can we fairly assess them and evidence their benefits for students and the environment?

    We would also like to encourage other researchers to engage with knowledge produced in a variety of languages and educational contexts. The multilingual search and screening strategy implemented in our review enabled us to identify and retrieve evidence from 25 EHEA countries and 24 non-English publications. If reviews of education research remain monolingual (English-only), important findings and insights will go unnoticed, hindering knowledge exchange, creativity, and innovation in HE.

    Dr. Erika Kalocsányiová is a Senior Research Fellow with the Institute for Lifecourse Development at the University of Greenwich, with research centering on public health and sustainability communication, migration and multilingualism, refugee integration, and the implications of these areas for higher education policies.

    Rania Hassan is a PhD student and a research assistant at the University of Greenwich. Her research centres on exploring enterprise development activities within emerging economies. As a multidisciplinary and interdisciplinary researcher, Rania is passionate about advancing academia and promoting knowledge exchange in higher education.
