If you’ve ever left a lecture thinking “That didn’t land the way I hoped” (or “That went surprisingly well – why?”), you’ve already stepped into reflective teaching. The question is whether reflection remains a private afterthought … or becomes a deliberate practice that improves teaching in real time and shapes what we do next.
In ‘Advancing pedagogical excellence through reflective teaching practice and adaptation’, I explored reflective teaching practice (RTP) in a first-year chemistry context at a New Zealand university, asking a deceptively simple question: How do lecturers’ teaching philosophies shape what they actually do to reflect on and adapt their teaching?
What the study did
I interviewed eight chemistry lecturers using semi-structured interviews, then used thematic analysis to examine two connected strands: (1) teaching concepts/philosophy and (2) lecturer-student interaction. The paper distinguishes between:
Reflective Teaching (RT): the broader ongoing process of critically examining your teaching.
Reflective Teaching Practice (RTP): the day-to-day strategies (journals, feedback loops, peer dialogue, etc) that make reflection actionable.
Reflection is uneven and often unsystematic
A striking finding is that not all lecturers consistently engaged in reflective practices, and there wasn’t clear evidence of a shared, structured reflective culture across the teaching team. Some lecturers could articulate a teaching philosophy, but this didn’t always translate into a repeatable reflection cycle (before, during, and after teaching). I framed this using Dewey and Schön’s well-known reflection stages:
Reflection-for-action (before teaching): planning with intention
Reflection-in-action (during teaching): adjusting as it happens
Reflection-on-action (after teaching): reviewing to improve next time
Even where lecturers were clearly committed and experienced, reflection could still become fragmented, more like “minor tweaks” than a consistent, evidence-informed practice.
The real engine of reflection: lecturer-student interaction
Interaction isn’t just a teaching technique – it’s a reflection tool.
Student questions, live confusion, moments of silence, a sudden “Ohhh!” – these are data. In the study, the clearest examples of reflection happening during teaching came from lecturers who intentionally built in interaction (eg questioning strategies, pausing for problem-solving).
One example stands out. Denise’s in-class quiz is described as the only instance that embodied all three reflection components: using student responses to gauge understanding, adapting support during the activity, and feeding insights forward into later planning.
Why this matters right now in UK HE
UK higher education is navigating increasing diversity in student backgrounds, expectations, and prior learning alongside sharper scrutiny of teaching quality and inclusion. In that context, reflective teaching isn’t “nice-to-have CPD”; it’s a way of ensuring our teaching practices keep pace with learners’ needs, not just disciplinary content.
The paper doesn’t argue for abandoning lectures. Instead, it shows how reflective practice can help lecturers adapt within lecture-based structures, especially through purposeful interaction that shifts students from passive listening toward more active/constructive engagement (drawing on engagement ideas such as ICAP).
Three “try this tomorrow” reflective moves (small, practical, high impact)
Plan one interaction checkpoint (not ten). Add a single moment where you must learn something from students (a hinge question, poll, mini-problem, or “explain it to a partner”). Use it as reflection-for-action.
Name your in-the-moment adjustment. When you pivot (slow down, re-explain, swap an example), briefly acknowledge it: “I’m noticing this is sticky – let’s try a different route.” That’s reflection-in-action made visible.
End with one evidence-based note to self. Not “Went fine.” Instead: “35% missed X in the quiz – next time: do Y before Z.” That’s reflection-on-action you can actually reuse.
Questions to spark conversation (for you or your teaching team)
Where does your teaching philosophy show up most clearly: content coverage, student confidence, relevance, or interaction?
Which “data” do you trust most (NSS/module evaluation, informal comments, in-class responses, attainment patterns), and why?
If your programme is team-taught, what would a shared reflective framework look like in practice (so reflection isn’t isolated and inconsistent)?
If reflective teaching is the intention, this article is the nudge: make reflection visible, structured, and interaction-led, so adaptation becomes a habit, not a heroic one-off.
Dr Yetunde Kolajo is a Student Success Research Associate at the University of Kent. Her research examines pedagogical decision-making in higher education, with a focus on students’ learning experiences, critical thinking and decolonising pedagogies. Drawing on reflective teaching practice, she examines how inclusive and reflective teaching frameworks can enhance student success.
Teacher evaluations have been the subject of debate for decades. Breakthroughs have been attempted, but rarely sustained. Researchers have learned that context, transparency, and autonomy matter. What’s been missing is technology that enhances these at scale inside the evaluation process–not around it.
As an edtech executive in the AI era, I see exciting possibilities to bring new technology to bear on these factors in the longstanding dilemma of observing and rating teacher effectiveness.
At the most fundamental level, the goals are simple, just as they are in other professions: provide accountability, celebrate areas of strong performance, and identify where improvement is needed. However, K-12 education is a uniquely visible and important industry. Between 2000 and 2015, quality control in K-12 education became more complex, with states, foundations, and federal policy all shaping the definition and measurement of a “proficient” teacher.
For instance, today’s observation cycle might include pre- and post-observation conferences plus scheduled and unscheduled classroom visits. Due to the potential for bias in personal observation, more weight has been given to student achievement, but after critics highlighted problems with measuring teacher performance via standardized test scores, additional metrics and artifacts were included as well.
All of these changes have resulted in administrators spending more time on observation and evaluation, followed by copying notes between systems and drafting comments–rather than on timely, specific feedback that actually changes practice. “Even when I use Gemini or ChatGPT, I still spend 45 minutes rewriting to fit the district rubric,” one administrator noted.
“When I think about the evaluation landscape, two challenges rise to the surface,” said Dr. Quintin Shepherd, superintendent at Pflugerville Independent School District in Texas. “The first is the overwhelming volume of information evaluators must gather, interpret, and synthesize. The second is the persistent perception among teachers that evaluation is something being done to them rather than something being done for them. Both challenges point in the same direction: the need for a resource that gives evaluators more capacity and teachers more clarity, immediacy, and ownership. This is where AI becomes essential.”
What’s at stake
School leaders are under tremendous pressure. Time and resources are tight. Achieving benchmarks is non-negotiable. There’s plenty of data available to identify patterns and understand what’s working–but analyzing it is not easy when the data is housed in multiple platforms that may not interface with one another. Generic AI tools haven’t solved this.
For teachers, professional development opportunities abound, and student data is readily available. But often they don’t receive adequate instructional mentoring to ideate and try out new strategies.
Districts that have experimented with AI to provide automated feedback of transcribed recordings of instruction have found limited impact on teaching practices. Teachers report skepticism that the evolving tech tools are able to accurately assess what is happening in their classrooms. Recent randomized controlled trials show that automated feedback can move specific practices when teachers engage with it. But that’s exactly the challenge: Engagement is optional. Evaluations are not.
Teachers whose observations and evaluations are compromised or whose growth is stymied by lost opportunities for mentoring may lose out financially. For example, in Texas, the 2025-26 school year is the data capture period for the Teacher Incentive Allotment. This means fair and objective reviews are more important than ever for educators’ future earning potential.
For all of these reasons, the next wave of innovation has to live inside the required evaluation cycle, not off to the side as another “nice-to-have” tool.
Streamlining the process
My background at edtech companies has shown me how eager school leaders are to make data-informed decisions. But I know from countless conversations with administrators that they did not enter the education field to crunch numbers. They are motivated by seeing students thrive.
The breakthrough we need now is an AI-powered workspace that sits inside the evaluation system. Shepherd would like to see “AI that quietly assists with continuous evidence collection not through surveillance, but pattern recognition. It might analyze lesson materials for cognitive rigor, scan student work products to detect growth, or help teachers tag artifacts connected to standards.”
We have the technology to create a collaborative workspace that can be mapped to the district’s framework and used by administrators, coaches, support teams, and educators to capture notes from observations, link them to goals, provide guidance, share lesson artifacts, engage in feedback discussions, and track growth across cycles. After participating in a pilot of one such collaborative workspace, an evaluator said that “for the first time, I wasn’t rewriting my notes to make them fit the rubric. The system kept the feedback clear and instructional instead of just compliance-based.”
As a superintendent, Shepherd looks forward to AI support for helping make sense of complexity. “Evaluators juggle enormous qualitative loads: classroom culture, student engagement, instructional clarity, differentiation, formative assessment, and more. AI can act as a thinking partner, organizing trends, highlighting possible connections, identifying where to probe deeper, or offering research-based framing for feedback.”
The evaluation process will always be scrutinized, but what must change is whether it continues to drain time and trust or becomes a catalyst for better teaching. Shepherd expects the pace of adoption to pick up speed as the benefits for educators become clear: “Teachers will have access to immediate feedback loops and tools that help them analyze student work, reconsider lesson structures, or reflect on pacing and questioning. This strengthens professional agency and shifts evaluation from a compliance ritual to a growth process.”
Real leadership means moving beyond outdated processes and redesigning evaluation to center evidence, clarity, and authentic feedback. When evaluation stops being something to get through and becomes something that improves practice, we will finally see technology drive better teaching and learning.
Jena Draper, RefynED
Jena Draper is the CEO and founder of RefynED and a proven edtech entrepreneur who previously built CatchOn and led it through two successful acquisitions after reshaping how schools use data for instructional decision-making. She is now applying that same foresight to reinvent teacher evaluation as a catalyst for educator growth rather than compliance. Draper is recognized for turning systemic challenges into category-defining innovation that improves instructional practice at scale.
Although the Next Generation Science Standards (NGSS) were released more than a decade ago, adoption of them varies widely in California. I have been to districts that have taken the standards and run with them, but others have been slow to get off the ground with NGSS–even 12 years after their release. In some cases, this is due to a lack of funding, a lack of staffing, or even administrators’ lack of understanding of the active, student-driven pedagogies championed by the NGSS.
Another potential challenge to implementing NGSS with fidelity comes from teachers’ and administrators’ epistemological beliefs–simply put, their beliefs about how people learn. Teachers bring so much of themselves to the classroom, and that means teaching in a way they think is going to help their students learn. So, it’s understandable that teachers who have found success with traditional lecture-based methods may be reluctant to embrace an inquiry-based approach. It also makes sense that administrators who are former teachers will expect classrooms to look the same as when they were teaching, which may mean students sitting in rows, facing the front, writing down notes.
Based on my experience as both a science educator and an administrator, here are some strategies for encouraging both teachers and administrators to embrace the NGSS.
For teachers: Shift expectations and embrace ‘organized chaos’
A helpful first step is to approach the NGSS not as a set of standards, but rather a set of performance expectations. Those expectations include all three dimensions of science learning: disciplinary core ideas (DCIs), science and engineering practices (SEPs), and cross-cutting concepts (CCCs). The DCIs reflect the things that students know, the SEPs reflect what students are doing, and the CCCs reflect how students think. This three-dimensional approach sets the stage for a more active, engaged learning environment where students construct their own understanding of science content knowledge.
To meet expectations laid out in the NGSS, teachers can start by modifying existing “recipe labs” to a more inquiry-based model that emphasizes student construction of knowledge. Resources like the NGSS-aligned digital curriculum from Kognity can simplify classroom implementation by giving teachers options for personalized instruction. Additionally, the Wonder of Science can help teachers integrate real-life phenomena into their NGSS-aligned labs, providing students with real-world contexts for building an understanding of the related scientific concepts. Lastly, Inquiry Hub offers open-source full-year curricula that can also aid teachers with refining their labs, classroom activities, and assessments.
For these updated labs to serve their purpose, teachers will need to reframe classroom management expectations to focus on student engagement and discussion. This may mean embracing what I call “organized chaos.” Over time, teachers will build a sense of efficacy through small successes, whether that’s spotting a student constructing their own knowledge or documenting an increased depth of knowledge in an entire class. The objective is to build on student understanding across the entire classroom, which teachers can do with much more confidence if they know that their administrators support them.
For administrators: Rethink evaluations and offer support
A recent survey found that 59 percent of administrators in California, where I work, understood how to support teachers with implementing the NGSS. Despite this, some administrators may need to recalibrate their expectations of what they’ll see when they observe classrooms. What they might see is organized chaos happening: students out of their seats, students talking, students engaged in all different sorts of activities. This is what NGSS-aligned learning looks like.
To provide a clear focus on student-centered learning indicators, they can revise observation rubrics to align with NGSS, or make their lives easier and use this one. As administrators track their teachers’ NGSS implementation, it helps to monitor their confidence levels. There will always be early implementers who take something new and run with it, and these educators can be inspiring models for those who are less eager to change.
The overall goal for administrators is to make classrooms safe spaces for experimentation and growth. The more administrators understand about the NGSS, the better they can support teachers in implementing it. They may not know all the details of the DCIs, SEPs, and CCCs, but they must accept that the NGSS require students to be more active, with the teacher acting as more of a facilitator and guide, rather than the keeper of all the knowledge.
Based on my experience in both teaching and administration roles, I can say that constructivist science classrooms may look and sound different–with more student talk, more questioning, and more chaos. By understanding these differences and supporting teachers through this transition, administrators ensure that all California students develop the deeper scientific thinking that NGSS was designed to foster.
Nancy Nasr, Ed.D., Santa Paula Unified School District
Nancy Nasr is a science educator and administrator at Santa Paula Unified School District. She can be reached at [email protected].
Recently, some colleagues and I released a paper about the experiences of neurodivergent PhD students. It’s a systematic review of the literature to date, which is currently under review, but available via pre-print here.
But reading each and every paper published about neurodivergent PhD students provoked strong feelings of rage and frustration. (These feelings only increased, with a tinge of fear added in, when I read of plans for the US health department to make a ‘list’ of autistic people?! Reading what is going on there is frankly terrifying – solidarity to all.) We all know what needs to be done to make research degrees more accessible. Make expectations explicit. Create flexible policies. Value diverse thinking styles. Implement Universal Design Principles… These suggestions appear in report after report (I’ve ranted on the blog here and here), yet real change remains frustratingly elusive. So why don’t these great ideas become reality? Here are some thoughts on the barriers that keep neurodivergent-friendly changes from taking hold.
The myth of meritocracy
Academia clings to the fiction that the current system rewards pure intellectual merit. Acknowledging the need for accessibility requires admitting that the playing field isn’t level. Many senior academics succeeded in the current system and genuinely believe “if I could do it, anyone can… if they work hard enough”. They are either 1) failing to recognise their neurotypical privilege, or 2) not acknowledging the cost of masking their own neurodivergence (I’ll get to this in a moment).
I’ve talked to many academics about things we could do – like getting rid of the dissertation – but too many of us are secretly proud of our own trauma. The harshness of the PhD has been compared to a badge of honour that we wear proudly – and expect others to earn.
Resource scarcity (real and perceived)
Universities often respond to suggestions about increased accessibility measures with budget concerns. The vibe is often: “We’d love to offer more support, but who will pay for it?” However, many accommodations (like flexible deadlines or allowing students to work remotely) cost little, or even nothing. Frequently, the real issue isn’t resources but the priorities of the powerful. There’s no denying universities (in Australia, and elsewhere) are often cash-strapped. The academic hunger games are real. However, in the fight for resources, power dynamics dictate who gets fed and who goes without.
I wish we would just be honest about our choices – some people in universities still have huge travel budgets. The catering at some events is still pretty good. Some people seem to avoid every hiring freeze. There are consistent patterns in how resources are distributed. It’s the gaslighting that makes me angry. If we really want to, we can do most things. We have to want to do something about this.
Administrative inertia
Changing established processes in a university is like turning a battleship with a canoe paddle. Approval pathways are long and winding. For example, altering a single line in the research award rules at ANU requires approval from parliament (yes – the politicians actually have to get together and vote. Luckily we are not as dysfunctional in Australia as other places… yet). By the time a solution is implemented, the student who needed it has likely graduated – or dropped out. This creates a vicious cycle where the support staff, who see multiple generations of students suffer the same way, can get burned out and stop pushing for change.
The individualisation of disability
Universities tend to treat neurodivergence as an individual problem requiring individual accommodations rather than recognising systemic barriers. This puts the burden on students to disclose, request support, and advocate for themselves – precisely the executive function and communication challenges many neurodivergent students struggle with.
It’s akin to building a university with only stairs, then offering individual students a piggyback ride instead of installing ramps. I’ve met plenty of people who simply get so exhausted they don’t bother applying for the accommodations they desperately need, and then end up dropping out anyway.
Fear of lowering ‘standards’
Perhaps the most insidious barrier is the mistaken belief that accommodations somehow “lower standards.” I’ve heard academics worrying that flexible deadlines will “give some students an unfair advantage” or that making expectations explicit somehow “spoon-feeds” students.
The fear of “lowering standards” becomes even more puzzling when you look at how PhD requirements have inflated over time. Anyone who’s spent time in university archives knows that doctoral standards aren’t fixed – they’re constantly evolving. Pull a dissertation from the 1950s or 60s off the shelf and you’ll likely find something remarkably slim compared to today’s tomes. Many were essentially extended literature reviews with modest empirical components. Today, we expect multiple studies, theoretical innovations, methodological sophistication, and immediate publishability – all while completing within strict time limits on ever-shrinking funding.
The standards haven’t just increased; they’ve multiplied. So when universities resist accommodations that might “compromise standards,” we should ask: which era’s standards are we protecting? Certainly not the ones that most people supervising today had to meet. The irony is that by making the PhD more accessible to neurodivergent thinkers, we might actually be raising standards – allowing truly innovative minds to contribute rather than filtering them out through irrelevant barriers like arbitrary deadlines or neurotypical communication expectations. The real threat to academic standards isn’t accommodation – it’s the loss of brilliant, unconventional thinkers who could push knowledge boundaries in ways we haven’t yet imagined.
Unexamined neurodiversity among supervisors
Perhaps one of the most overlooked barriers is that many supervisors are themselves neurodivergent but don’t recognise it or acknowledge what’s going on with them! In fact, since starting this research, I’ve formed a private view that you almost can’t succeed in this profession without being at least a little neurospicey.
Academia tends to attract deep thinkers with intense focus on specific topics – traits often associated with autism (‘special interests’ anyone?). The contemporary university is constantly in crisis, which some people with ADHD can find provides the stimulation they need to get things done! Yet many supervisors have succeeded through decades of masking and compensating, often at great personal cost.
The problem is not the neurodivergence or the supervisor – it’s how the unexamined neurodivergence becomes embedded in practice, underpinned by an expectation that their students should function exactly as they do, complete with the same struggles they’ve internalised as “normal.”
I want to hold on to this idea for a moment, because maybe you recognise some of these supervisors:
The Hyperfocuser: Expects students to match their pattern of intense, extended work sessions. This supervisor regularly works through weekends on research “when inspiration strikes,” sending emails at 2am and expecting quick responses. They struggle to understand when students need breaks or maintain strict work boundaries, viewing it as “lack of passion.” Conveniently, they have ignored those couple of episodes of burn out, never considering their own work pattern might reflect ADHD or autistic hyper-focus, rather than superior work ethic.
The Process Pedant: Requires students to submit written work in highly specific formats with rigid attachment to particular reference styles, document formatting, and organisational structures. Gets disproportionately distressed by minor variations from their preferred system, focusing on these details over content, such that their feedback primarily addresses structural issues rather than ideas. I get more complaints about this than almost any other kind of supervision style – it’s so demoralising to be constantly corrected and not have someone genuinely engage with your work.
The Talker: Excels in spontaneous verbal feedback but rarely provides written comments. Expects students to take notes during rapid-fire conversational feedback, remembering all key points. They tend to tell you to do the same thing over and over, or forget what they have said and recommend something completely different next time. Can get mad when questioned over inconsistencies – suggesting you have a problem with listening. This supervisor never considers that their preference for verbal communication might reflect their own neurodivergent processing style, which isn’t universal. Couple this with a poor memory and the frustration of students reaches critical. (I confess, being a Talker is definitely my weakness as a supervisor – I warn my students in advance and make an effort to be open to criticism about it!).
The Context-Switching Avoider: Schedules all student meetings on a single day of the week, keeping other days “sacred” for uninterrupted research. Becomes noticeably agitated when asked to accommodate a meeting outside this structure, even for urgent matters. Instead of recognising their own need for predictable routines and difficulty with transitions (common in many forms of neurodivergence), they frame this as “proper time management” that students should always emulate. Students who have caring responsibilities suffer the most with this kind of inflexible relationship.
The Novelty-Chaser: Constantly introduces new theories, methodologies, or research directions in supervision meetings. Gets visibly excited about fresh perspectives and encourages students to incorporate them into already-developed projects. May send students a stream of articles or ideas completely tangential to their core research, expecting them to pivot accordingly. Never recognises that their difficulty maintaining focus on a single pathway to completion might reflect ADHD-related novelty-seeking. Students learn either 1) to chase butterflies and make little progress or 2) to nod politely at new suggestions while quietly continuing on their original track. The first kind of reaction can lead to a dangerous lack of progress, the second reaction can lead to real friction because, from the supervisor’s point of view, the student ‘never listens’. NO one is happy in these set ups, believe me.
The Theoretical Purist: Has devoted their career to a particular theoretical framework or methodology and expects all their students to work strictly within these boundaries. Dismisses alternative approaches as “methodologically unsound” or “lacking theoretical rigour” without substantive engagement. Becomes noticeably uncomfortable when students bring in cross-disciplinary perspectives, responding with increasingly rigid defences of their preferred approach. Fails to recognise their intense attachment to specific knowledge systems and resistance to integrating new perspectives may reflect autistic patterns of specialised interests, or even difficulty with cognitive flexibility. Students learn to frame all their ideas within the supervisor’s preferred language, even when doing so limits their research potential.
Now that I know what I am looking for, I see these supervisory dynamics ALL THE TIME. Add in whatever dash of neuro-spiciness is going on with you and all kinds of misunderstandings and hurt feelings result … Again – the problem is not the neurodivergence of any one person – it’s the lack of self reflection, coupled with the power dynamics that can make things toxic.
These barriers aren’t insurmountable, but honestly, after decades in this profession, I’m not holding my breath for institutional enlightenment. Universities move at the pace of bureaucracy after all.
So what do we do? If you’re neurodivergent, find your people – that informal network who “get it” will save your sanity more than any official university policy. If you’re a supervisor, maybe take a good hard look at your own quirky work habits before deciding your student is “difficult.” And if you’re in university management, please, for the love of research, let’s work on not making neurodivergent students jump through flaming bureaucratic hoops to get basic support.
The PhD doesn’t need to be a traumatic hazing ritual we inflict because “that’s how it was in my day.” It’s 2025. Time to admit that diverse brains make for better research. And for goodness sake, don’t put anyone on a damn list, ok?
AI disclaimer: This post was developed with Claude from Anthropic because I’m so busy with the burning trash fire that is 2025 it would not have happened otherwise. I provided the concept, core ideas, detailed content, and personal viewpoint while Claude helped organise and refine the text. We iteratively revised the content together to ensure it maintained my voice and perspective. The final post represents my authentic thoughts and experiences, with Claude serving as an editorial assistant and sounding board.
This blog was first published on Inger Mewburn’s legendary website The Thesis Whisperer on 1 May 2025. It is reproduced with permission here.
Professor Inger Mewburn is the Director of Researcher Development at The Australian National University where she oversees professional development workshops and programs for all ANU researchers. Aside from creating new posts on the Thesis Whisperer blog (www.thesiswhisperer.com), she writes scholarly papers and books about research education, with a special interest in post PhD employability, research communications and neurodivergence.
This blog builds on my presentation at the BERA ECR Conference 2024: at crossroads of becoming. It represents my personal reflections of working in UK higher education (HE) professional services roles and simultaneously gaining research experience through a Masters and Professional Doctorate in Education (EdD).
Professional service roles within UK HE include recognised professionals from other industries (eg human resources, finance, IT) and HE-specific roles such as academic quality, research support and student administration. Unlike academic staff, professional services staff are not typically required, or expected, to undertake research, yet many do. My own experience spans roles within six universities over 18 years delivering administration and policy that supports learning, teaching and students.
Traversing two tracks
In 2016, at an SRHE Newer Researchers event, I was asked to identify a metaphor to reflect my experience as a practitioner researcher. I chose this image of two train tracks as I have often felt that I have been on two development tracks simultaneously – one building professional experience and expertise, the other developing research skills and experience. These tracks ran in parallel, but never at the same pace, occasionally meeting on a shared project or assignment, and then continuing on their separate routes. I use this metaphor to share my experiences, and three phases, of becoming a professional services researcher.
Becoming research-informed: accelerating and expanding my professional track
The first phase was filled with opportunities; on my professional track I gained a breadth of experience, a toolkit of management and leadership skills, a portfolio of successful projects and built a strong network through professional associations (eg AHEP). After three years, I started my research track with a masters in international higher education. Studying felt separate to my day job in academic quality and policy, but the assignments gave me opportunities to bring the tracks together, using research and theory to inform my practice – for example, exploring theoretical literature underpinning approaches to assessment whilst my institution was revising its own approach to assessing resits. I felt like a research-informed professional, and this positively impacted my professional work, accelerating and expanding my experience.
Becoming a doctoral researcher: long distance, slow speed
The second phase was more challenging. My doctoral journey was long, taking 9 years with two breaks. Like many part-time doctoral students, I struggled with balance and support, with unexpected personal and professional pressures, and I found it unsettling to simultaneously be an expert in my professional context yet a novice in research. I feared failure, and damaging my professional credibility as I found my voice in a research space.
What kept me going, balancing the two tracks, was building my own research support network and my researcher identity. Some of the ways I did this were through Zoom calls with EdD peers for moral support, joining the Society for Research into Higher Education to find my place in the research field, and joining the editorial team of a practitioner journal to build my confidence in academic writing.
Becoming a professional services researcher: making the tracks converge
Having completed my doctorate in 2022, I’m now actively trying to bring my professional and research tracks together. Without a roadmap, I’ve started in my comfort-zone, sharing my doctoral research in ‘safe’ policy and practitioner spaces, where I thought my findings could have the biggest impact. I collaborated with EdD peers to tackle the daunting task of publishing my first article. I’ve drawn on my existing professional networks (ARC, JISC, QAA) to establish new research initiatives related to my current practice in managing assessment. I’ve made connections with fellow professional services researchers along my journey, and have established an online network to bring us together.
Key takeaways for professional services researchers
Bringing my professional experience and research tracks together has not been without challenges, but I am really positive about my journey so far, and for the potential impact professional services researchers could have on policy and practice in higher education. If you are on your own journey of becoming a professional services researcher, my advice is:
Make time for activities that build your research identity
Find collaborators and a community
Use your professional experience and networks
It’s challenging, but rewarding, so keep going!
Charlotte Verney is Head of Assessment at the University of Bristol. Charlotte is an early career researcher in higher education research and a leader within higher education professional services. Her primary research interests are in the changing nature of administrative work within universities, using research approaches to solve professional problems in higher education management, and using creative and collaborative approaches to research. Charlotte advocates for making the academic research space more inclusive for early career and professional services researchers. She is co-convenor of the SRHE Newer Researchers Network and has established an online network for higher education professional services staff engaged with research.
Promoting sustainability literacy in higher education is crucial for deepening students’ pro-environmental behaviour and mindset (Buckler & Creech, 2014; UNESCO, 1997), while also fostering social transformation by embedding sustainability at the core of the student experience. In 2022, our group received an SRHE Scoping Award to synthesise the literature on the development, teaching, and assessment of sustainability literacy in non-STEM higher education programmes. We conducted a multilingual systematic review of post-2010 publications from the European Higher Education Area (EHEA), with the results summarised in Kalocsányiová et al (2024).
Out of 6,161 articles that we identified as potentially relevant, 92 studies met the inclusion criteria and are reviewed in the report. These studies involved a total of 11,790 participants and assessed 9,992 university programmes and courses. Our results suggest a significant growth in research interest in sustainability in non-STEM fields since 2017, with 75 studies published compared to just 17 in the preceding seven years. Our analysis also showed that Spain, the United Kingdom, Germany, Turkey, and Austria had the highest concentration of publications, with 25 EHEA countries represented in total.

The 92 reviewed studies were characterised by high methodological diversity: nearly half employed quantitative methods (47%), followed by qualitative studies (40%) and mixed methods research (13%). Curriculum assessments using quantitative content analysis of degree and course descriptors were among the most common study types, followed by surveys and intervention or pilot studies. Curriculum assessments provided a systematic way to evaluate the presence or absence of sustainability concepts within curricula, both at single HE institutions and in comparative frameworks. However, they often captured only surface-level indications of sustainability integration into undergraduate and postgraduate programmes, without providing evidence on actual implementation and/or the effectiveness of different initiatives. Qualitative methods, including descriptive case studies and interviews that focused on barriers, challenges, implementation strategies, and the acceptability of new sustainability literacy initiatives, made up 40% of the current research. Mixed methods studies accounted for 13% of the reviewed articles, often applying multiple assessment tools simultaneously, including quantitative sustainability competency assessment instruments combined with open-ended interviews or learning journals.
In terms of disciplines, Economics, Business, and Administrative Studies held the largest share of reviewed studies (26%), followed by Education (23%). Multiple disciplines accounted for 22% of the reviewed publications, reflecting the interconnected nature of sustainability. Finance and Accounting contributed only 6%, indicating a need for further research. Similarly, Language and Linguistics, Mass Communication and Documentation, and Social Sciences collectively represented only 12% of the reviewed studies. Creative Arts and Design, with just 2%, was also a niche area. Although caution should be exercised when drawing conclusions from these results, they highlight the need for more research within the underrepresented disciplines. This in turn can help promote awareness among non-STEM students, stimulate ethical discussions on the cultural dimensions of sustainability, and encourage creative solutions through interdisciplinary dialogue.
Regarding factors and themes explored, the studies focused primarily on the acquisition of sustainability knowledge and competencies (27%), curriculum assessment (23%), challenges and barriers to sustainability integration (10%), implementation and evaluation research (10%), changes in students’ mindset (9%), key competences in sustainability literacy (5%), and active student participation in Education for Sustainable Development (5%). In terms of studies discussing acquisition processes, key focus areas included the teaching of Sustainable Development Goals, awareness of macro-sustainability trends, and knowledge of local sustainability issues. Studies on sustainability competencies focussed on systems thinking, critical thinking, problem-solving skills, ethical awareness, interdisciplinary knowledge, global awareness and citizenship, communication skills, and action-oriented mindset. These competencies and knowledge, which are generally considered crucial for addressing the multifaceted challenges of sustainability (Wiek et al., 2011), were often introduced to non-STEM students through stand-alone lectures, workshops, or pilot studies involving new cross-disciplinary curricula.
Our review also highlighted a broad range of pedagogical approaches adopted for sustainability teaching and learning within non-STEM disciplines. These covered case and project-based learning, experiential learning methods, problem-based learning, collaborative learning, reflection groups, pedagogical dialogue, flipped classroom approaches, game-based learning, and service learning. While there is strong research interest in the documentation and implementation of these pedagogical approaches, few studies have so far attempted to assess learning outcomes, particularly regarding discipline-specific sustainability expertise and real-world problem-solving skills.
Many of the reviewed studies relied on single-method approaches, meaning valuable insights into sustainability-focused teaching and learning may have been missed. For instance, studies often failed to capture the complexities surrounding sustainability integration into non-STEM programs, either by presenting positivist results that require further contextualisation or by offering rich context limited to a single course or study group, which cannot be generalised. The assessment tools currently used also seemed to lack consistency, making it difficult to compare outcomes across programmes and institutions to promote best practices. More robust evaluation designs, such as longitudinal studies, controlled intervention studies, and mixed methods approaches (Gopalan et al, 2020; Ponce & Pagán-Maldonado, 2015), are needed to explore and demonstrate the pedagogical effectiveness of various sustainability literacy initiatives in non-STEM disciplines and their impact on student outcomes and societal change.
In summary, our review suggests good progress in integrating sustainability knowledge and competencies into some core non-STEM disciplines, while also highlighting gaps. Based on the results we have formulated some questions that may help steer future research:
Are there systemic barriers hindering the integration of sustainability themes, challenges and competencies into specific non-STEM fields?
Are certain disciplines receiving disproportionate research attention at the expense of others?
How do different pedagogical approaches compare in terms of effectiveness for fostering sustainability literacy in and across HE fields?
What new educational practices are emerging, and how can we fairly assess them and evidence their benefits for students and the environment?
We would also like to encourage other researchers to engage with knowledge produced in a variety of languages and educational contexts. The multilingual search and screening strategy implemented in our review enabled us to identify and retrieve evidence from 25 EHEA countries and 24 non-English publications. If reviews of education research remain monolingual (English-only), important findings and insights will go unnoticed, hindering knowledge exchange, creativity, and innovation in HE.
Dr. Erika Kalocsányiová is a Senior Research Fellow with the Institute for Lifecourse Development at the University of Greenwich, with research centering on public health and sustainability communication, migration and multilingualism, refugee integration, and the implications of these areas for higher education policies.
Rania Hassan is a PhD student and a research assistant at the University of Greenwich. Her research centres on exploring enterprise development activities within emerging economies. As a multidisciplinary and interdisciplinary researcher, Rania is passionate about advancing academia and promoting knowledge exchange in higher education.
This is the final Student Blogging Challenge post until we start again in October 2020.
We know many of you have been working on the challenge during very unsettled times. Well done!
It’s time to reflect and celebrate.
Week Seven Recap
Many students are enjoying sharing posts about emojis.
You can find all the submitted tasks here (or click on the week 7 box on the sidebar).
Here’s just a handful of excellent work we spotted recently:
Tushil from New Zealand made a unique basketball emoji and has a guessing game.
Lainie from Australia invites you to guess the movies from the emoji clues.
Olivia shared some interesting details about break time in New Zealand.
Eleanor from the USA made an emoji story using the prompts from Byrdseed.
Liv wrote an amazing poem about her school in New Zealand.
Ervins from Latvia came up with some emoji maths. Can you guess the answers?
Serge Galligani’s class in France came up with a fun idea. The students turned their faces into emojis. Can you guess the answers?
Reminders
The forms for weeks 1-6 are closed. If you’re catching up you can submit your tasks from weeks 1-6 in the week eight Google Form in this post. All Google Forms will close May 24th.
If you’d like to join us again for the next challenge in October 2020, make sure you’re on our mailing list. You’ll get an email in September when registrations are open. Otherwise, check this blog in September for all the news.
Follow The Edublogger — If you’re not already receiving the email newsletter from Edublogs, maybe you’d like to sign up? I send out an email regularly sharing the latest blog post.
Thank You
The Student Blogging Challenge is a real team effort. We couldn’t do it without the support of our wonderful volunteers.
Many of our volunteers and participants have worked on the challenge during difficult circumstances. We admire your efforts immensely.
To Sue Wyatt…
Miss W/Tasteach/Sue Wyatt works tirelessly behind the scenes to help our commenting team, support participants, and keep our spreadsheets up to date. We appreciate you, Sue!
To Marg Grosfield…
Marg is a special commenter who does a wonderful job behind the scenes helping with the spreadsheets. Marg generously volunteers her time to ensure everyone is looked after. Thank you, Marg!
To our commenters…
Another big thank you goes to our team of commenters who provided an authentic audience for our students and classes each week. Your comments really helped our students with their confidence and motivation. We hope you’ll return again as a commenter in October.
To our participants…
It has been fantastic to see such enthusiastic participation from our students and teachers despite difficult circumstances! I hope you’ve all learned a lot and made some connections.
Spread the word about the next Student Blogging Challenge!
Summary Of The Student Blogging Challenge
We had a good number of registrations for this Student Blogging Challenge; however, participation was naturally down due to over 90% of the world’s student population being affected by school closures.
Let’s look at STUBC by the numbers…
Number of registered individual students: 1043
Number of registered classes: 111
Number of countries represented: 24
Number of tasks submitted
These are the edited numbers after incorrect and duplicate URLs were removed.
What Makes A Quality Blog Post?
I hope you’ve learned a lot throughout the Student Blogging Challenge! Perhaps if you look back to your posts from a few weeks ago you can see that you’ve improved.
To wrap all our learning up, I invite you to take a look at this poster. It goes over some of the essential ingredients of a quality blog post. You might have your own ideas too!
This week there are 3 tasks to complete. If you don’t have time to write a post, please just spend 5 minutes completing our survey. We’d really appreciate it!
Option One: Write a post on your blog reflecting on your participation in the challenge.
These are the sorts of prompts you could answer in your post:
How many weeks of the challenge did you participate in?
How many posts did you write in the 8 week period?
How many comments did you receive from classmates, teachers, or other visitors?
Which post did you enjoy writing the most and why?
Which web tools did you use to show creativity on your blog?
What are your plans for your blog now? Will you keep posting?
Option Two: Ask a friend or family member who might not have read your blog to do an audit.
Send them your blog URL and ask them some questions. For example:
What were your first impressions of my blog?
What captured your attention?
What distracted you on the blog?
What suggestions can you give me to improve my blog?
Option Three: Your Blogging Plans
While the Student Blogging Challenge is coming to an end, we hope this is not the end of your blogging journey. We encourage you to keep blogging and connecting. To do this well, you might need a plan.
Write a post about how you plan to keep blogging:
Perhaps you’d like to publish a list of ideas you have for future blog post topics.
Or, you could ask your readers for suggestions on what they’d like you to write about on your blog. You could even run a poll.
Write about anyone you have connected with throughout the challenge that you’d like to stay in touch with. Are there any blogs you’ll keep reading and commenting on?
Blog post ideas for students:
More advanced bloggers and teachers might enjoy these two posts on The Edublogger:
If you’re working as a class on this activity, perhaps students could contribute post ideas which the teacher compiles. Readers could be invited to comment or vote.
Student Certificates
Congratulations on completing the Student Blogging Challenge!
Download a certificate to celebrate your achievement.
Note for commenters: I’ll email you about accessing your certificate during the week.