Tag: evaluation

  • Five keys to success in Evaluation Capacity Building for widening participation

    Evaluate, evaluate, evaluate is a mantra that those engaged in widening participation in recent years will be all too familiar with.

Over the past decade, and particularly in the latest round of Access and Participation Plans (APPs), evaluation and the evidencing of best practice have risen up the agenda, becoming integral parts of the intervention strategies that institutions are committing to in order to address inequality.

This new focus on evaluation raises fundamental questions about the sector’s capacity to sustainably deliver high-quality, rigorous and appropriate evaluations, particularly given its other regulatory and assessment demands (e.g. REF, TEF and KEF).

For many, the more exacting standards of evidence have triggered a scramble to commission evaluation projects – often facilitated by external organisations, consultancies and experts, at considerable expense – to deliver what the Office for Students’ (OfS) guidance defines as Type 2 or Type 3 evidence (capable of supporting correlational or causal inference).

The need to demonstrate impact is one we can all agree is worthy, given the importance of addressing the deep-rooted and pervasive inequalities baked into the UK HE sector. It is therefore crucial that the resources available are deployed wisely and equitably.

    In the rush for higher standards, it is easy to be lured in by “success” and forget the steps necessary to embed evaluation in institutions, ensuring a plurality of voices can contribute to the conversation, leading to a wider shift in culture and practice.

If we listen only to those best placed to deliver large-scale evaluation projects and to communicate their findings loudest, we risk overlooking a huge amount of impactful and important work.

    Feeling a part of it

There is no quick fix. The answer lies in the sustained work of embedding evaluative practice and culture within institutions, and across teams and individuals – a culture that prizes learning, growth and reflection over and above accountability and league tables.

    Evaluation Capacity Building (ECB) offers a model or approach to help address these ongoing challenges. It has a rich associated literature, which for brevity’s sake we will not delve into here.

In essence, it describes the process of improving the ability of organisations to do and use evaluation, by supporting individuals, teams and decision makers to prioritise evaluation in planning and strategy, and to invest time and resources in improving knowledge and competency in this area.

    The following “keys to success” are the product of what we learned while applying this approach across widening participation and student success initiatives at Lancaster University.

    Identify why

    We could not have garnered the interest of those we worked with without having a clear idea of the reasons we were taking the approach we did. Critically, this has to work both ways: “why should you bother evaluating?” and “why are we trying to build evaluation capacity?”

    Unhelpfully, evaluation has a bad reputation.

It is very often seen by those tasked with undertaking it as an imposition, driven by external agendas and accountability mandates – not helped by the jargon-laden and technical nature of the discipline.

If you don’t take the time to identify and communicate your motivations for taking this approach, you risk falling at the first hurdle. People will be hesitant to attend your training, grapple with challenging concepts and commit their limited resources to evaluation unless they have a good reason to do so.

“Because I told you so” does not amount to a very convincing reason either. When identifying “why”, it is best to do so collaboratively, considering the specific needs, values and aspirations of those you are working with. To that end, you might want to consider developing a Theory of Change for your own ECB initiative.

    Consider the context

    When developing resources or a series of interventions to support ECB at your institution, you should at all times consider the specific context in which you find yourself. There are many models, methods and resources available in the evaluation space, including those provided by organisations such as TASO, the UK Evaluation Society (UKES) or the Global Evaluation Initiative (BetterEvaluation.org), not to mention the vast literature on evaluation methods and methodologies. The possibilities are both endless and potentially overwhelming.

    To help navigate this abundance, you should use the institutional context in which you are intending to deliver ECB as your guide. For whom are you developing the resources? What are their needs? What is appropriate? What is feasible? How much time, money and expertise does this require? Who is the audience for the evaluation? Why are they choosing to evaluate their work at this time and in this way?

In answering these and other similar questions, the “why” you identified above will be particularly helpful. Ensuring the resources and training you provide are suitable and accessible is not easy, so don’t be perturbed if you get it wrong. The key is to be reflective and to seek feedback from those you are working with.

    Surround yourself with researchers, educationalists and practitioners

    Doing and using evaluation are highly prized skills that require specific knowledge and expertise. The same applies to developing training and educational resources to support effective learning and development outcomes.

    Evaluation is difficult enough for specialists to get their heads around. Imagine how it must feel for those for whom this is not an area of expertise, nor even a primary area of responsibility. Too often the training and support available assumes high levels of knowledge and does not take the time to explain its terms.

    How do we expect someone to understand the difference between correlative and causal evidence of impact, if we haven’t explained what we mean by evaluation, evidence or impact, not to mention correlation or causation? How do we expect people to implement an experimental evaluation design, if we haven’t explained what an evaluation design is, how you might implement it or how “experimental” differs from other kinds of design and when it is or isn’t appropriate?

    So, surround yourself with researchers, educators and practitioners who have a deep understanding of their respective domains and can help you to develop accessible and appropriate resources.

    Create outlets for evaluation insight

    Publishing findings can be daunting, time-consuming and risky. For this reason, it’s a good idea to create more localised outlets for the evaluation insights being generated by the ECB work you’ve been doing. This will allow the opportunity to hone presentations, interrogate findings and refine language in a more forgiving and collaborative space.

    At Lancaster University, we launched our Social Mobility Symposium in September 2023 with this purpose in mind. It provided a space for colleagues from across the University engaged in widening participation initiatives and with interests in wider issues of social mobility and inequality to come together and share the findings they generated through evaluation and research.

    As the title suggests, the event was not purely about evaluation, which helped to engage diverse audiences with the insights arising from our capacity building work. “Evaluation by stealth,” or couching evaluative insights in discussions of subjects that have wider appeal, can be an effective way of communicating your findings. It also encourages those who have conducted the evaluations to present their results in an accessible and applied manner.

Establish leadership buy-in

Finally, if you are planning to explore ECB as an approach to embedding and nurturing evaluation at an institutional level (i.e. beyond the level of individual interventions), then it is critical to have the buy-in of senior managers, leaders and decision makers.

Part of the “why” for the teams you are working with will no doubt include some approximation of the following: that their efforts will be recognised, that the insights generated will inform decision making, and that the analyses they do will make a difference and be shared widely to support learning and best practice.

As someone supporting capacity building endeavours, you might not be able to guarantee these outcomes. It is therefore important to focus equal attention on building the evaluation capacity and literacy of those who can.

This can be challenging and difficult to control. It depends on, among other things, the established culture and personnel in leadership positions, their receptiveness to new ideas, the flexibility and courage they have to explore new ways of doing things, and the capacity of the institution to utilise the insights generated through more diverse evaluative practices. The rewards are potentially significant, both in supporting the institution to continuously improve and in helping it meet its ongoing regulatory requirements.

There is great potential in the field of evaluation to empower and elevate voices that are sometimes overlooked, but there is an equal and opposite risk of disempowerment and exclusion. Reductive models of evaluation, which privilege certain methods over others, risk impoverishing our understanding of the world around us and of the impact we are having. It is crucial to have at our disposal a repertoire of approaches that are appropriate to the situation at hand and that foster learning as well as the assessment of value.

    Done well, ECB provides a means of enriching the narrative in widening participation, as well as many other areas, though it requires a coherent institutional and sectoral approach to be truly successful.

  • What’s next in equality of opportunity evaluation?

    In the Evaluation Collective – a cross-sector group of like-minded evaluation advocates – we have reason to celebrate two related interventions.

One is the confirmation of a TASO- and HEAT-helmed evaluation library – the other is John Blake’s recent Office for Students (OfS) blog, What’s next in equality of opportunity regulation.

    We cheer his continued focus on evaluation and collaboration (both topics close to our collective heart). In particular, we raised imaginary (in some cases…) glasses to John Blake’s observation that:

    Ours is a sector founded on knowledge creation, curation, and communication, and all the skills of enquiry, synthesis and evidence-informed practice that drive the disciplines English HE providers research and teach, should also be turned to the vital priorities of expanding the numbers of students able to enter HE, and ensuring they have the best chance to succeed once they are admitted.

    That’s a hard YES from us.

    Indeed, there’s little in our Evaluation Manifesto (April 2022) that isn’t thinking along the same lines. Our final manifesto point addresses almost exactly this:

    The Evaluation Collective believe that higher education institutions should be learning organisations which promote thinking cultures and enact iterative and meaningful change. An expansive understanding of evaluation such as ours creates a space where this learning culture can flourish. There is a need to move the sector beyond simply seeking and receiving reported impact.

    We recognise that OfS has to maintain a balance between evaluation for accountability (they are our sector regulator after all) and evaluation for enhancement and learning.

Evaluation in the latter mode often requires different thinking, methodologies and approaches. Given the concerning reversal of progress in HE access indicated by recent data, this focus on learning and on enhancing our practice seems even more crucial.

    This brings us to two further collective thoughts.

    An intervention intervention

John Blake’s blog references comments made by the Evaluation Collective’s Chair Liz Austen at the Unlocking the Future of Fair Access event. Liz’s point, which draws on a soon-to-be-published book chapter, is that, from some perspectives, the term intervention automatically implies an evaluation approach that is positivistic and scientific – usually associated with Type 3 causal methodologies such as randomised controlled trials.

This kind of language can be uncomfortable for those of us evaluating in different modes (and even spark the occasional paradigm war). Liz argued that much of the activity we undertake to address student success outcomes, such as developing inclusive learning, teaching, curriculum and assessment approaches, is often more relational, dynamic, iterative and collaborative, as we engage with students and other stakeholders and draw on previous work and thinking from other disciplinary areas.

    This is quite different to what we might think of as a clinical intervention, which often involves tight scientific control of external contextual factors, closed systems and clearly defined dosage.

    We suggest, therefore, that we might need a new language and conceptual approach to how we talk and think about evaluation and what it can achieve for HE providers and the students we support.

The other area Liz picked up concerned the burden of evaluation not only on HE providers, but also on the students who are necessarily deeply integrated into our evaluation work with varying degrees of agency – from subjects from whom data is extracted at one end, through to co-creators and partners in the evaluation process at the other.

We rely on students to dedicate sufficient time and effort to our evaluation activities. To reduce this burden and ensure we’re making effective use of student input, we need better coordination of regulatory asks for evaluation, not least to help manage the evaluative burden on students and student voices – a key point also made by students Molly Pemberton and Jordan Byrne at the event.

As it is, HE providers are currently required to develop and invest in evaluation across multiple regulatory asks (TEF, APP, B3, Quality Code etc). While this space is not necessarily becoming too crowded (the more the merrier), it will take some strategic oversight to manage what is delivered and evaluated, why and by whom, and to look for efficiencies. We would welcome more sector work to join up this thinking.

    Positing repositories

    We also toasted John Blake’s continued emphasis on the crucial role of evaluation in continuous improvement.

We must understand whether movement in metrics is a response to our activity; without a clear explanation as to why things are getting better, we cannot scale or replicate that impact; and if a well-theorised intervention does not deliver, good evaluation can support others to redirect their efforts.

In support of this, the new evidence repository to house the sector’s evaluation outcomes has been confirmed, with the aim of supporting our evolving practice and improving outcomes for students. This is another toast-worthy proposal. We believe that this resource is much needed.

    Indeed, Sheffield Hallam University started its own (publicly accessible) one a few years ago. Alan Donnelly has written an illuminating blog for the Evaluation Collective reflecting on the implementation, benefits and challenges of the approach.

The decision to commission TASO and HEAT to develop this new Higher Education Evidence Library (HEEL) does, however, raise a lot of questions about how material will be selected for inclusion, who will make the selection and the criteria they will use. Here are a few things we hope those organisations are considering.

The first issue is that it is not clear whether this repository is primarily designed to address a regulatory requirement for HE providers to publish their evaluation findings, or is a resource developed to respond to the sector’s knowledge needs. This comes down to clarity of purpose and a clear-eyed view of where the sector needs to develop.

It also comes down to the kinds of resources that will be considered for inclusion. We are concerned by the prospect of a rigid and limited selection process, and believe that useful and productive knowledge is contained in a wide range of publications. We would welcome, for example, a curation approach that recognises the value of non-academic publications.

    The contribution of grey literature and less formal publications, for example, is often overlooked. Valuable learning is also contained in evaluation and research conducted in other countries, and indeed, in different academic domains within the social and health sciences.

    The potential for translating interventions across different institutional and sector contexts also depends on sharing contextual and implementation information about the target activities and programmes.

As colleagues from the Russell Group Widening Participation Evaluation Forum recently argued on these very pages, the value of sharing evaluation outcomes increases the more we move beyond reporting technical and statistical outcomes to include broader reflections and meta-evaluation considerations. The more we collectively learn as a sector, the more opportunities we will see for critical friendships and collaborations.

    While institutions are committing substantial time and resources to APP implementation, we must resist overly narrowing the remit of our activities and our approach in general. Learning from failed or even poor programmes and activities (and evaluation projects!) can be invaluable in driving progress.

Ray Pawson speaks powerfully of the way in which “nuggets” of valuable learning and knowledge can be found even when panning less promising or unsuccessful evaluation evidence. Perhaps a pragmatic approach to knowledge generation could trump methodological criteria in the interests of sector progress?

    Utopian repositories

Hot on the HEELs of the TASO/HEAT evaluation library collaboration announcement, we have put together a wish list for what we would like to see in such a resource. We believe that a well-considered, open and dynamic evaluation and evidence repository could have a significant impact on our collective progress towards closing stubborn equality of opportunity risk gaps.

    Submission to this kind of repository could also be helpful for the professionalisation of HE-based evaluation and good for organisational and sector recognition and career progression.

A good model for this kind of approach is the National Teaching Repository (self-upload, no gatekeeper – their tagline is “Disseminating accessible ideas that work”). This approach includes a way of tracking the impact and reach of submissions by allocating each a DOI.

    This is an issue that Alan and the Sheffield Hallam Team have also cracked, with submissions appearing in scholarly indexes.

We are also mindful of the increasingly grim economic contexts in which most HE staff are currently working. If it does its job well, a repository could help mitigate some of the current constraints and pressures on institutions. Where we continue to work in silos, there is a risk of wasting resources by reinventing the same intervention and evaluation wheels in isolation across a multitude of different HE providers.

With more openness and transparency, and by sharing work in progress as well as on completion, we increase the possibility of building on each other’s work and, hopefully, of finding opportunities for collaboration and for sharing the workload – in other words, efficiency gains.

Moreover, this moves us closer to solving the replication and generalisability challenges: evaluators working together across different institutions can test programmes and activities across a wider set of contexts, resulting in more flexible and generalisable outcomes.

    Sliding doors?

    There are two further challenges, which are only nominally addressed in John Blake’s blog, but which we feel could have significant influence on the sector impact of the repository of our dreams.

First, effective knowledge management is essential – how will time-pressed practitioners find and apply relevant evidence to their contexts? The repository needs to go beyond storing evaluations to include support that helps users find what they need, when they need it, and to offer recommendations on the implications for practice.

Second, drawing on the development of Implementation Science in fields like medicine and public health could help maximise the repository’s impact on practice. We suggest early consultation with both sector stakeholders and experts from other fields who have successfully tackled these knowledge-to-practice challenges.

At this point in our thinking, before concrete development and implementation have taken place, there is potential for a multitude of possible future repositories and approaches to sector evaluation. We welcome TASO and HEAT’s offer to consult with the sector over the spring as they develop their HEEL, and hope to engage in a broad and wide-ranging discussion of how we can collectively design an evaluation and evidence repository that is not just about collecting together artefacts, but which could play an active role in driving impactful practice. And then we can start talking about how the repository can be evaluated.

    John Blake will be talking all things evaluation with members of the Evaluation Collective on the 11th March. Sign up to the EC membership for more details: https://evaluationcollective.wordpress.com/
