Tag: evaluation

  • How teachers and administrators can overcome resistance to NGSS



Although the Next Generation Science Standards (NGSS) were released more than a decade ago, adoption varies widely across California. I have been to districts that have taken the standards and run with them, while others have been slow to get off the ground with NGSS–even 12 years after their release. In some cases, this is due to a lack of funding, a lack of staffing, or even administrators’ lack of understanding of the active, student-driven pedagogies championed by the NGSS.

    Another potential challenge to implementing NGSS with fidelity comes from teachers’ and administrators’ epistemological beliefs–simply put, their beliefs about how people learn. Teachers bring so much of themselves to the classroom, and that means teaching in a way they think is going to help their students learn. So, it’s understandable that teachers who have found success with traditional lecture-based methods may be reluctant to embrace an inquiry-based approach. It also makes sense that administrators who are former teachers will expect classrooms to look the same as when they were teaching, which may mean students sitting in rows, facing the front, writing down notes.

    Based on my experience as both a science educator and an administrator, here are some strategies for encouraging both teachers and administrators to embrace the NGSS.

    For teachers: Shift expectations and embrace ‘organized chaos’

    A helpful first step is to approach the NGSS not as a set of standards, but rather a set of performance expectations. Those expectations include all three dimensions of science learning: disciplinary core ideas (DCIs), science and engineering practices (SEPs), and cross-cutting concepts (CCCs). The DCIs reflect the things that students know, the SEPs reflect what students are doing, and the CCCs reflect how students think. This three-dimensional approach sets the stage for a more active, engaged learning environment where students construct their own understanding of science content knowledge.

To meet the expectations laid out in the NGSS, teachers can start by converting existing “recipe labs” into a more inquiry-based model that emphasizes student construction of knowledge. Resources like the NGSS-aligned digital curriculum from Kognity can simplify classroom implementation by giving teachers options for personalized instruction. Additionally, the Wonder of Science can help teachers integrate real-life phenomena into their NGSS-aligned labs, providing students with authentic contexts for building an understanding of scientific concepts. Lastly, Inquiry Hub offers open-source, full-year curricula that can aid teachers in refining their labs, classroom activities, and assessments.

For these updated labs to serve their purpose, teachers will need to reframe classroom management expectations to focus on student engagement and discussion. This may mean embracing what I call “organized chaos.” Over time, teachers will build a sense of efficacy through small successes, whether that’s spotting a student constructing their own knowledge or documenting an increased depth of knowledge in an entire class. The objective is to build on student understanding across the entire classroom, which teachers can do with much more confidence if they know that their administrators support them.

    For administrators: Rethink evaluations and offer support

A recent survey found that 59 percent of administrators in California, where I work, understood how to support teachers with implementing the NGSS. Even so, some administrators may need to recalibrate their expectations of what they’ll see when they observe classrooms. What they might see is organized chaos: students out of their seats, students talking, students engaged in all sorts of different activities. This is what NGSS-aligned learning looks like.

To provide a clear focus on student-centered learning indicators, administrators can revise observation rubrics to align with the NGSS, or make their lives easier and use this one. As administrators track their teachers’ NGSS implementation, it also helps to monitor teachers’ confidence levels. There will always be early implementers who take something new and run with it, and these educators can be inspiring models for those who are less eager to change.

    The overall goal for administrators is to make classrooms safe spaces for experimentation and growth. The more administrators understand about the NGSS, the better they can support teachers in implementing it. They may not know all the details of the DCIs, SEPs, and CCCs, but they must accept that the NGSS require students to be more active, with the teacher acting as more of a facilitator and guide, rather than the keeper of all the knowledge.

    Based on my experience in both teaching and administration roles, I can say that constructivist science classrooms may look and sound different–with more student talk, more questioning, and more chaos. By understanding these differences and supporting teachers through this transition, administrators ensure that all California students develop the deeper scientific thinking that NGSS was designed to foster.



  • How Technology Can Smooth Pain Points in Credit Evaluation


    Earlier this month, higher education policy leaders from all 50 states gathered in Minneapolis for the 2025 State Higher Education Executive Officers Higher Education Policy Conference. During a plenary session on the future of learning and work and its implications for higher education, Aneesh Raman, chief economic opportunity officer at LinkedIn, reflected on the growing need for people to be able to easily build and showcase their skills.

    In response to this need, the avenues for learning have expanded, with high numbers of Americans now completing career-relevant training and skill-building through MOOCs, microcredentials and short-term certificates, as well as a growing number of students completing postsecondary coursework while in high school through dual enrollment.

    The time for pontificating about the implications for higher education is past; what’s needed now is a pragmatic examination of our long-standing practices to ask, how do we evolve to keep up? We find it prudent and compelling to begin at the beginning—that is, with the learning-evaluation process (aka credit-evaluation process), as it stands to either help integrate more Americans into higher education or serve to push them out.

    A 2024 survey of adult Americans conducted by Public Agenda for Sova and the Beyond Transfer Policy Advisory Board found, for example, that nearly four in 10 respondents attempted to transfer some type of credit toward a college credential. This included credit earned through traditional college enrollment and from nontraditional avenues, such as from trade/vocational school, from industry certification and from work or military experience. Of those who tried to transfer credit, 65 percent reported one or more negative experiences, including having to repeat prior courses, feeling limited in where they could enroll based on how their prior learning was counted and running out of financial aid when their prior learning was not counted. Worse, 16 percent gave up on earning a college credential altogether because the process of transferring credit was too difficult.

    What if that process were drastically improved? The Council for Adult and Experiential Learning’s research on adult learners finds that 84 percent of likely enrollees and 55 percent of those less likely to enroll agree that the ability to receive credit for their work and life experience would have a strong influence on their college enrollment plans. Recognizing the untapped potential for both learners and institutions, we are working with a distinguished group of college and university leaders, accreditors, policy researchers and advocates who form the Learning Evaluation and Recognition for the Next Generation (LEARN) Commission to identify ways to improve learning mobility and promote credential completion.

    With support from the American Association of Collegiate Registrars and Admissions Officers and Sova, the LEARN Commission has been analyzing the available research to better understand the limitations of and challenges within current learning evaluation approaches, finding that:

    • Learning-evaluation decision-making is a highly manual and time-intensive process that involves many campus professionals, including back-office staff such as registrars and transcript evaluators and academic personnel such as deans and faculty.
    • Across institutions, there is high variability in who performs reviews; what information and criteria are used in decision-making; how decisions are communicated, recorded and analyzed; and how long the process takes.
    • Along with this variability, most evaluation decisions are opaque, with little data used, criteria established or transparency baked in to help campus stakeholders understand how these decisions are working for learners.
    • While there have been substantial efforts to identify course equivalencies, develop articulation agreements and create frameworks for credit for prior learning to make learning evaluation more transparent and consistent, the data and technology infrastructure to support the work remain woefully underdeveloped. Without adequate data documenting date of assessment and aligned learning outcomes, credit for prior learning is often dismissed in the transfer process; for example, a 2024 survey by AACRAO found that 54 percent of its member institutions do not accept credit for prior learning awarded at a prior institution.

    Qualitative research examining credit-evaluation processes across public two- and four-year institutions in California found that these factors create many pain points for learners. For one, students can experience unacceptable wait times—in some cases as long as 24 weeks—before receiving evaluation decisions. When decisions are not finalized prior to registration deadlines, students can end up in the wrong classes, take classes out of sequence or end up extending their time to graduation.

    In addition to adverse impacts on students, MDRC research illuminates challenges that faculty and staff experience due to the highly manual nature of current processes. As colleges face dwindling dollars and real personnel capacity constraints, the status quo becomes unsustainable and untenable. Yet, we are hopeful that the thoughtful application of technology—including AI—can help slingshot institutions forward.

    For example, institutions like Arizona State University and the City University of New York are leading the way in integrating technology to improve the student experience. The ASU Transfer Guide and CUNY’s Transfer Explorer democratize course equivalency information, “making it easy to see how course credits and prior learning experiences will transfer and count.” Further, researchers at UC Berkeley are studying how to leverage the plethora of data available—including course catalog descriptions, course articulation agreements and student enrollment data—to analyze existing course equivalencies and provide recommendations for additional courses that could be deemed equivalent. Such advances stand to reduce the staff burden for institutions while preserving academic quality.
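As a rough illustration of the kind of analysis described above, course catalog descriptions can be compared computationally to surface candidate equivalencies for human review. The sketch below is a simplified, hypothetical example using TF-IDF text similarity; the courses, descriptions and threshold are invented for illustration and do not represent the Berkeley researchers’ actual methods or any institution’s data.

```python
# Illustrative sketch only: surfacing candidate course equivalencies by comparing
# catalog descriptions with TF-IDF cosine similarity. The courses, descriptions,
# and the 0.35 review threshold are invented for demonstration; they do not
# represent any institution's data or the researchers' actual methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sending = {  # hypothetical sending-institution catalog
    "MATH 151": "Differential calculus of one variable: limits, derivatives, optimization.",
    "ENGL 101": "College composition: argumentation, revision, and research writing.",
}
receiving = {  # hypothetical receiving-institution catalog
    "MAT 201": "Single-variable calculus I covering limits, continuity, and derivatives.",
    "HIS 110": "Survey of world history from antiquity to 1500.",
}

codes_s, codes_r = list(sending), list(receiving)
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(sending.values()) + list(receiving.values()))
similarity = cosine_similarity(matrix[: len(codes_s)], matrix[len(codes_s):])

THRESHOLD = 0.35  # pairs above this score get flagged for faculty review
for i, s_code in enumerate(codes_s):
    for j, r_code in enumerate(codes_r):
        if similarity[i, j] >= THRESHOLD:
            print(f"Candidate equivalency for review: {s_code} -> {r_code} "
                  f"(similarity {similarity[i, j]:.2f})")
```

Anything a model like this flags would still go to faculty for a final decision, which is how such tooling can reduce staff burden while preserving academic quality.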

    While such solutions are not yet widely implemented, there is strong interest due to their high value proposition. A recent AACRAO survey on AI in credit mobility found that while just 15 percent of respondents report currently using AI for credit mobility, 94 percent of respondents acknowledge the technology’s potential to positively transform credit-evaluation processes. And just this year, a cohort of institutions across the country came together to pioneer new AI-enabled credit mobility technology under the AI Transfer and Articulation Infrastructure Network.

    As the LEARN Commission continues to assess how institutions, systems of higher education and policymakers can improve learning evaluation, we believe that increased attention to improving course data and technology infrastructure is warranted and that a set of principles can guide a new approach to credit evaluation. Based on our emerging sense of the needs and opportunities in the field, we offer some guiding principles below:

    1. Shift away from interrogating course minutiae to center learning outcomes in learning evaluation. Rather than fixating on factors like mode of instruction or grading basis, we must focus on the learning outcomes. To do so, we must improve course data in a number of ways, including adding learning outcomes to course syllabi and catalog descriptions and capturing existing equivalencies in databases where they can be easily referenced and applied.
2. Provide students with reliable, timely information on the degree applicability of their courses and prior learning, including a rationale when prior learning is not accepted or applied. Institutions can leverage available technology to automate existing articulation rules, recommend new equivalencies and generate timely evaluation reports for students. This can create more efficient advising workflows, empower learners with reliable information and refocus faculty time to other essential work (see No. 3); a minimal data-level sketch of this kind of automation follows this list.
3. Use student outcomes data to improve the learning evaluation process. Right now, the default is that all prior learning is manually vetted against existing courses. But what if we shifted that focus to analyzing student outcomes data to understand whether students can be successful in subsequent learning if their credits are transferred and applied? In addition, institutions should regularly review course transfer, applicability and student success data at the department and institution level to identify areas for improvement—including in the design of curricular pathways, student supports and classroom pedagogy.
4. Overhaul how learning is transcripted and how transcripts are shared. We can shorten the time involved on the front end of credit-evaluation processes by shifting away from manual transcript review to machine-readable transcripts and electronic transcript transmittal. When accepting and applying prior learning—be it high school dual-enrollment credit, credit for prior learning or a course transferred from another institution—document that learning in the transcript as a course (or, as a competency for competency-based programs) to promote its future transferability.
5. Leverage available technology to help learners and workers make informed decisions to reach their end goals. In the realm of learning evaluation, this can be facilitated by integrating course data and equivalency systems with degree-modeling software to enable learners and advisers to identify the best path to a credential that minimizes the amount of learning that’s left on the table.
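Under principles 2 and 4, much of the gain comes from treating transcripts and articulation rules as structured data rather than documents to be read line by line. Below is a minimal, illustrative sketch of that idea; the record fields, course codes and rules are hypothetical assumptions, not any institution’s actual schema or a LEARN Commission specification.

```python
# Minimal sketch, not a production system: a machine-readable transcript entry and
# a lookup that automates existing articulation rules, returning a rationale when
# no rule applies. Field names, course codes, and rules are hypothetical
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class TranscriptEntry:
    source: str         # sending institution or learning provider
    course: str         # course or credential code as transcripted
    credits: float
    learning_type: str  # e.g. "transfer", "dual_enrollment", "prior_learning"

# Existing articulation rules, keyed by (source, course) -> local equivalent course
ARTICULATION_RULES = {
    ("State CC", "MATH 151"): "MAT 201",
    ("State CC", "ENGL 101"): "WRT 100",
}

def evaluate(entry: TranscriptEntry) -> dict:
    """Apply an articulation rule if one exists; otherwise record why not."""
    local = ARTICULATION_RULES.get((entry.source, entry.course))
    if local:
        return {"applied_as": local, "credits": entry.credits,
                "rationale": "matched existing articulation rule"}
    return {"applied_as": None, "credits": 0.0,
            "rationale": (f"no articulation rule on file for {entry.source} "
                          f"{entry.course}; routed to faculty review")}

if __name__ == "__main__":
    entries = [TranscriptEntry("State CC", "MATH 151", 4.0, "transfer"),
               TranscriptEntry("State CC", "BIO 140", 4.0, "dual_enrollment")]
    for e in entries:
        print(e.course, "->", evaluate(e))
```

In a real system the rule table would be the institution’s existing articulation database, and unmatched entries would go to faculty review rather than being silently denied, which is the rationale-plus-escalation behaviour principle 2 calls for.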

    In these ways, we can redesign learning evaluation processes to accelerate students’ pathways and generate meaningful value in the changing landscape of learning and work. Through the LEARN Commission, we will continue to refine this vision and identify clear actionable steps. Stay tuned for the release of our full set of recommendations this fall and join the conversation at #BeyondTransfer.

    Beth Doyle is chief of strategy at the Council for Adult and Experiential Learning and is a member of the LEARN Commission.

    Carolyn Gentle-Genitty is the inaugural dean of Founder’s College at Butler University and is a member of the LEARN Commission.

    Jamienne S. Studley is the immediate past president of the WASC Senior College and University Commission and is a member of the LEARN Commission.


  • Beyond Evaluation: Using Peer Observation to Strengthen Teaching Practices – Faculty Focus


  • Five keys to success in Evaluation Capacity Building for widening participation


    Evaluate, evaluate, evaluate is a mantra that those engaged in widening participation in recent years will be all too familiar with.

Over the past decade, and particularly in the latest round of Access and Participation Plans (APP), the importance of evaluation and of evidencing best practice has risen up the agenda, becoming an integral part of the intervention strategies that institutions are committing to in order to address inequality.

    This new focus on evaluation raises fundamental questions about the sector’s capacity to sustainably deliver high-quality, rigorous and appropriate evaluations, particularly given its other regulatory and assessment demands (e.g. REF, TEF, KEF etc.).

For many, the more exacting standards of evidence have triggered a scramble to deliver evaluation projects, often facilitated by external organisations, consultancies and experts at considerable expense, that produce what the Office for Students’ (OfS) guidance has defined as Type 2 or 3 evidence (capable of correlative or causal inference).

The need to demonstrate impact is one we can all agree is worthy, given the importance of addressing the deep-rooted and pervasive inequalities baked into the UK HE sector. It is therefore crucial that the resources available are deployed wisely and equitably.

    In the rush for higher standards, it is easy to be lured in by “success” and forget the steps necessary to embed evaluation in institutions, ensuring a plurality of voices can contribute to the conversation, leading to a wider shift in culture and practice.

We risk, by only listening to those well placed to deliver large-scale evaluation projects and communicate their findings loudest, overlooking a huge amount of impactful and important work.

    Feeling a part of it

    There is no quick fix. The answer lies in the sustained work of embedding evaluative practice and culture within institutions, and across teams and individuals – a culture that imbues values of learning, growth and reflection over and above accountability and league tables.

    Evaluation Capacity Building (ECB) offers a model or approach to help address these ongoing challenges. It has a rich associated literature, which for brevity’s sake we will not delve into here.

    In essence, it describes the process of improving the ability of organisations to do and use evaluation, through supporting individuals, teams and decision makers to prioritise evaluation in planning and strategy and invest time and resources into improving knowledge and competency in this area.

    The following “keys to success” are the product of what we learned while applying this approach across widening participation and student success initiatives at Lancaster University.

    Identify why

    We could not have garnered the interest of those we worked with without having a clear idea of the reasons we were taking the approach we did. Critically, this has to work both ways: “why should you bother evaluating?” and “why are we trying to build evaluation capacity?”

    Unhelpfully, evaluation has a bad reputation.

It is very often seen by those tasked with undertaking it as an imposition, driven by external agendas and accountability mandates – not helped by the jargon-laden and technical nature of the discipline.

    If you don’t take the time to identify and communicate your motivations for taking this approach, you risk falling at the first hurdle. People will be hesitant to invest their time in attending your training, understanding the challenging concepts and investing their limited resources into evaluation, unless they have a good reason to do so.

    “Because I told you so” does not amount to a very convincing reason either. When identifying “why”, it is best you do so collaboratively and consider the specific needs, values and aspirations of those you are working with. To those ends, you might want to consider developing a Theory of Change for your own ECB initiative.

    Consider the context

    When developing resources or a series of interventions to support ECB at your institution, you should at all times consider the specific context in which you find yourself. There are many models, methods and resources available in the evaluation space, including those provided by organisations such as TASO, the UK Evaluation Society (UKES) or the Global Evaluation Initiative (BetterEvaluation.org), not to mention the vast literature on evaluation methods and methodologies. The possibilities are both endless and potentially overwhelming.

    To help navigate this abundance, you should use the institutional context in which you are intending to deliver ECB as your guide. For whom are you developing the resources? What are their needs? What is appropriate? What is feasible? How much time, money and expertise does this require? Who is the audience for the evaluation? Why are they choosing to evaluate their work at this time and in this way?

In answering these and other similar questions, the “why” you identified above will be particularly helpful. Ensuring the resources and training you provide are suitable and accessible is not easy, so don’t be perturbed if you get it wrong. The key is to be reflective and seek feedback from those you are working with.

    Surround yourself with researchers, educationalists and practitioners

    Doing and using evaluation are highly prized skills that require specific knowledge and expertise. The same applies to developing training and educational resources to support effective learning and development outcomes.

    Evaluation is difficult enough for specialists to get their heads around. Imagine how it must feel for those for whom this is not an area of expertise, nor even a primary area of responsibility. Too often the training and support available assumes high levels of knowledge and does not take the time to explain its terms.

    How do we expect someone to understand the difference between correlative and causal evidence of impact, if we haven’t explained what we mean by evaluation, evidence or impact, not to mention correlation or causation? How do we expect people to implement an experimental evaluation design, if we haven’t explained what an evaluation design is, how you might implement it or how “experimental” differs from other kinds of design and when it is or isn’t appropriate?
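One way to make the correlative-versus-causal distinction concrete for non-specialists is a toy simulation. The Python sketch below uses entirely made-up numbers and is illustrative only: it shows how an opt-in programme can appear far more effective than it really is when motivated students self-select into it, and how random assignment removes that bias. It is a teaching aid for the concept, not a suggestion that every widening participation evaluation needs an experimental design.

```python
# Toy illustration (hypothetical numbers): why correlational and experimental
# estimates of the same programme can diverge. Motivated students both opt in
# AND do better anyway, so the naive comparison overstates the programme's effect;
# random assignment breaks that link.
import random
random.seed(1)

TRUE_EFFECT = 5.0  # the programme genuinely adds 5 points

def outcome(motivation, attended):
    return 50 + 20 * motivation + TRUE_EFFECT * attended + random.gauss(0, 5)

students = [random.random() for _ in range(10_000)]  # latent motivation, 0..1

# Opt-in programme: the more motivated a student, the likelier they attend
optin = [(m, m > random.random()) for m in students]
att = [outcome(m, 1) for m, a in optin if a]
non = [outcome(m, 0) for m, a in optin if not a]
naive_diff = sum(att) / len(att) - sum(non) / len(non)

# Randomised assignment: attendance no longer tracks motivation
trial = [(m, random.random() < 0.5) for m in students]
treat = [outcome(m, 1) for m, a in trial if a]
ctrl = [outcome(m, 0) for m, a in trial if not a]
rct_diff = sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

print(f"Opt-in (correlational) difference: {naive_diff:.1f} points")  # inflated
print(f"Randomised (causal) difference:    {rct_diff:.1f} points")    # close to 5.0
```

Running it shows the opt-in comparison substantially overstating the effect while the randomised comparison lands close to the built-in five points.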

    So, surround yourself with researchers, educators and practitioners who have a deep understanding of their respective domains and can help you to develop accessible and appropriate resources.

    Create outlets for evaluation insight

    Publishing findings can be daunting, time-consuming and risky. For this reason, it’s a good idea to create more localised outlets for the evaluation insights being generated by the ECB work you’ve been doing. This will allow the opportunity to hone presentations, interrogate findings and refine language in a more forgiving and collaborative space.

    At Lancaster University, we launched our Social Mobility Symposium in September 2023 with this purpose in mind. It provided a space for colleagues from across the University engaged in widening participation initiatives and with interests in wider issues of social mobility and inequality to come together and share the findings they generated through evaluation and research.

    As the title suggests, the event was not purely about evaluation, which helped to engage diverse audiences with the insights arising from our capacity building work. “Evaluation by stealth,” or couching evaluative insights in discussions of subjects that have wider appeal, can be an effective way of communicating your findings. It also encourages those who have conducted the evaluations to present their results in an accessible and applied manner.

Establish leadership buy-in

Finally, if you are planning to explore ECB as an approach to embedding and nurturing evaluation at an institutional level (i.e. beyond the level of individual interventions), then it is critical to have the buy-in of senior managers, leaders and decision makers.

Part of the “why” for the teams you are working with will no doubt include some approximation of the following: that your efforts will be recognised, that the insights generated will inform decision making, and that the analyses you do will make a difference and be shared widely to support learning and the sharing of best practice.

As someone who is supporting capacity building endeavours, you might not be able to guarantee these objectives yourself. It is therefore important to focus equal attention on building the evaluation capacity and literacy of those who can.

This can be challenging and difficult to control for. It depends on, among other things, the established culture and personnel in leadership positions, their receptiveness to new ideas, the flexibility and courage they have to explore new ways of doing things, and the capacity of the institution to utilise the insights generated through more diverse evaluative practices. The rewards, however, are potentially significant, both in supporting the institution to continuously improve and in helping it meet its ongoing regulatory requirements.

There is great potential in the field of evaluation to empower and elevate voices that are sometimes overlooked, but there is an equal and opposite risk of disempowerment and exclusion. Reductive models of evaluation, preferencing certain methods over others, risk impoverishing our understanding of the world around us and the impact we are having. It is crucial to have at our disposal a repertoire of approaches that are appropriate to the situation at hand and that foster learning as well as value assessment.

    Done well, ECB provides a means of enriching the narrative in widening participation, as well as many other areas, though it requires a coherent institutional and sectoral approach to be truly successful.


  • What’s next in equality of opportunity evaluation?


    In the Evaluation Collective – a cross-sector group of like-minded evaluation advocates – we have reason to celebrate two related interventions.

One is the confirmation of a TASO- and HEAT-helmed evaluation library – the other is John Blake’s recent Office for Students (OfS) blog, What’s next in equality of opportunity regulation.

    We cheer his continued focus on evaluation and collaboration (both topics close to our collective heart). In particular, we raised imaginary (in some cases…) glasses to John Blake’s observation that:

    Ours is a sector founded on knowledge creation, curation, and communication, and all the skills of enquiry, synthesis and evidence-informed practice that drive the disciplines English HE providers research and teach, should also be turned to the vital priorities of expanding the numbers of students able to enter HE, and ensuring they have the best chance to succeed once they are admitted.

    That’s a hard YES from us.

    Indeed, there’s little in our Evaluation Manifesto (April 2022) that isn’t thinking along the same lines. Our final manifesto point addresses almost exactly this:

    The Evaluation Collective believe that higher education institutions should be learning organisations which promote thinking cultures and enact iterative and meaningful change. An expansive understanding of evaluation such as ours creates a space where this learning culture can flourish. There is a need to move the sector beyond simply seeking and receiving reported impact.

    We recognise that OfS has to maintain a balance between evaluation for accountability (they are our sector regulator after all) and evaluation for enhancement and learning.

Evaluation in the latter mode often requires different thinking, methodologies and approaches. Given the concerning reversal of progress in HE access indicated by recent data, this focus on learning and the enhancement of our practice seems even more crucial.

    This brings us to two further collective thoughts.

    An intervention intervention

John Blake’s blog references comments made by the Evaluation Collective’s Chair Liz Austen at the Unlocking the Future of Fair Access event. Liz’s point, which draws on a soon-to-be-published book chapter, is that, from some perspectives, the term intervention automatically implies an evaluation approach that is positivistic and scientific – usually associated with Type 3 causal methodologies such as randomised controlled trials.

This kind of language can be uncomfortable for those of us evaluating in different modes (and can even spark the occasional paradigm war). Liz argued that much of the activity we undertake to address student success outcomes, such as developing inclusive learning, teaching, curriculum and assessment approaches, is often more relational, dynamic, iterative and collaborative, as we engage with students and other stakeholders and draw on previous work and thinking from other disciplinary areas.

    This is quite different to what we might think of as a clinical intervention, which often involves tight scientific control of external contextual factors, closed systems and clearly defined dosage.

    We suggest, therefore, that we might need a new language and conceptual approach to how we talk and think about evaluation and what it can achieve for HE providers and the students we support.

    The other area Liz picked up concerned the burden of evaluation not only on HE providers, but also the students who are necessarily deeply integrated in our evaluation work with varying degrees of agency – from subjects from whom data is extracted at one end through to co-creators and partners in the evaluation process at the other.

    We rely on students to dedicate sufficient time and effort in our evaluation activities. To reduce this burden and ensure we’re making effective use of student input, we need better coordination of regulatory asks for evaluation, not least to help manage the evaluative burden on students/student voices – a key point also made by students Molly Pemberton and Jordan Byrne at the event.

As it is, HE providers are currently required to develop and invest in evaluation across multiple regulatory asks (TEF, APP, B3, Quality Code etc). While this space is not becoming too crowded (the more the merrier), it will take some strategic oversight to manage what is delivered and evaluated, why and by whom, and to look for efficiencies. We would welcome more sector work to join up this thinking.

    Positing repositories

    We also toasted John Blake’s continued emphasis on the crucial role of evaluation in continuous improvement.

    We must understand whether metrics moving is a response to our activity; without a clear explanation as to why things are getting better, we cannot scale or replicate that impact; if a well-theorised intervention does not deliver, good evaluation can support others to re-direct their efforts.

In support of this, the new evidence repository to house the sector’s evaluation outcomes has been confirmed, with the aim of supporting our evolving practice and improving outcomes for students. This is another toast-worthy proposal. We believe that this resource is much needed.

    Indeed, Sheffield Hallam University started its own (publicly accessible) one a few years ago. Alan Donnelly has written an illuminating blog for the Evaluation Collective reflecting on the implementation, benefits and challenges of the approach.

The decision to commission TASO and HEAT to develop this new Higher Education Evidence Library (HEEL) does, however, raise a lot of questions about how material will be selected for inclusion, who makes the selection and the criteria they use. Here are a few things we hope those organisations are considering.

The first issue is that it is not clear whether this repository is primarily a means of addressing a regulatory requirement for HE providers to publish their evaluation findings or a resource developed to respond to the sector’s knowledge needs. This comes down to clarity of purpose and a clear-eyed view of where the sector needs to develop.

    It also comes down to the kinds of resources that will be considered for inclusion. We are also concerned by the prospect of a rigid and limited selection process and believe that useful and productive knowledge is contained in a wide range of publications. We would welcome, for example, a curation approach that recognised the value of non-academic publications.

    The contribution of grey literature and less formal publications, for example, is often overlooked. Valuable learning is also contained in evaluation and research conducted in other countries, and indeed, in different academic domains within the social and health sciences.

    The potential for translating interventions across different institutional and sector contexts also depends on sharing contextual and implementation information about the target activities and programmes.

As colleagues from the Russell Group Widening Participation Evaluation Forum recently argued on these very pages, the value of sharing evaluation outcomes increases the more we move away from reporting technical and statistical outcomes to include broader reflections and meta-evaluation considerations. The more we collectively learn as a sector, the more opportunities we will see for critical friendships and collaborations.

    While institutions are committing substantial time and resources to APP implementation, we must resist overly narrowing the remit of our activities and our approach in general. Learning from failed or even poor programmes and activities (and evaluation projects!) can be invaluable in driving progress.

    Ray Pawson speaks powerfully of the way in which “nuggets” of valuable learning and knowledge can be found even when panning less promising or unsuccessful evaluation evidence. Perhaps, a pragmatic approach to knowledge generation could trump methodological criteria in the interests of sector progress?

    Utopian repositories

    Hot on the HEELs of the TASO/HEAT evaluation library collaboration announcement we have put together a wish list for what we would like to see in such a resource. We believe that a well-considered, open and dynamic evaluation and evidence repository could have a significant impact on our collective progress towards closing stubborn equality of opportunity risk gaps.

    Submission to this kind of repository could also be helpful for the professionalisation of HE-based evaluation and good for organisational and sector recognition and career progression.

A good model for this kind of approach is the National Teaching Repository (self-upload, no gatekeeper – its tagline: “Disseminating accessible ideas that work”). This approach includes a way of tracking the impact and reach of submissions by allocating each a DOI.

    This is an issue that Alan and the Sheffield Hallam Team have also cracked, with submissions appearing in scholarly indexes.

    We are also mindful of the increasingly grim economic contexts in which most HE staff are currently working. If it does its job well, a repository could help mitigate some of the current constraints and pressures on institutions. Where we continue to work in silos there is a continued risk of wasting resources, by reinventing the same intervention and evaluation wheels in isolation across a multitude of different HE providers.

With more openness and transparency, and by sharing work in progress as well as on completion, we increase the possibility of building on each other’s work and, hopefully, of finding opportunities for collaboration and sharing the workload – in other words, efficiency gains.

Moreover, this moves us closer to solving the replication and generalisability challenges: evaluators working together across different institutions can test programmes and activities across a wider set of contexts, resulting in more flexible and generalisable outcomes.

    Sliding doors?

    There are two further challenges, which are only nominally addressed in John Blake’s blog, but which we feel could have significant influence on the sector impact of the repository of our dreams.

    First, effective knowledge management is essential – how will time-pressed practitioners find and apply relevant evidence to their contexts? The repository needs to go beyond storing evaluations to include support to help users to find what they need, when they need it, and include recommendations for implications for practice.

Second, drawing on the development of Implementation Science in fields like medicine and public health could help maximise the repository’s impact on practice. We suggest early consultation with both sector stakeholders and experts from other fields who have successfully tackled these knowledge-to-practice challenges.

    At this point in thinking, before concrete development and implementation have taken place, we have the potential for a multitude of possible future repositories and approaches to sector evaluation. We welcome TASO and HEAT’s offer to consult with the sector over the spring as they develop their HEEL and hope to engage in a broad and wide-ranging discussion of how we can collectively design an evaluation and evidence repository that is not just about collecting together artefacts, but which could play an active role in driving impactful practice. And then we can start talking about how the repository can be evaluated.

    John Blake will be talking all things evaluation with members of the Evaluation Collective on the 11th March. Sign up to the EC membership for more details: https://evaluationcollective.wordpress.com/
