Tag: Framework

  • Applying the Moral Intensity Framework: Ethical Decision-Making for University Reopening During COVID-19


    by Scott McCoy, Jesse Pietz and Joseph H Wilck

    Overview

    In late 2020, universities faced a moral and operational crisis: Should they reopen for in-person learning amid a global pandemic? This decision held profound ethical implications, touching on public health, education, and institutional survival. Using the Moral Intensity Framework (MIF), a multidimensional ethical decision-making model, researchers analysed the reopening choices of 62 US universities to evaluate the ethical considerations and outcomes. Here’s how MIF provides critical insights into this complex scenario.

    Why the Moral Intensity Framework matters

    The Moral Intensity Framework helps assess ethical decisions based on six dimensions:

    1. Magnitude of Consequences: The severity of potential outcomes.
    2. Social Consensus: Agreement on the morality of the decision.
    3. Probability of Effect: Likelihood of outcomes occurring.
    4. Temporal Immediacy: Time between the decision and its consequences.
    5. Proximity: Emotional or social closeness to those affected.
    6. Concentration of Effect: Impact on specific groups versus broader populations.

    This framework offers a structured approach to evaluate ethical trade-offs, especially in high-stakes, uncertain scenarios like the COVID-19 pandemic.
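The six dimensions can be made concrete as a simple scorecard. The sketch below is purely illustrative: the dimension names follow the framework described above, but the 0–1 scores, the equal weighting, and the example assessment are my own assumptions, not values from the study.

```python
# Minimal Moral Intensity Framework scorecard (illustrative sketch).
# Dimension names follow the framework; scores and weighting are
# hypothetical assumptions, not data from the underlying article.

DIMENSIONS = [
    "magnitude_of_consequences",
    "social_consensus",
    "probability_of_effect",
    "temporal_immediacy",
    "proximity",
    "concentration_of_effect",
]

def moral_intensity(scores: dict[str, float]) -> float:
    """Average the six dimension scores (each in [0, 1]) into a single
    moral-intensity value; higher means the decision carries more
    ethical weight and warrants more deliberation."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Hypothetical scoring of the "reopen in person" option.
reopen = {
    "magnitude_of_consequences": 0.9,  # illness or death possible
    "social_consensus": 0.5,           # opinion split by state policy
    "probability_of_effect": 0.6,      # transmission data uncertain
    "temporal_immediacy": 0.8,         # health harms manifest quickly
    "proximity": 0.7,                  # close-knit campus community
    "concentration_of_effect": 0.4,    # harms spread across groups
}
print(round(moral_intensity(reopen), 2))
```

A real assessment would of course weight dimensions by evidence and stakeholder input rather than averaging them equally; the point is only that the framework supports structured, comparable scoring across options.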

    Universities’ dilemma: in-person vs remote learning

    The reopening debate boiled down to two primary considerations:

    1. Educational and Financial Pressures: Universities needed to deliver on their educational mission while addressing steep revenue losses from tuition, housing, and auxiliary services. Remote learning threatened educational quality and the financial viability of institutions, especially those with limited endowments.
    2. Public Health Risks: Reopening campuses risked COVID-19 outbreaks, jeopardising the health of students, staff, and surrounding communities. Universities also faced backlash for potential spread to vulnerable populations.

    Critical Findings Through the Moral Intensity Lens

    Magnitude of Consequences

    Reopening for in-person learning presented stark risks: potential illness or death among students, staff, and the community. However, keeping campuses closed threatened jobs, reduced education quality, and caused financial strain. The scale of harm from reopening was considered higher, particularly in densely populated campus settings.

    Social Consensus

    Public opinion and government policies influenced decisions. States with stringent public health mandates leaned toward remote learning, while those with lenient regulations often pursued in-person or hybrid models. Administrators balanced community sentiment with institutional needs, highlighting the importance of localized consensus.

    Temporal Immediacy

    Health risks from in-person learning manifested quickly, while financial and educational setbacks from remote learning had longer timelines. This immediacy added ethical weight to public health considerations in reopening decisions.

    Probability of Effect

    The uncertainty surrounding COVID-19 transmission and mitigation complicated ethical judgments. Universities lacked reliable data on the effectiveness of safety protocols, making probability assessments especially challenging.

    Proximity and Concentration of Effect

    Campus communities are close-knit, amplifying the emotional weight of decisions. Both reopening and remaining remote affected broad populations similarly, lessening these dimensions’ influence.

    Ethical Outcomes and Practical Mitigation Strategies

    Many universities implemented extensive safety measures to align reopening decisions with ethical standards:

    • Testing and Tracing: Pre-arrival testing, on-campus surveillance, and contact tracing reduced outbreak risks.
    • Modified Learning Environments: Hybrid and remote options ensured flexibility, accommodating vulnerable populations.
    • Health Protocols: Social distancing, mask mandates, and enhanced cleaning protocols were widely adopted.

    Despite risks, universities that reopened often avoided large-scale outbreaks, demonstrating the effectiveness of these measures.

    Lessons for Crisis Management

    The COVID-19 reopening experience offers valuable lessons for future crises:

    1. Use Multidimensional Ethical Frameworks: Applying tools like MIF provides structure to navigate complex moral dilemmas.
    2. Prioritize Stakeholder Engagement: Balancing diverse perspectives helps bridge gaps between perceived and actual risks.
    3. Adapt Quickly: Flexibility in implementing mitigation strategies can mitigate harm while achieving core objectives.
    4. Build Resilience: Strengthening financial reserves and digital infrastructure can reduce future vulnerabilities.

    Global Implications

    While this analysis focused on U.S. universities, the findings have worldwide relevance. Institutions globally grappled with similar decisions, balancing public health and education amid diverse cultural and political contexts. The Moral Intensity Framework offers a universal lens to evaluate ethical challenges in higher education and beyond.

    Conclusion

    The reopening decisions of universities during COVID-19 exemplify the intricate balance of ethical, financial, and operational considerations in crisis management. The Moral Intensity Framework provided a robust tool for understanding these complexities, highlighting the need for structured ethical decision-making in future global challenges.

    This blog is based on an article published in Policy Reviews in Higher Education (online 20 September 2024) https://www.tandfonline.com/doi/full/10.1080/23322969.2024.2404864.

    Scott McCoy is the Vice Dean for Faculty & Academic Affairs and the Richard S. Reynolds, Jr. Professor of Business at William & Mary’s Raymond A. Mason School of Business.  His research interests include human computer interaction, social media, online advertising, and teaching assessment.

    Jesse Pietz is a faculty lead for the OMSBA program at William & Mary’s Raymond A. Mason School of Business.  He has been teaching analytics, operations research, and management since 2013.  His most recent faculty position prior to William & Mary was at the U.S. Air Force Academy in Colorado Springs, Colorado. 

    Joseph Wilck is Associate Professor of the Practice and Business Analytics Capstone Director at the Kenneth W. Freeman College of Management, Bucknell University. He has been teaching analytics, operations research, data science, and engineering since 2006. His research is in the area of applied optimization and analytics.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education


  • Leave to Achieve?: A new framework for universities to drive local social mobility


    • By Dani Payne, Senior Researcher and Education Lead at the Social Market Foundation.

    University remains the most effective pathway for disadvantaged individuals to achieve upward social mobility. Graduates earn more, are less likely to be unemployed, and report higher levels of health, happiness and civic engagement. Yet, despite this individual impact, higher education’s benefits often fail to translate into positive outcomes for local communities.

    Recent research from the Sutton Trust ranked constituencies by social mobility. Most interesting are the bottom 20. More than half have at least one university within their immediate locality, and some have as many as 18 in their wider region. Essentially, having a university – or, indeed, many universities – in your region doesn’t guarantee improved local social mobility.

    The need for a new social mobility framework

    The government’s ‘opportunity mission’ is built on the principle that every child, in every community, should have a fair chance to succeed.

    But rising costs, frozen maintenance support, demographic shifts and widening attainment gaps threaten progress made on access. Moreover, targets tend to be institution-specific, creating duplications and silos, and encouraging competition between providers. Selective universities continue to meet access targets by disproportionately recruiting disadvantaged pupils from high-attaining London boroughs, leaving local disadvantaged learners behind – even when world-class institutions are right on their doorstep.

    We must broaden how we assess universities’ social mobility impact. To be able to understand when, why and how the benefits of an institution do or don’t reach into local communities, we must also consider their roles as major employers, civic actors and research hubs.  

    In our new report, Leave to Achieve?, we set out a new framework for how universities can conceptualise and measure their local social mobility contribution. The framework consists of four key pillars, underpinned by the need for regional collaboration and long-term planning.

    1. Educational opportunities for local people

    Access to higher education varies starkly by region: 27% of disadvantaged pupils in London hold an undergraduate degree by age 22, compared to just 10% in the South West.

    Universities must work with local schools and colleges to raise attainment and create alternative entry pathways. They should be considering the extent to which they nurture and recruit talent locally, supporting pupils to progress and succeed. A place-based approach to widening participation, developed collaboratively with other regional providers, ensures local talent is not just nurtured but retained.

    Some existing initiatives show promise. Durham Inspired North East Scholarships, Middlesex’s guaranteed offer scheme for local applicants, and the Warwick Scholars programme providing financial, academic and practical support to local disadvantaged pupils all show how targeted programmes can work at a local level. However, articulation agreements with local further education providers are underutilised in England, and inconsistent contextual admissions policies limit impact.

    2. Good jobs for local people

    Universities are often the largest, or among the largest, employers in the local region. This is often cited to give the impression that they are ‘too big to fail’, particularly in the current financial context. But little has been done to look at the extent to which universities are providing good jobs to local people, and whether these are open to people from different socioeconomic backgrounds.

    Academic roles provide an opportunity for social mobility – for those who can secure one. Someone from a lower socioeconomic background who becomes a lecturer, for example, has almost certainly experienced upward occupational social mobility, if not also absolute (income) social mobility. Similarly, professional service roles are often well paid and secure, with a reasonable pension, and working within a university carries a certain amount of cultural and social prestige, too.

    A university performing strongly in this area would be spearheading initiatives to support local people from disadvantaged backgrounds into some of these roles and supporting staff from lower socioeconomic backgrounds whilst they are there. Southampton’s staff social mobility network stands out here, specifically recognising and seeking to tackle barriers in recruitment, retention and career progress for those from working-class backgrounds.  

    3. Using research to address local needs

    Research within institutions should address local needs and tackle inequalities, with outputs shared with local communities. Local residents should have opportunities to be involved in research and should understand why research carried out in their region is valuable.

    There are excellent examples in this area, such as UWE Bristol’s ‘Engagement with Education’ programme and London Metropolitan’s participatory knowledge exchange projects. But these remain examples of best – not yet standard – practice.

    4. Civic actors: Lead locally, collaborate regionally  

    As civic institutions, universities must be more deeply integrated within their localities. Despite growing attention to civic engagement, activity is often fragmented and lacking an overarching strategy. Participation in local skills planning is inconsistent, and incentives to foster collaboration across providers are weak.

    Greater Manchester’s Civic Agreement is a great example of universities coming together with local leaders to work towards shared goals, recognising that collaboration is far more effective than competition, duplication, or silos. The South West Social Mobility Commission takes this a step further, bringing together all education providers (not just higher education), businesses, local leaders and third-sector organisations to promote better social mobility in the region.

    A call to action

    This framework is not a checklist, but a tool for reflection. We do not expect every institution to be a star performer in every pillar, but we do see value in measuring impact more holistically, across the full range of university activity.

    Universities should ask themselves:

    • Are we reaching local disadvantaged students?
    • Are we getting local people into good jobs, and are these jobs available to those from all social class backgrounds?
    • Is our research making a tangible difference to local challenges?
    • Are we truly embedded as civic leaders in our region?

    Only by addressing these questions can we begin to understand how – and when – the presence of a university does improve social mobility in its immediate communities. And only then can we ensure that local people no longer feel that they must leave in order to achieve.


  • A new regulatory framework is more than Medr by numbers


    Medr, the new-ish regulator of tertiary education in Wales, is consulting on its new regulatory system (including conditions of registration and funding, and a quality framework).

    You have until 5pm on 18 July 2025 to offer comments on any of the many ideas or potential requirements contained within – there are also two consultation events to look forward to in early June.

    Regulatory approach

    As we are already aware from the strategy, Medr intends to be a principles-based regulator (learning, collaboration, inclusion, excellence) but this has been finessed into a regulatory philosophy that:

    integrates the strengths of both rules-based (compliance) and outcome-based regulation (continuous improvement)

    As such we also get (in Annex A) a set of regulatory principles that can support this best-of-both-worlds position. The new regulator commits to providing clear guidance and resources, transparent communication, minimising burden, the collaborative development of regulations and processes, regular engagement, proactive monitoring, legal and directive enforcement action, the promotion of best practice, innovation and “responsiveness”, and resilience.

    That’s what the sector gets, but this is a two way thing. In return Medr expects you to offer a commitment to compliance and integrity, to engage with the guidance, act in a transparent way (regarding self-reporting of issues – a “no alarms and no surprises” approach), practice proactive risk management and continuous improvement, collaborate with stakeholders, and respect the authority of Medr and its interventions.

    It’s all nicely aspirational, and (with half an eye on a similar regulator just over Offa’s Dyke) one appropriately based on communication and collaboration. Whatever Medr ends up being, it clearly does not want an antagonistic or suspicious relationship with the sector it regulates.

    Getting stuck in

    The majority of the rest of Annex A deals directly with when and where Medr will intervene. Are you even a regulator if you can’t step in to sort out non-compliance and other outbreaks of outright foolishness? Medr will have conditions of registration and conditions of funding, both of which have statutory scope for intervention – plus other powers to deal with providers it neither registers nor funds (“external providers”, which include those involved in franchise and partnership activities, and are not limited to those in Wales).

    Some of these powers are hangovers from the Higher Education (Wales) Act 2015, and are already in force – the intention is that the remaining (Tertiary Education and Research Act 2022) powers will largely kick in from 1 August 2026, alongside the new conditions of funding. At this point the TERA 2022 powers will supersede the relevant remaining HEW 2015 provisions.

    The spurs to intervention are familiar from TERA. The decision to intervene will be primarily based on six factors: seriousness, persistence, provider actions, context, risk, and statutory duties – there’s no set weight accorded to any of them, and the regulator reserves the right to use others as required.

    A range of actions is open in the event of an infraction – ranging from low-level intervention (advice and assistance) to removal from the register and withdrawal of funding. In between these you may see enhanced monitoring, action plans, commissioned reports and other examples of what is euphemistically termed “engagement”. A decision to intervene will be communicated “clearly” to a provider, and Medr “may decide” to publish details of interventions – balancing the potential risks to the provider against the need to promote compliance.

    Specific ongoing registration conditions are also a thing – for registered providers only, obviously – and all of these will be published, as will any variation to conditions. The consultation document bristles with flowcharts and diagrams, setting out clearly the scope for review and appeal for each type of intervention.

    One novelty for those familiar with the English system is the ability of the regulator to refer compliance issues to Welsh Ministers – this specifically applies to governance issues or where a provider is performing “significantly less well than it might in all the circumstances be reasonably expected to perform, or is failing or likely to fail to give an acceptable standard of education or training”. That’s a masterpiece of drafting which offers a lot of scope for government intervention.

    Regulatory framework

    Where would a regulator be without a regulatory framework? There is much else of importance in this collection of documents, but the statement of conditions of registration in Annex B will likely attract the most attention.

    Financial sustainability is front and centre, with governance and management following close behind. These two also attract supplemental guidance on financial management, financial commitment thresholds, estates management, and charity regulation. Other conditions include quality and continuous improvement, regard to advice and guidance, information provided to prospective students, fee limits, notifications of changes, and charitable status – and there’s further supplemental guidance on reportable events.

    Medr intends to be a risk-based regulator too – and we get an overview of the kinds of monitoring activity that might be in place to support these determinations of risk. There will be an annual assurance return for registered providers, which essentially assures the regulator that the provider’s governing body has done its own assurance of compliance. The rest of the returns are listed as options, but we can feel confident in seeing a financial assurance return, and various data returns, as core – with various other documentation requested on a more ad hoc basis.

    And – yes – there will be reportable events: serious incidents that must be reported within five working days, notifiable (less serious) stuff on a “regular basis”. There’s a table in annex B (table 1) but this is broad and non-exhaustive.

    There’s honestly not much in the conditions of registration that is surprising. It is notable that Medr will still need to be told about new financial commitments, either above a set threshold or while a provider is in “increased engagement”, and that a provider must report when it uses assets acquired using public funds as security on financial commitments (it’s comforting to know that exchequer interest is still a thing, in Wales at least).

    The quality and continuous improvement condition is admirably broad – covering the involvement of students in quality assurance processes, with their views taken into account (including a requirement for representation on governing bodies). Responsibility for quality is expected to go all the way up to board level, and the provider is expected to actively engage with external quality assurance. Add in continuous improvement and an expectation of professional development for all staff involved in supporting students and you have an impressively robust framework.

    We need also to discuss the meaning of “guidance” within the Medr expanded universe – providers need to be clear about how they have responded to regulatory guidance and justify any deviation. There’s a specific condition of registration just for that.

    Quality framework

    Annex C provides a quality framework, which underpins and expands on the condition of registration. Medr has a duty to monitor and promote improvement in the quality of, and standards applied to, tertiary education, and the option in TERA 2022 to publish a framework like this one. It covers the design and delivery of the curriculum, the quality of support offered to learners, arrangements to promote active learner engagement (there’s a learner engagement code out for consultation in the autumn), and the promotion of wellbeing and welfare among learners.

    For now, existing monitoring and engagement plans (Estyn and the QAA) will continue, although Medr has indicated to both that it would like to see methodologies and approaches move closer together across the full regulatory ambit. But:

    In due course we will need to determine whether or not we should formally designate a quality body to assess higher education. Work on this will be carried out to inform the next cycle of external quality assessments. We will also consider whether to adopt a common cycle length for the assessment of all tertiary education.

    There is clarity that the UK Quality Code applies to higher education in Wales, and that internal quality assurance processes need to align to the European Standards and Guidelines for Quality Assurance (ESG) – external quality assurance arrangements currently do, and will continue to, align with ESG as well.

    To follow

    Phase two of this series of consultations will come in October 2025 – followed by registrations opening in the spring of 2026 with the register launched in August of that year. As we’ve seen, bits of the conditions of registration kick in from 1 August 2027 – at which point everything pre-Medr fades into the storied history of Welsh tertiary education.


  • We Already Have an Ethics Framework for AI (opinion)


    For the third time in my career as an academic librarian, we are facing a digital revolution that is radically and rapidly transforming our information ecosystem. The first was when the internet became broadly available by virtue of browsers. The second was the emergence of Web 2.0 with mobile and social media. The third—and current—results from the increasing ubiquity of AI, especially generative AI.

    Once again, I am hearing a combination of fear-based thinking alongside a rhetoric of inevitability and scoldings directed at those critics who are portrayed as “resistant to change” by AI proponents. I wish I were hearing more voices advocating for the benefits of specific uses of AI alongside clearheaded acknowledgment of risks of AI in specific circumstances and an emphasis on risk mitigation. Academics should approach AI as a tool for specific interventions and then assess the ethics of those interventions.

    Caution is warranted. The burden of building trust should be on the AI developers and corporations. While Web 2.0 delivered on its promise of a more interactive, collaborative experience on the web that centered user-generated content, the fulfillment of that promise was not without societal costs.

    In retrospect, Web 2.0 arguably fails to meet the basic standard of beneficence. It is implicated in the global rise of authoritarianism, in the undermining of truth as a value, in promoting both polarization and extremism, in degrading the quality of our attention and thinking, in a growing and serious mental health crisis, and in the spread of an epidemic of loneliness. The information technology sector has earned our deep skepticism. We should do everything in our power to learn from the mistakes of our past and do what we can to prevent similar outcomes in the future.

    We need to develop an ethical framework for assessing uses of new information technology—and specifically AI—that can guide individuals and institutions as they consider employing, promoting and licensing these tools for various functions. There are two main factors about AI that complicate ethical analysis. The first is that an interaction with AI frequently continues past the initial user-AI transaction; information from that transaction can become part of the system’s training set. Secondly, there is often a significant lack of transparency about what the AI model is doing under the surface, making it difficult to assess. We should demand as much transparency as possible from tool providers.

    Academia already has an agreed-upon set of ethical principles and processes for assessing potential interventions. The principles in “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research” govern our approach to research with humans and can fruitfully be applied if we think of potential uses of AI as interventions. These principles not only benefit academia in making assessments about using AI but also provide a framework for technology developers thinking through their design requirements.

    The Belmont Report articulates three primary ethical principles:

    1. Respect for persons
    2. Beneficence
    3. Justice

    “Respect for persons,” as it’s been translated into U.S. code and practiced by IRBs, has several facets, including autonomy, informed consent and privacy. Autonomy means that individuals should have the power to control their engagement and should not be coerced to engage. Informed consent requires that people should have clear information so that they understand what they are consenting to. Privacy means a person should have control and choice about how their personal information is collected, stored, used and shared.

    Following are some questions we might ask to assess whether a particular AI intervention honors autonomy.

    • Is it obvious to users that they are interacting with AI? This becomes increasingly important as AI is integrated into other tools.
    • Is it obvious when something was generated by AI?
    • Can users control how their information is harvested by AI, or is the only option to not use the tool?
    • Can users access essential services without engaging with AI? If not, that may be coercive.
    • Can users control how information they produce is used by AI? This includes whether their content is used to train AI models.
    • Is there a risk of overreliance, especially if there are design elements that encourage psychological dependency? From an educational perspective, is using an AI tool for a particular purpose likely to prevent users from learning foundational skills so that they become dependent on the model?
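The autonomy questions above lend themselves to a simple screening checklist. The sketch below is a hedged illustration: the question keys, the paraphrased wording, and the "any no answer is a flag" logic are my own assumptions for demonstration, not an established assessment instrument.

```python
# Illustrative sketch: the autonomy questions as a screening checklist.
# Keys, paraphrasing, and pass/fail logic are assumptions for
# demonstration, not a standard IRB or vendor-assessment instrument.

AUTONOMY_QUESTIONS = {
    "disclosed": "Is it obvious to users that they are interacting with AI?",
    "labeled_output": "Is it obvious when something was generated by AI?",
    "data_control": "Can users control how their information is harvested?",
    "opt_out_path": "Can users access essential services without the AI?",
    "training_control": "Can users control use of their content for training?",
    "dependency_managed": "Is the risk of overreliance addressed by design?",
}

def autonomy_concerns(answers: dict[str, bool]) -> list[str]:
    """Return every question answered 'no' (or left unanswered) --
    each one marks a potential autonomy problem the proposed AI use
    should address before adoption."""
    return [q for key, q in AUTONOMY_QUESTIONS.items()
            if not answers.get(key, False)]

# Hypothetical assessment of an AI feature embedded in a search tool.
answers = {
    "disclosed": True,
    "labeled_output": True,
    "data_control": False,      # harvesting cannot be limited
    "opt_out_path": True,
    "training_control": False,  # content reused for training by default
    "dependency_managed": True,
}
for concern in autonomy_concerns(answers):
    print("CONCERN:", concern)
```

The same pattern could be extended with question sets for beneficence and justice, turning the Belmont principles into a repeatable pre-adoption review rather than an ad hoc judgment.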

    In relation to informed consent, is the information provided about what the model is doing both sufficient and in a form that a person who is neither a lawyer nor a technology developer can understand? It is imperative that users be given information about what data is going to be collected from which sources and what will happen to that data.

    Privacy infringement happens either when someone’s personal data is revealed or used in an unintended way or when information thought private is correctly inferred. When there is sufficient data and computing power, re-identification of research subjects is a danger. Given that “de-identification of data” is one of the most common strategies for risk mitigation in human subjects’ research, and there is an increasing emphasis on publishing data sets for the purposes of research reproducibility, this is an area of ethical concern that demands attention. Privacy emphasizes that individuals should have control over their private information, but how that private information is used should also be assessed in relation to the second major principle—beneficence.

    Beneficence is the general principle that says that the benefits should outweigh the risks of harm and that risks should be mitigated as much as possible. Beneficence should be assessed on multiple levels—both the individual and the systemic. The principle of beneficence demands that we pay particularly careful attention to those who are vulnerable because they lack full autonomy, such as minors.

    Even when making personal decisions, we need to think about potential systemic harms. For example, some vendors offer tools that allow researchers to share their personal information in order to generate highly personalized search results—increasing research efficiency. As the tool builds a picture of the researcher, it will presumably continue to refine results with the goal of not showing things that it does not believe are useful to the researcher. This may benefit the individual researcher. However, on a systemic level, if such practices become ubiquitous, will the boundaries between various discourses harden? Will researchers doing similar scholarship get shown an increasingly narrow view of the world, focused on research and outlooks that are similar to each other, while researchers in a different discourse are shown a separate view of the world? If so, would this disempower interdisciplinary or radically novel research or exacerbate disciplinary confirmation bias? Can such risks be mitigated? We need to develop a habit of thinking about potential impacts beyond the individual in order to create mitigations.

    There are many potential benefits to certain uses of AI. There are real possibilities it can rapidly advance medicine and science—see, for example, the stunning successes of the protein structure database AlphaFold. There are corresponding potentialities for swift advances in technology that can serve the common good, including in our fight against the climate crisis. The potential benefits are transformative, and a good ethical framework should encourage them. The principle of beneficence does not demand that there are no risks, but that we should identify uses where the benefits are significant and that we mitigate the risks, both individual and systemic. Risks can be minimized by improving the tools, such as work to prevent them from hallucinating, propagating toxic or misleading content, or delivering inappropriate advice.

    Questions of beneficence also require attention to environmental impacts of generative AI models. Because the models require vast amounts of computing power and, therefore, electricity, using them taxes our collective infrastructure and contributes to pollution. When analyzing a particular use through the ethical lens of beneficence, we should ask whether the proposed use provides enough likely benefit to justify the environmental harm. Use of AI for trivial purposes arguably fails the test for beneficence.

    The principle of justice demands that the people and populations who bear the risks should also receive the benefits. With AI, there are significant equity concerns. For example, generative AI may be trained on data that includes our biases, both current and historic. Models must be rigorously tested to see if they create prejudicial or misleading content. Similarly, AI tools should be closely interrogated to ensure that they do not work better for some groups than for others. Inequities impact the calculations of beneficence and, depending on the stakes of the use case, could make the use unethical.

    Another consideration related to the principle of justice and AI is fair compensation and attribution. It is important that AI does not undermine creative economies. Scholars, too, are important content producers, and the academic coin of the realm is citations. Content creators have a right to expect that their work will be used with integrity, that it will be cited and that they will be remunerated appropriately. As part of autonomy, content creators should also be able to control whether their material is used in a training set, and this should, at least going forward, be part of author negotiations. Similarly, the use of AI tools in research should be cited in the scholarly product; we need to develop standards about what is appropriate to include in methodology sections and citations, and possibly about when an AI model should be granted co-authorial status.

    The principles outlined above from the Belmont Report are, I believe, sufficiently flexible to accommodate further rapid developments in the field. Academia has a long history of using them as guidance for ethical assessments. They give us a shared foundation from which we can promote uses of AI that benefit the world while avoiding the kinds of harm that can poison its promise.

    Gwendolyn Reece is the director of research, teaching and learning at American University’s library and a former chair of American’s institutional review board.


  • The value of having a National Learning Framework incorporating school, college and higher education

    The value of having a National Learning Framework incorporating school, college and higher education

    By Michelle Morgan, Dean of Students at the University of East London.

    In the UK, we have a well-established education system across different levels of learning: primary, secondary, further and higher education. Each level has a comprehensive structure that is regulated and monitored, alongside extensive published information. At present, however, the levels generally function in isolation.

    The Government’s recent Curriculum and Assessment Review has asked for suggestions to improve the curriculum and assessment system for 16- to 19-year-olds. This phase covers a range of qualifications, including GCSEs, A-levels, BTECs, T Levels and apprenticeships. The main purpose of the Review is to

    ensure that the curriculum balances ambition, relevance, flexibility and inclusivity for all children and young people.

    However, as part of this review, could it also look at how the different levels of study build on one another? Could the sectors come together and use their extensive knowledge of their own level and type of study to create an integrated road map across secondary, further and higher education, in which skills, knowledge, competencies and attributes (and how they translate into employability skills) are clearly articulated? We could call this a National Learning Framework. It could align with the learning gain programme led by the Office for Students (OfS).

    The benefits of a National Learning Framework

    There would be a number of benefits to adopting this approach:

    • It would provide a clear resource for all stakeholders, including students and staff in educational organisations, policymakers, government bodies, regulators and quality standards bodies (such as Ofsted, the Office for Students and the QAA), and business and industry. It would also help inform public perception of higher education.
    • This approach would join up the regulatory bodies responsible for the different sectors. It would help create a collaborative, consistent learning and teaching approach, by setting and explaining the aims and objectives of the various types of education providers.
    • It would explain and articulate the differences in learning, teaching and assessment approaches across the array of secondary and further education qualifications used as progression routes into higher education. For example, A-levels are mainly taught in schools and assessed by end-of-year exams, while ‘other’ qualifications such as BTECs, Access courses and other Level 3 qualifications taught in colleges use more diverse assessment methods.
    • It would help universities bridge the transition in learning and experience into higher education more effectively across all entry qualifications. We know that students from the ‘other’ qualification groups are often from disadvantaged backgrounds, which can affect retention, progression and success at university, as research highlights (see also this NEON report). Students with other qualifications are more likely to withdraw than those with A-levels. However, as the recent report Prior learning experience, study expectations of A-Level and BTEC students on entry to university highlights, it is not the BTEC qualification per se that is the problem, but the transition support into university study that needs improvement.
    • It would also address assumptions about how learning occurs at each level of study. For example, because young people use media technology to live and socialise, it is often assumed that the same is true of their learning. In fact, access to teaching and learning material, especially in schools, remains largely traditional: the main sources of information are course textbooks and handwritten notes, although since the Covid-19 pandemic the use of online coursework submission and basic virtual learning environments (VLEs) has been increasing.
    • If we clearly communicate to students the learning that occurs throughout each level of their study, and what skills, knowledge, competencies and attributes they should obtain as a result, this can help with their confidence levels and their employability opportunities as they can better articulate what they have achieved.

    What could an integrated learning approach across all levels of study via a National Learning Framework look like?

    The Employability Skills Pyramid, which I created with colleagues at a previous university for Levels 4 to 7 in higher education, could be extended to include Levels 2/3 and apprenticeships to create a National Learning Framework. The language used to construct the knowledge, skills and attributes grids used by course leaders purposely integrated the QAA statements for degrees (see Appendix 1 in the accompanying document).

    By adding Levels 2 and 3, including apprenticeship qualifications and articulating the differences between each qualification, the education sector could understand what is achieved within and between different levels of study and qualifications (see Figure 1).

    Key stakeholders could come together from across all levels of study to map out and agree on the language to adopt for consistency across the various levels and qualifications.

    Figure 1: Integrated National Learning Framework across Secondary, Further and Higher Education

    Alongside the National Learning Framework, a common transition approach drawing on the same definitions across all levels of study would be valuable. Students and staff could gain the understanding required to foster successful transitions between phases. An example is provided below.

    Supporting transitions across the National Learning Framework using similar terminology

    The Student Experience Transitions (SET) Model was designed to support courses of various lengths and to make the different stages of a course clearer. It was originally designed for higher education, but the principles are the same across all levels of study (see Figure 2). Students progress through each stage, and each stage has its own general rules of engagement. The definitions of each stage, and the mapping of stages by length of course, are in Appendix 2 of the accompanying document.

    Figure 2: The Student Experience Transitions Model. Source: Morgan 2012

    The benefits for students are consistency and a clear understanding of what is expected on their course. At each key transition stage, students would understand what is expected by reflecting on what they have previously learnt, how the coming year builds on what they already know, and what they will achieve by the end.

    Taking the opportunity to integrate

    The Curriculum Review provides a real opportunity to join up each level of study and provide clarity for all stakeholders. Importantly, a National Learning Framework could help deliver the Government’s aims of balancing ambition, relevance, flexibility and inclusivity for all learners, regardless of level of study.

    Appendices


  • Department of Labor Publishes AI Framework for Hiring Practices

    Department of Labor Publishes AI Framework for Hiring Practices

    by CUPA-HR | October 16, 2024

    On September 24, the Department of Labor (DOL), along with the Partnership on Employment & Accessible Technology (PEAT), published the AI & Inclusive Hiring Framework. The framework is intended to be a tool to support the inclusive use of artificial intelligence in employers’ hiring technology, specifically for job seekers with disabilities.

    According to DOL, the framework was created in support of the Biden administration’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. Issued in October 2023, the executive order directed the Secretary of Labor, along with other federal agency officials, to issue guidance and regulations to address the use and deployment of AI and other technologies in several policy areas. Notably, it also directed DOL to publish principles and best practices for employers to help mitigate harmful impacts and maximize potential benefits of AI as it relates to employees’ well-being.

    The new AI framework includes 10 focus areas covering issues that affect the recruitment and hiring of people with disabilities, with information on maximizing the benefits, and managing the risks, of assessing, acquiring and employing AI hiring technology.

    The 10 focus areas are:

    1. Identify Employment and Accessibility Legal Requirements
    2. Establish Roles, Responsibilities and Training
    3. Inventory and Classify the Technology
    4. Work with Responsible AI Vendors
    5. Assess Possible Positive and Negative Impacts
    6. Provide Accommodations
    7. Use Explainable AI and Provide Notices
    8. Ensure Effective Human Oversight
    9. Manage Incidents and Appeals
    10. Monitor Regularly

    Under each focus area, DOL and PEAT provide key practices and considerations for employers to implement as they work through the framework. It is important to note, however, that the framework does not have the force of law, and employers need not implement every practice or goal in every focus area at once. The goal of the framework is to lead employers toward inclusive practices involving AI technology over time.

    DOL encourages HR personnel — along with hiring managers, DEIA practitioners, and others — to familiarize themselves with the framework. CUPA-HR will keep members apprised of any future updates relating to the use of AI in hiring practices and technology.


