Tag: national

  • Resilience is a matter of national health

    Resilience is a matter of national health

With ongoing shortages of some 40,000 nurses and a 26 per cent drop in applicants to nursing degree courses in the last two years, the staffing crisis in the NHS is set to get more acute.

There is the backdrop of strikes, the legacy of Covid, low pay, and the costs of studying, along with the cost of living crisis.

It is, perhaps, little wonder that around 12 per cent of nursing students in England fail to complete their degrees – twice the average undergraduate drop-out rate. As health students tell us, “there are times when the NHS is not a nice place to be.”

    The constant cycle of coursework and clinical placements is “a treadmill, hard graft.” Students talk about feeling isolated, particularly during placements.

    The pressure to succeed and the fear of judgment from peers and professionals over not being able to “tough it out” can get in the way of students accessing support. The emotional toll of the work, coupled with the expectation to maintain a brave face, leads to compassion fatigue, burnout and a sense of depersonalisation.

    “It’s not,” students tell us, “what I thought it would be.”

    The resilience narrative

    Of course, the notion that healthcare is inherently tough and that only the most resilient can survive is not new. In fact, it’s something of a badge of honour.

    As one student told us, “there is this echo chamber. Students all telling each other about how tough it is, about the pressure, the volume of work, how it is non-stop and overwhelming.”

    But tying students’ worth to their ability to withstand adversity, that it is up to them to make up for something lacking in themselves instead of focusing on their capacity to thrive and grow, can be disempowering and debilitating.

It’s time to change this corrosive resilience narrative, to bury the notion that it is the student who is somehow coming up short, who needs fixing. Resilience is not about survival and just getting through. It’s about coming back from setbacks and thriving. It is about learning and growing. And it’s about something that is fostered within a supportive community rather than an ordeal endured alone by every student.

    So resilience becomes about putting in place support, about gathering what you need to be a success instead of simply finding a lifeline in a crisis.

It is community that becomes a building block of resilience: the proactive building of strong networks among students that enable and encourage them to support each other; building a wider support network of academic staff, placement supervisors, family and friends. It is here you find fresh perspective, the space to come back from setbacks.

A midwifery student describes a “WhatsApp group to keep in touch, check in and support each other. We’ve got a real sense of community”; a nursing student talks about how “it turned out that other students were just as terrified and felt like they were starting from scratch with every new placement. Sharing our feelings and experiences really helped normalise them”; and the medical student who suddenly “realised that everyone else was struggling. I wasn’t the only one who didn’t have confidence in themself and their abilities.”

    And by challenging negative interpretations of themselves, the “I can’t do it”, “I don’t belong”, “I’m the only one who’s struggling,” students begin to see new choices. Resilience becomes about developing the sense of agency and the confidence to respond differently, to challenge, to get the support you need to navigate towards your own definition of success.

    What matters

    So, to be resilient also means making the space to reflect on what truly matters to you when the norm, as a health student, is to focus only on the patients.

    Our medical student talks about how:

    …I spend a lot of time focused on looking after others and have seen myself as a low priority. This lack of self care used to result in things building up to breaking point. I needed a place to reflect, away from all the academic pressures. A time to focus on myself.

It can take courage to do things differently, to do what is right for you rather than what people expect you to do. It takes courage not to join in with the prevailing culture when it doesn’t work for you. So resilience is also about bravery.

    The midwifery student again:

    I’m stopping negative experiences being the be all and end all of my experience.

    Disruptors and modellers

    What we’re talking about here is a cultural shift, about redefining the resilience narrative so it is about enabling students to discover their strengths and navigate their challenges with confidence.

    The role of staff is critical – as disruptors of the prevailing narrative in healthcare; in modelling behaviour; and re-inventing their everyday interactions with the practitioners of tomorrow.

By using coaching tools and techniques, those whose job it is to support students can:

• Create a supportive environment that mitigates self-stigma, gives students permission and opportunities to be proactive in disclosing their needs, and offers unconditional reassurance that they will be heard and valued;
• Work in relationship with the whole student, supporting students to reflect on who they are and where they are going, and to make courageous choices;
• Foster a sense of community to create a more supportive and effective learning environment.

We know there are places where this work is already getting results.

    A Clinical Skills Tutor describes how this approach:

    …has made me rethink my relationship with students, opened me up to working with students in a way I’d not thought about. I’ve seen how empowering it can be. I’m much more effective at making sure they get the support they need.

Empowering students to redefine “resilience” on their own terms makes it a platform for learning and growth, rather than a burden to bear. They are more likely to succeed in their studies and will be better prepared for the challenges of their professional lives.

    As our student nurse puts it:

“Grit turns your thinking on its head. I’ve been happier, calmer, better able to cope. I ask for help and support when I need it. I don’t bottle things up to breaking point. Things just don’t get to crisis point any more.”

    Source link

  • The National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to allocate research funding — here’s what they should do instead

    The National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to allocate research funding — here’s what they should do instead

    In December, The Wall Street Journal reported:

    [President-elect Donald Trump’s nominee to lead the National Institutes of Health] Dr. Jay Bhattacharya […] is considering a plan to link a university’s likelihood of receiving research grants to some ranking or measure of academic freedom on campus, people familiar with his thinking said. […] He isn’t yet sure how to measure academic freedom, but he has looked at how a nonprofit called Foundation for Individual Rights in Education scores universities in its freedom-of-speech rankings, a person familiar with his thinking said.

We believe in and stand by the importance of the College Free Speech Rankings. More attention to the deleterious effect restrictions on free speech and academic freedom have on research at our universities is desperately needed, so hearing that they are being considered as a guidepost for NIH grantmaking is heartening. Dr. Bhattacharya’s own right to academic freedom was challenged by his Stanford University colleagues, so his concerns about its effect on NIH’s grants are understandable.

    However, our College Free Speech Rankings are not the right tool for this particular job. They were designed with a specific purpose in mind — to help students and parents find campuses where students are both free and comfortable expressing themselves. They were not intended to evaluate the climate for conducting academic research on individual campuses and are a bad fit for that purpose. 

    While the rankings assess speech codes that apply to students, the rankings do not currently assess policies pertaining to the academic freedom rights and research conduct of professors, who are the primary recipients of NIH grants. Nor do the rankings assess faculty sentiment about their campus climates. It would be a mistake to use the rankings beyond their intended purpose — and, if the rankings were used to deny funding for important research that would in fact be properly conducted, that mistake would be extremely costly.

FIRE instead proposes three more appropriate ways for NIH to use its considerable power to improve academic freedom on campus and to ensure research is conducted in an environment conducive to accurate results.

    1. Use grant agreements to safeguard academic freedom as a strong contractual right. 
    2. Encourage open data practices to promote research integrity.
    3. Incentivize universities to study their campus climates for academic freedom.

    Why should the National Institutes of Health care about academic freedom at all?

    The pursuit of truth demands that researchers be able to follow the science wherever it leads, without fear, favor, or external interference. To ensure that is the case, NIH has a strong interest in ensuring academic freedom rights are inviolable. 

    As a steward of considerable taxpayer money, NIH has an obligation to ensure it spends its funds on high-quality research free from censorship or other interference from politicians or college and university administrators.

    Why the National Institutes of Health shouldn’t use FIRE’s College Free Speech Rankings to decide where to send funds

FIRE’s College Free Speech Rankings (CFSR) were never intended for use in determining research spending. As such, the rankings have a number of design features that make them ill-suited to that purpose, either in their totality or through their constituent parts.

Firstly, like the U.S. News & World Report college rankings, a key reason for the creation of the CFSR was to provide information to prospective undergraduate students and their parents. As such, the rankings heavily emphasize students’ perceptions of the campus climate over the perceptions of faculty or researchers. In line with that student focus, our attitude and climate components are based on a survey of undergraduates. Additionally, the speech policies that we evaluate and incorporate into the rankings are those that affect students. We do not evaluate policies that affect faculty and researchers, which are often different and would be of greater relevance to decisions about research funding. While it makes sense that there may be some correlation, we have no way of knowing whether, or to what degree, that is true.

    Secondly, for the component that most directly implicates the academic freedom of faculty, we penalize schools for attempts to sanction scholars for their protected speech, as tracked in our Scholars Under Fire database. While our Scholars Under Fire database provides excellent datapoints for understanding the climate at a university, it does not function as a systematic proxy for assessing academic freedom on a given campus as a whole. As one example, a university with relatively strong protection for academic freedom may have vocal professors with unpopular viewpoints that draw condemnation and calls for sanction that could hurt its ranking, while a climate where professors feel too afraid to voice controversial opinions could draw relatively few calls for sanction and thus enjoy a higher ranking. This shortcoming is mitigated when considered alongside the rest of our rankings components, but as discussed above, those other components mostly concern students rather than faculty.

    Thirdly, using CFSR to determine NIH funding could — counterintuitively — be abused by vigilante censors. Because we penalize schools for attempted and successful shoutdowns, the possibility of a loss of NIH funding could incentivize activists who want leverage over a university to disrupt as many events as possible in order to negatively influence its ranking, and thus its funding prospects. Even the threat of disruption could thus give censors undue power over a university administration that fears loss of funding.

    Finally, due to resource limitations, we do not rank all research universities. It would not be fair to deny funding to an unranked university or to fund an unranked university with a poor speech climate over a low-ranked university.

    Legal boundaries for the National Institutes of Health as it considers proposals for actions to protect academic freedom

While NIH has considerable latitude to determine how it spends taxpayer money, as an arm of the government, the First Amendment places restrictions on how NIH may use that power. Notably, any solution must not penalize institutions for protected speech or scholarship by students or faculty unrelated to NIH-granted projects. NIH could not, for example, require that a university quash protected protests as a criterion for eligibility, or deny a university eligibility because of controversial research undertaken by a scholar who does not work on NIH-funded research.

While NIH can (and effectively must) consider the content of applications in determining what to fund, eligibility must be open to all regardless of viewpoint. Even were this not the case as a constitutional matter (and it is, very much so), it is important as a prudential matter. People would understandably be skeptical of, if not disbelieve outright, scientific results obtained through a grant process with an obvious ideological filter. Indeed, that is the root of much of the current skepticism over federally funded science, and the exact situation academic freedom is intended to avoid.

    Additionally, NIH cannot impose a political litmus test on an individual or an institution, or compel an institution or individual to take a position on political or scientific issues as a condition of grant funding.

    In other words, any solution to improve academic freedom:

    • Must be viewpoint neutral;
    • Must not impose an ideological or political litmus test; and
    • Must not penalize an institution for protected speech or scholarship by its scholars or students.

    Guidelines for the National Institutes of Health as it considers proposals for actions to protect academic freedom

    NIH should carefully tailor any solution to directly enhance academic freedom and to further NIH’s goal “to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science.” Going beyond that purpose to touch on issues and policies that don’t directly affect the conduct of NIH grant-funded research may leave such a policy vulnerable to legal challenge.

    Any solution should, similarly, avoid using vague or politicized terms such as “wokeness” or “diversity, equity, and inclusion.” Doing so creates needless skepticism of the process and — as FIRE knows all too well — introduces uncertainty as professors and institutions parse what is and isn’t allowed.

    Enforcement mechanisms should be a function of contractual promises of academic freedom, rather than left to apathetic accreditors or the unbounded whims of bureaucrats on campus or officials in government, for several reasons. 

    Regarding accreditors, FIRE over the years has reported many violations of academic freedom to accreditors who require institutions to uphold academic freedom as a precondition for their accreditation. Up to now, the accreditors FIRE has contacted have shown themselves wholly uninterested in enforcing their academic freedom requirements.

    When it comes to administrators, FIRE has documented countless examples of campus administrators violating academic freedom, either due to politics, or because they put the rights of the professor second to the perceived interests of their institution.

As for government actors, we have seen priorities and politics shift dramatically from one administration to the next. It would be best for everyone involved if NIH funding did not ping-pong between ideological poles as a function of each presidential election, as the Title IX regulations now do. Dramatic changes to how NIH conceives of academic freedom with every new political administration would only create uncertainty that is sure to further chill speech and research.

    While the courts have been decidedly imperfect protectors of academic freedom, they have a better record than accreditors, administrators, or partisan government officials in parsing protected conduct from unprotected conduct. And that will likely be even more true with a strong, unambiguous contractual promise of academic freedom. Speaking of which…

    The National Institutes of Health should condition grants of research funds on recipient institutions adopting a strong contractual promise of academic freedom for their faculty and researchers

    The most impactful change NIH could enact would be to require as a condition of eligibility that institutions adopt strong academic freedom commitments, such as the 1940 Statement of Principles on Academic Freedom and Tenure or similar, and make those commitments explicitly enforceable as a contractual right for their faculty members and researchers.

    The status quo for academic freedom is one where nearly every institution of higher education makes promises of academic freedom and freedom of expression to its students and faculty. Yet only at public universities, where the First Amendment applies, are these promises construed with any consistency as an enforceable legal right. 

    Private universities, when sued for violating their promises of free speech and academic freedom, frequently argue that those promises are purely aspirational and that they are not bound by them (often at the same time that they argue faculty and students are bound by the policies). 

    Too often, courts accept this and universities prevail despite the obvious hypocrisy. NIH could stop private universities’ attempts to have their cake and eat it too by requiring them to legally stand by the promises of academic freedom that they so readily abandon when it suits them.

    NIH could additionally require that this contractual promise come with standard due process protections for those filing grievances at their institution, including:

    • The right to bring an academic freedom grievance before an objective panel;
    • The right to present evidence;
    • The right to speedy resolution;
    • The right to written explanation of findings including facts and reasons; and
    • The right to appeal.

    If the professor exhausts these options, they may sue for breach of the contract. To reduce the burden of litigation, NIH could require that, if a faculty member prevails in a lawsuit over a violation of academic freedom, the violating institution would not be eligible for future NIH funding until they pay the legal fees of the aggrieved faculty member.

    NIH could also study violations of academic freedom by creating a system for those connected to NIH-funded research to report violations of academic freedom or scientific integrity.

    It would further be proper for NIH to require institutions to eliminate any political litmus tests, such as mandatory DEI statements, as a condition of grant eligibility.

    The National Institutes of Health can implement strong measures to protect transparency and integrity in science

    NIH could encourage open science and transparency principles by heavily favoring studies that are pre-registered. Additionally, to obviate concerns that scientific results may be suppressed or buried because they are unpopular or politically inconvenient, NIH could require its grant-funded research to make available data (with proper privacy safeguards) following the completion of the project. 

    To help deal with the perverse incentives that have created the replication crisis and undermined public trust in science, NIH could create impactful incentives for work on replications and the publication of null results.

Finally, NIH could help prevent the abuse of Institutional Review Boards. When IRB review is appropriate for an NIH-funded project, NIH could require that review be limited to the standards laid out in the gold-standard Belmont Report. Additionally, it could create a system for reporting abuses of IRB processes that suppress ethical research, delay it beyond reasonable timeframes, or violate academic freedom.

    The National Institutes of Health can incentivize study into campus climates for academic freedom

    As noted before, FIRE’s College Free Speech Rankings focus on students. Due to logistical and resource difficulties surveying faculty, our 2024 Faculty Report looking into many of the same issues took much longer and had to be limited in scope to 55 campuses, compared to the 250+ in the CFSR. This is to say there is a strong need for research to understand faculty views and experiences on academic freedom. After all, we cannot solve a problem until we understand it. To that effect, NIH should incentivize further study into faculty’s academic freedom.

    It is important to note that these studies should be informational and not used in a punitive manner, or to decide on NIH funding eligibility. This is because tying something as important as NIH funding to the results of the survey would create so significant an incentive to influence the results that the data would be impossible to trust. Even putting aside malicious interference by administrators and other faculty members, few faculty would be likely to give honest answers that imperiled institutional funding, knowing the resulting loss in funding might threaten their own jobs.

    Efforts to do these kinds of surveys in Wisconsin and Florida proved politically controversial, and at least initially, led to boycotts, which threatened to compromise the quality and reliability of the data. As such, it’s critical that any such survey be carried out in a way that maximizes trust, under the following principles:

• Ideally, the administration of these surveys should be done by an unbiased third party — not the schools themselves, or NIH. This third party should include respected researchers from across the political spectrum and have no partisan slant.
    • The survey sample must be randomized and not opt-in.
    • The questionnaire must be made public beforehand, and every effort should be made for the questions to be worded without any overt partisanship or ideology that would reduce trust.

    Conclusion: With great power…

    FIRE has for the last two decades been America’s premier defender of free speech and academic freedom on campus. Following Frederick Douglass’s wise dictum, “I would unite with anybody to do right and with nobody to do wrong,” we’ve worked with Democrats, Republicans, and everyone in between (and beyond) to advance free speech and open inquiry, and we’ve criticized them in turn whenever they’ve threatened these values.

    With that sense of both opportunity and caution, we would be heartened if NIH used its considerable power wisely in an effort to improve scientific integrity and academic freedom. But if wielded recklessly, that same considerable power threatens to do immense damage to science in the process. 

    We stand ready to advise if called upon, but integrity demands that we correct the record if we believe our data is being used for a purpose to which it isn’t suited.

    Source link

  • Students Explore STEM with Engineers

    Students Explore STEM with Engineers

    Middletown, PA – Phoenix Contact engineers head back into the classroom this week to teach sixth-grade science class at Middletown Area Middle School in Middletown, Pa. The classes are part of Phoenix Contact’s National Engineers Week celebration.

    Phoenix Contact has worked with the school every February since 2007. The engineers lead hands-on lessons that make science fun. The goal is to inspire young people to consider careers in science, technology, engineering, and math (STEM).

    The lessons include:

    • Building catapults
    • Racing cookie tins down ramps
    • Building an electric motor
    • Learning about static electricity with the Van de Graaff generator

    “Our engineering team created this outreach program many years ago, and the partnership with Middletown Area School District has stood the test of time,” said Patty Marrero, interim vice president of human relations at Phoenix Contact. “National Engineers Week is a special time for them to share their passion for technology with students. It’s also our chance to thank our engineers for the creativity and innovations that drive our company forward.”

    About Phoenix Contact

    Phoenix Contact is a global market leader based in Germany. Since 1923, Phoenix Contact has created products to connect, distribute, and control power and data flows. Our products are found in nearly all industrial settings, but we have a strong focus on the energy, infrastructure, process, factory automation, and e-mobility markets. Sustainability and responsibility guide every action we take, and we’re proud to work with our customers to empower a smart and sustainable world for future generations. Our global network includes 22,000 employees in 100+ countries. Phoenix Contact USA has headquarters near Harrisburg, Pa., and employs more than 1,100 people across the U.S.

    For more information about Phoenix Contact or its products, visit www.phoenixcontact.com, call technical service at 800-322-3225, or email [email protected].

    eSchool News Staff

    Source link

  • UNSW purchases six state-of-the-art aircraft for School of Aviation amid national pilot shortage

    UNSW purchases six state-of-the-art aircraft for School of Aviation amid national pilot shortage

    Flight instructor Arjun Jogia with one of his young trainees Ariane Fouracre. Picture: Richard Dobson

    As Australia stares down the barrel of a looming pilot shortage, more than 100 NSW university students are taking to the skies in brand new state-of-the-art training aircraft.


    Source link

  • Another way of thinking about the national assessment of people, culture, and environment

    Another way of thinking about the national assessment of people, culture, and environment

    There is a multi-directional relationship between research culture and research assessment.

    Poor research assessment can lead to poor research cultures. The Wellcome Trust survey in 2020 made this very clear.

    Assessing the wrong things (such as a narrow focus on publication indicators), or the right things in the wrong way (such as societal impact rankings based on bibliometrics) is having a catalogue of negative effects on the scholarly enterprise.

    Assessing the assessment

    In a similar way, too much research assessment can also lead to poor research cultures. Researchers are one of the most heavily assessed professions in the world. They are assessed for promotion, recruitment, probation, appraisal, tenure, grant proposals, fellowships, and output peer review. Their lives and work are constantly under scrutiny, creating competitive and high-stress environments.

    But there is also a logic (Campbell’s Law) that tells us that if we assess research culture it can lead to greater investment into improving it. And it is this logic that the UK Joint HE funding bodies have drawn on in their drive to increase the weighting given to the assessment of People, Culture & Environment in REF 2029. This makes perfect sense: given the evidence that positive and healthy research cultures are a thriving element of Research Excellence, it would be remiss of any Research Excellence Framework not to attempt to assess, and therefore incentivise them.

    The challenge we have comes back to my first two points. Even assessing the right things, but in the wrong way, can be counterproductive, as may increasing the volume of assessment. Given research culture is such a multi-faceted concept, the worry is that the assessment job will become so huge that it quickly becomes burdensome, thus having a negative impact on those research cultures we want to improve.

    It ain’t what you do, it’s the way that you do it

    Just as research culture is not so much about the research that you do but the way that you do it, so research culture assessment should concern itself not so much with the outcomes of that assessment but with the way the assessment takes place.

    This is really important to get right.

    I’ve argued before that research culture is a hygiene factor. Most dimensions of culture relate to standards that it’s critically important we all get right: enabling open research, dealing with misconduct, building community, supporting collaboration, and giving researchers the time to actually do research. These aren’t things for which we should offer gold stars but basic thresholds we all should meet. And to my mind they should be assessed as such.

    Indeed this is exactly how the REF assessed open research in 2021 (and will do so again in 2029). They set an expectation that 95 per cent of qualifying outputs should be open access, and if you failed to hit the threshold, excess closed outputs were simply unclassified. End of. There were no GPAs for open access.

    In the tender for the PCE indicator project, the nature of research culture as a hygiene factor was recognised by proposing “barrier to entry” measures. The expectation seemed to be that for some research culture elements institutions would be expected to meet a certain threshold, and if they failed they would be ineligible to even submit to REF.

    Better use of codes of practice

    This proposal did not make it into the current PCE assessment pilot. However, the REF already has a “barrier to entry” mechanism, of course, which is the completion of an acceptable REF Code of Practice (CoP).

    An institution’s REF CoP is about how they propose to deliver their REF, not how they deliver their research (although there are obvious crossovers). And REF have distinguished between the two in their latest CoP Policy module governing the writing of these codes.

    But given that REF Codes of Practice are now supposed to be ongoing, living documents, I don’t see why they shouldn’t take the form of more research-focussed (rather than REF-focussed) codes. It certainly wouldn’t harm research culture if all research performing organisations had a thorough research code of practice (most do of course) and one that covers a uniform range of topics that we all agree are critical to good research culture. This could be a step beyond the current Terms & Conditions associated with QR funding in England. And it would be a means of incentivising positive research cultures without ‘grading’ them. With your REF CoP, it’s pass or fail. And if you don’t pass first time, you get another attempt.

    Enhanced use of culture and environment data

The other way of assessing culture to incentivise behaviours, without it leading to any particular rating or ranking, is to simply start collecting & surfacing data on things we care about. For example, the requirement to share gender pay gap data and to report misconduct cases has focussed institutional minds on those things without there being any associated assessment mechanism. If you check out the Higher Education Statistics Agency (HESA) data on the proportion of male to female professors, in most UK institutions you can see the ratio heading in the right direction year on year. This is the power of sharing data, even when there’s no gold or glory on offer for doing so.

And of course, the REF already has a mechanism to share data that informs, but does not directly make, an assessment, in the form of ‘Environment Data’. In REF 2021, Section 4 of an institution’s submission was essentially completed for them by the REF team by extracting, from the HESA data, the number of doctoral degrees awarded (4a) and the volume of research income (4b), and, from the Research Councils, the volume of research income in kind (4c).

    This data was provided to add context to environment assessments, but not to replace them. And it would seem entirely sensible to me that we identify a range of additional data – such as the gender & ethnicity of research-performing staff groups at various grades – to better contextualise the assessment of PCE, and to get matters other than the volume of research funding up the agendas of senior university committees.

    Context-sensitive research culture assessment

    That is not to say that Codes of Practice and data sharing should be the only means of incentivising research culture of course. Culture was a significant element of REF Environment statements in 2021, and we shouldn’t row back on it now. Indeed, given that healthy research cultures are an integral part of research excellence, it would be remiss not to allocate some credit to those who do this well.

    Of course there are significant challenges to making such assessments robust and fair in the current climate. The first of these is the complex nature of research culture – and the fact that no framework is going to cover every aspect that might matter to individual institutions. Placing boundaries around what counts as research culture could mean institutions cease working on agendas that are important to them, because they ostensibly don’t matter to REF.

    The second challenge is the severe and uncertain financial constraints currently faced by the majority of UK HEIs. Making the case for a happy and collaborative workforce when half are facing redundancy is a tough ask. A related issue here is the hugely varying levels of research (culture) capital across the sector as I’ve argued before. Those in receipt of a £1 million ‘Enhancing Research Culture’ fund from Research England, are likely to make a much better showing than those doing research culture on a shoe-string.

    The third is that we are already half-way through this assessment period and we’re only expected to get the final guidance in 2026 – two years prior to submission. And given the financial challenges outlined above, this is going to make this new element of our submission especially difficult. It was partly for this reason that some early work to consider the assessment of research culture was clear that this should celebrate the ‘journey travelled’, rather than a ‘destination achieved’.

For this reason, to my mind, the only things we can reasonably expect all HEIs to do right now with regard to research culture are to:

    • Identify the strengths and challenges inherent within your existing research culture;
    • Develop a strategy and action plan(s) by which to celebrate those strengths and address those challenges;
    • Agree a set of measures by which to monitor your progress against your research culture ambitions. These could be inspired by some of the suggestions resulting from the Vitae & Technopolis PCE workshops & Pilot exercise;
    • Describe your progress against those ambitions and measures. This could be demonstrated both qualitatively and quantitatively, through data and narratives.

    Once again, there is an existing REF assessment mechanism open to us here, and that is the use of the case study. We assess research impact by effectively asking HEIs to tell us their best stories – I don’t see why we shouldn’t make the same ask of PCE, at least for this REF.

    Stepping stone REF

The UK joint funding bodies have made a bold and sector-leading move to focus research performing organisations’ attention on the people and cultures that make for world-leading research endeavours through the mechanism of assessment. Given the challenges we face as a society, ensuring we attract, train, and retain high-quality research talent is critical to our success. However, the assessment of research culture has the power to make things either better or worse: to incentivise positive research cultures, or to increase burdensome and competitive cultures that don’t tackle the issues that really matter to institutions.

To my mind, given the broad range of topics being worked on by institutions in the name of improving research culture, where we are in the REF cycle, and the financial constraints facing the sector, we might benefit from a shift in the mechanisms proposed to assess research culture in 2029, and from seeing this as a stepping stone REF.

Making better use of existing mechanisms such as Codes of Practice and Environment and Culture data would assess the “hygiene factor” elements of culture without unhelpfully attaching star ratings to them. Ratings would be better applied to the efforts institutions take to understand, plan, monitor, and demonstrate progress against their own, mission-driven research culture ambitions. This is where the real work is, and where real differentiation between institutions can be made when contextually assessed. Then, in 2036, when we can hope that the sector will be in a financially more stable place, and with ten years of research culture improvement time behind us, we can assess institutions against their own ambitions, as to whether they are starting to move the dial on this important work.

    Source link

  • From Small-Town Roots to National Honor: SC Native Receives State’s Highest Award

    From Small-Town Roots to National Honor: SC Native Receives State’s Highest Award

From the small town of Lyman, South Carolina, Dr. James L. Moore III’s journey to success is one he attributes to the steadfast support of his mother and the historical trailblazers whose influence shaped his path to distinction.

    On Saturday, Jan. 25, Moore—a Distinguished Professor of Urban Education at The Ohio State University (OSU) and executive director of the Todd Anthony Bell National Resource Center—was awarded the Order of the Palmetto—South Carolina’s highest civilian honor established in 1971. The prestigious award is presented by the governor to individuals who have demonstrated extraordinary lifetime achievement, service, and contributions of national or statewide significance.

    “To be honored and to receive the highest honor to a civilian of South Carolina is so humbling,” said Moore in an interview with Diverse. “Service to humanity is the hallmark of philosophy, and in many ways, it shaped who I am and what I’m about in my day to day. All that I am and that I hope to be, has been shaped by my experience growing up in South Carolina.”

    Moore follows in the footsteps of other legendary leaders from South Carolina who’ve received the honor, many of whom broke down barriers throughout history, paving the way for him and others to succeed. Moore said that it’s not lost on him that he’s in the tradition of a long line of South Carolina humanitarians.

    “The state has a complex history, some of which is painful to reflect on, but it is where my family, some of whom arrived as enslaved Africans, created community from the most difficult of circumstances,” he said. “They built opportunities for people like me. South Carolina is special to me, not only for its rich and sometimes painful history, but because 10% to 15% of all Black Americans can trace their roots here.” 

    The state, he said, has produced a legacy of excellence, from singer James Brown and tennis great Althea Gibson to educator Mary McLeod Bethune. 

    “I just want to make sure that I forever acknowledge and recognize the contributions and the giants that I stand on their shoulders,” said Moore, who pointed to the late Dr. Benjamin Elijah Mays—the former president of Morehouse College—as a model for educational and humanitarian excellence.

    A nationally recognized education expert and leader, Moore has had a distinguished career in higher education and has been applauded for his work promoting educational excellence and access for all. Throughout his fabulous career, he has won numerous international and national accolades. 

    His research spans multiple disciplines, including school counseling, urban education, and STEM education. He has co-authored seven books and more than 160 publications, secured nearly $40 million in funding, and delivered more than 200 scholarly presentations globally. Moore’s contributions to education have earned him recognition, including being named one of Education Week’s 200 most influential scholars in the U.S. since 2018.

    Dr. Jerlando F.L. Jackson, Dean of the College of Education and Foundation Professor of Education at Michigan State University, praised Moore’s impact, citing the ripple effect his leadership has created within the American education system.

“Dr. Moore’s influence extends far beyond his own accomplishments,” said Jackson, who has known Moore since their days as graduate students and has collaborated with him on a number of initiatives and projects, including the International Colloquium on Black Males in Education. “Through his leadership, he is empowering educators, policymakers, and community leaders to reimagine what is possible with South Carolina in mind,” Jackson said.

    Moore’s focus on education access, preparation, innovation, and opportunities “has not only improved outcomes for today’s students but has also laid the foundation for a brighter future for generations to come,” Jackson added. “He is the kind of leader who sees potential in everyone, and he works tirelessly to help others realize their dreams, regardless of their backgrounds. Whether mentoring a young scholar or speaking at a community event, Dr. Moore connects with people in ways that are deeply inspiring and transformative.”

Moore’s work has focused on closing opportunity gaps, increasing access to quality education, and addressing disparities that disproportionately affect educationally vulnerable student populations. Through his research and leadership, Jackson said, Moore has not only informed policy but also directly influenced educational practices from which all have benefited, including South Carolina.

    Dr. Eric Tucker, President & CEO of The Study Group, agrees.

    “His tireless dedication to inclusive excellence proves that one visionary can unite and uplift entire communities, sparking transformative educational change at the secondary and postsecondary levels,” said Tucker, who lauded Moore’s efforts to help undergraduate scholars secure prestigious fellowships, including the Rhodes and Truman Scholarships. As executive director of the Todd Anthony Bell National Resource Center on the African American Male, he reimagined OSU’s Early Arrival Program, offering mentorship and leadership opportunities to support young Black men and boys in their pursuit of higher education.

    “From a small-town upbringing to a national and international stage, Dr. Moore has used his expertise to bring fresh opportunities and shape educational transformation across the United States and other parts of the globe,” said Tucker. “His leadership and forward-thinking approaches demonstrate how determination can unite communities and open new doors for students in all zip codes, regions, and jurisdictions,” he added.

    And no matter how many times you ask Moore about his own influences and success, he never forgets his family and the village who raised him. As one of three siblings, he remembers his late mother Edna, whose sacrifices and love shaped her children’s lives in South Carolina.

“My mother did everything for her three kids, and my mother was an inspiration to not only me, but for those who knew her,” Moore said. “And even though she’s not here with me, she lives inside me, and she always told me that ‘family lives inside of you, and everywhere you go, son, take family with you.’ So I can hear her. She was the best coach I ever had. This is for her,” he said.

    Source link

  • National Advisory Committee on Institutional Quality and Integrity Meets February 19-20. (US Department of Education)

    National Advisory Committee on Institutional Quality and Integrity Meets February 19-20. (US Department of Education)


    Education Department

    Hearings, Meetings, Proceedings, etc.:

    National Advisory Committee on Institutional Quality and Integrity

    FR Document: 2025-01459
Citation: 90 FR 7677, pages 7677-7679 (3 pages)
    Abstract: This notice sets forth the agenda, time, and instructions to access or participate in the February 19-20, 2025 meeting of NACIQI, and provides information to members of the public regarding the meeting, including requesting to make written or oral comments. Committee members will meet in-person while accrediting agency representatives and public attendees will participate virtually.

    Source link

  • Q&A with retiring National Student Clearinghouse CEO

    Q&A with retiring National Student Clearinghouse CEO

    Ricardo Torres, the CEO of the National Student Clearinghouse, is retiring next month after 17 years at the helm. His last few weeks on the job have not been quiet.

    On Jan. 13, the clearinghouse’s research team announced they had found a significant error in their October enrollment report: Instead of freshman enrollment falling by 5 percent, it actually seemed to have increased; the clearinghouse is releasing its more complete enrollment report tomorrow. In the meantime, researchers, college officials and policymakers are re-evaluating their understanding of how 2024’s marquee events, like the bungled FAFSA rollout, influenced enrollment; some are questioning their reliance on clearinghouse research.

    It’s come as a difficult setback at the end of Torres’s tenure. He established the research center in 2010, two years after becoming CEO, and helped guide it to prominence as one of the most widely used and trusted sources of postsecondary student data.

The clearinghouse only began releasing the preliminary enrollment report, called the “Stay Informed” report, in 2020 as a kind of “emergency measure” to gauge the pandemic’s impact on enrollment, Torres told Inside Higher Ed. The methodological error in October’s report, which the research team discovered this month, had been present in every iteration since. And a spokesperson for the clearinghouse said that, after a review of the methodology, their “Transfer and Progress” report, which they’ve released every February since 2023, was also found to have been affected by the miscounting error; the 2025 report will be corrected, but the last two were skewed.

    Torres said the clearinghouse is exploring discontinuing the “Stay Informed” report entirely.

    Such a consequential snafu would put a damper on anyone’s retirement and threaten to tarnish their legacy. But Torres is used to a little turbulence: He oversaw the clearinghouse through a crucial period of transformation, from an arm of the student lending sector to a research powerhouse. He said the pressure on higher ed researchers is only going to get more intense in the years ahead, given the surging demand for enrollment and outcomes data from anxious college leaders and ambitious lawmakers. Transparency and integrity, he cautioned, will be paramount.

    His conversation with Inside Higher Ed, edited for length and clarity, is below.

    Q: You’ve led the clearinghouse since 2008, when higher ed was a very different sector. How does it feel to be leaving?

    A: It’s a bit bittersweet, but I feel like we’ve accomplished something during my tenure that can be built upon. I came into the job not really knowing about higher ed; it was a small company, a $13 million operation serving the student lending industry. We were designed to support their fundamental need to understand who’s enrolled and who isn’t, for the purposes of monitoring student loans. As a matter of fact, the original name of the organization was the National Student Loan Clearinghouse. When you think about what happened when things began to evolve and opportunities began to present themselves, we’ve done a lot.

    Q: Tell me more about how the organization has changed since the days of the Student Loan Clearinghouse.

    A: Frankly, the role and purpose of the clearinghouse and its main activities have not changed in about 15 years. The need was to have a trusted, centralized location where schools could send their information that then could be used to validate loan status based on enrollments. The process, prior to the clearinghouse, was loaded with paperwork. The registrars that are out there now get this almost PTSD effect when they go back in time before the clearinghouse. If a student was enrolled in School A, transferred to School B and had a loan, by the time everybody figured out that you were enrolled someplace else, you were in default on your loan. We were set up to fix that problem.

    What made our database unique at that time was that when a school sent us enrollment data, they had to send all of the learners because they actually didn’t know who had a previous loan and who didn’t. That allowed us to build a holistic, comprehensive view of the whole lending environment. So we began experimenting with what else we could do with the data.

    Our first observation was how great a need there was for this data. Policy formulation at almost every level—federal, state, regional—for improving learner outcomes lacked the real-time data to figure out what was going on. Still, democratizing the data alone was insufficient because you need to convert that insight into action of some kind that is meaningful. What I found as I was meeting schools and individuals was that the ability and the skill sets required to convert data to action were mostly available in the wealthiest institutions. They had all the analysts in the world to figure out what the hell was going on, and the small publics were just scraping by. That was the second observation, the inequity.

    The third came around 2009 to 2012, when there was an extensive effort to make data an important part of decision-making across the country. The side effect of that, though, was that not all the data sets were created equal, which made answering questions about what works and what doesn’t that much more difficult.

    The fourth observation, and I think it’s still very relevant today, is that the majority of our postsecondary constituencies are struggling to work with the increasing demands they’re getting from regulators: from the feds, from the states, from their accreditors, the demand for reports is increasing. The demand for feedback is increasing. Your big institutions, your flagships, might see this as a pain in the neck, but I would suggest that your smaller publics and smaller private schools are asking, “Oh my gosh, how are we even going to do this?” Our data helps.

    Q: What was the clearinghouse doing differently in terms of data collection?

    A: From the postsecondary standpoint, our first set of reports that we released in 2011 focused on two types of learners that at most were anecdotally referred to: transfer students and part-time students. The fact that we included part-time students, which [the Integrated Postsecondary Education Data System] did not, was a huge change. And our first completion report, I believe, said that over 50 percent of baccalaureate recipients had some community college in their background. That was eye-popping for the country to see and really catalyzed a lot of thinking about transfer pathways.

    We also helped spur the rise of these third-party academic-oriented organizations like Lumina and enabled them to help learners by using our data. One of our obligations as a data aggregator was to find ways to make this data useful for the field, and I think we accomplished that. Now, of course, demand is rising with artificial intelligence; people want to do more. We understand that, but we also think we have a huge responsibility as a data custodian to do that responsibly. People who work with us realize how seriously we take that custodial relationship with the data. That has been one of the hallmarks of our tenure as an organization.

    Q: Speaking of custodial responsibility, people are questioning the clearinghouse’s research credibility after last week’s revelation of the data error in your preliminary enrollment report. Are you worried it will undo the years of trust building you just described? How do you take accountability?

    A: No. 1: The data itself, which we receive from institutions, is reliable, current and accurate. We make best efforts to ensure that it accurately represents what the institutions have within their own systems before any data is merged into the clearinghouse data system.

    When we first formed the Research Center, we had to show how you can get from the IPEDS number to the clearinghouse number and show people our data was something they could count on. We spent 15 years building this reputation. The key to any research-related error like this is, first, you have to take ownership of it and hold yourself accountable. As soon as I found out about this we were already making moves to [make it public]—we’re talking 48 hours. That’s the first step in maintaining trust.

    That being said, there’s an element of risk built into this work. Part of what the clearinghouse brings to the table is the ability to responsibly advance the dialogue of what’s happening in education and student pathways. There are things that are happening out there, such as students stopping out and coming back many years later, that basically defy conventional wisdom. And so the risk in all of this is that you shy away from that work and decide to stick with the knitting. But your obligation is, if you’re going to report those things, to be very transparent. As long as we can thread that needle, I think the clearinghouse will play an important role in helping to advance the dialogue.

    We’re taking this very seriously and understand the importance of the integrity of our reports considering how the field is dependent on the information we provide. Frankly, one of the things we’re going to take a look at is, what is the need for the preliminary report at the end of the day? Or do we need to pair it with more analysis—is it just enough to say that total enrollments are up X or down Y?

    Q: Are you saying you may discontinue the preliminary report entirely?

    A: That’s certainly an option. I think we need to assess the field’s need for an early report—what questions are we trying to answer and why is it important that those questions be answered by a certain time? I’ll be honest; this is the first time something like this has happened, where it’s been that dramatic. That’s where the introspection starts, saying, “Well, this was working before; what the heck happened?”

    When we released the first [preliminary enrollment] report [in 2020], we thought it’d be a one-time thing. Now, we’ve issued other reports that we thought were going to be one-time and ended up being a really big deal, like “Some College, No Credential.” We’re going to continue to look for opportunities to provide those types of insights. But I think any research entity needs to take a look at what you’re producing to make sure there’s still a need or a demand, or maybe what you’re providing needs to pivot slightly. That’s a process that’s going to be undertaken over the next few months as we evaluate this report and other reports we do.

    Q: How did this happen, exactly? Have you found the source of the imputation error?

    A: The research team is looking into it. In order to ensure for this particular report that we don’t extrapolate this to a whole bunch of other things, you just need to make sure that you know you’ve got your bases covered analytically.

    There was an error in how we imputed a particular category of dual-enrolled students versus freshmen. But if you look at the report, the total number of learners wasn’t impacted by that. These preliminary reports were designed to meet a need after COVID, to understand what the impact was going to be. We basically designed a report on an emergency basis, and by default, when you don’t have complete information, there’s imputation. There’s been a lot of pressure on getting the preliminary fall report out. That being said, you learn your lesson—you gotta own it and then you keep going. This was very unfortunate, and you can imagine the amount of soul searching to ensure that this never happens again.

    Q: Do you think demand for more postsecondary data is driving some irresponsible analytic practices?

    A: I can tell you that new types of demands are going to be put out there on student success data, looking at nondegree credentials, looking at microcredentials. And there’s going to be a lot of spitballing. Just look at how people are trying to calculate ROI right now; I could talk for hours about the ins and outs of ROI methodology. For example, if a graduate makes $80,000 after graduating but transferred first from a community college, what kind of attribution does the community college get for that salary outcome versus the four-year school? Hell, it could be due to a third-party boot camp done after earning a degree. Research on these topics is going to be full of outstanding questions.

    Q: What comes next for the clearinghouse’s research after you leave?

    A: I’m excited about where it’s going. I’m very excited about how artificial intelligence can be appropriately leveraged, though I think we’re still trying to figure out how to do that. I can only hope that the clearinghouse will continue its journey of support. Because while we don’t directly impact learner trajectories, we can create the tools that help people who support learners every year impact those trajectories. Looking back on my time here, that’s what I’m most proud of.


  • How five colleges recognize the National Day of Racial Healing

    How five colleges recognize the National Day of Racial Healing

    Racial healing circles, or opportunities for community members to share stories and connect on a human level, are common activities for the National Day of Racial Healing. This year is the ninth observance of the holiday.


    Over the past two decades, higher education has grown exceptionally diverse, enrolling students from all backgrounds and offering opportunities for education and career development for historically underserved populations.

    This diversification of the students, staff and faculty who make up higher education also offers opportunities for institutions to promote justice and racial healing through intentional education and programming. One annual marker of this work is the National Day of Racial Healing.

    The background: The National Day of Racial Healing was established by the W. K. Kellogg Foundation in 2017 as part of the Truth, Racial Healing and Transformation (TRHT) initiative to bring people together and inspire action to build a more just and equitable world.

    The day falls on the Tuesday after Martin Luther King Jr. Day and is marked by events and activities that promote racial healing. Racial healing, as the foundation defines it on the initiative’s website, is “the experience shared by people when they speak openly and hear the truth about past wrongs and the negative impacts created by individual and systemic racism.”

    On campus: The American Association of Colleges and Universities encourages institutions to “engage in activities, events or strategies to promote healing and foster engagement around the issues of racism, bias, inequity and injustice in our society,” according to a Dec. 18 press release. AAC&U partners with 72 institutions to establish TRHT Campus Centers, with the goal of developing 150 self-sustaining community-integrated centers.

    Some of the ways institutions can do this are by organizing activities, inviting faculty to connect course material to racial healing during that week, coordinating events or sharing stories on social media, according to AAC&U.

    Here’s how colleges and universities, many of which host TRHT Campus Centers, plan to honor the National Day of Racial Healing.

    • Baldwin Wallace University in Ohio will host two Jacket Circles for students to participate in storytelling and deep listening to build empathy and compassion. The University of Louisville, similarly, will host Cardinal Connection Circles.
    • Emory University in Georgia will hold a three-day event, beginning on Jan. 21, that includes a keynote, lunch-and-learn panel discussion, racial healing circles, and a dinner experience.
    • Binghamton University, part of the State University of New York system, will host its first National Day of Racial Healing this year, which includes healing circles, roundtable discussions and art-based initiatives.
    • The TRHT Center at Northern Virginia Community College will partner with the Fairfax County Board of Supervisors to issue a formal proclamation in a public forum, acknowledging the importance of the day, a tradition for the two groups.
    • The University of Hawai‘i at Mānoa will take a pause today to recognize the overthrow of the Hawaiian kingdom, as well as the legacy of Martin Luther King Jr. and the National Day of Racial Healing. The event, Hawai‘i ku‘u home aloha, which means “Hawai‘i my beloved home,” honors the past, present and future of the islands.



  • The value of having a National Learning Framework incorporating school, college and higher education

    The value of having a National Learning Framework incorporating school, college and higher education

    By Michelle Morgan, Dean of Students at the University of East London.

    In the UK, we have a well-established education system across different levels of learning, including primary, secondary, further and higher education. Each level has a comprehensive structure that is regulated and monitored, alongside extensive published information. However, at present, these levels generally function in isolation.

    The Government’s recent Curriculum and Assessment Review has asked for suggestions to improve the curriculum and assessment system for the 16-19 study phase. This phase covers a range of qualifications, including GCSEs, A-levels, BTECs, T Levels and apprenticeships. The main purpose of the Review is to

    ensure that the curriculum balances ambition, relevance, flexibility and inclusivity for all children and young people.

    However, as part of this review, could it also look at how the different levels of study build on one another? Could the sectors come together and use their extensive knowledge for their level and type of study, to create an integrated road map across secondary, further and higher education where skills, knowledge, competencies and attributes (and how they translate into employability skills) are clearly articulated? We could call this a National Learning Framework. It could align with the learning gain programme led by the Office for Students (OfS).

    The benefits of a National Learning Framework

    There would be a number of benefits to adopting this approach:

    • It would provide a clear resource for all stakeholders, including students and staff in educational organisations, policymakers, Government bodies, regulators and quality standards bodies (such as Ofsted, the Office for Students and QAA), and business and industry. It would also help manage public perceptions of higher education.
    • This approach would join up the regulatory bodies responsible for the different sectors. It would help create a collaborative, consistent learning and teaching approach, by setting and explaining the aims and objectives of the various types of education providers.
    • It would articulate the differences in learning, teaching and assessment approaches across the array of secondary and further education qualifications that are available and used as progression qualifications into higher education. For example, A-levels are mainly taught in schools and assessed by end-of-year exams, whereas ‘other’ qualifications such as BTEC, Access and other Level 3 qualifications taught in college have more diverse assessments.
    • It would help universities more effectively bridge the learning and experience transition into higher education across all entry qualifications. As research highlights (see also this NEON report), students from the ‘other’ qualification groups are often from disadvantaged backgrounds, which can affect retention, progression and success at university, and they are more likely to withdraw than those with A-levels. However, as the recent report Prior learning experience, study expectations of A-Level and BTEC students on entry to university highlights, it is not the BTEC qualification per se that is the problem but the transition support into university study that needs improvement.
    • It would also address assumptions about how learning occurs at each level of study. For example, because young people use media technology to live and socialise, it is often assumed that the same is true of how they learn. In reality, access to teaching and learning material, especially in schools, remains largely traditional: the main sources of information are course textbooks and handwritten notes, although since the Covid-19 pandemic the use of coursework submission and basic virtual learning environments (VLEs) has been on the increase.
    • If we clearly communicate to students the learning that occurs at each level of their study, and the skills, knowledge, competencies and attributes they should obtain as a result, this can boost their confidence and improve their employability, because they can better articulate what they have achieved.

    What could an integrated learning approach across all levels of study via a National Learning Framework look like?

    The Employability Skills Pyramid, which I created with colleagues at a previous university for Levels 4 to 7 in higher education, could be extended to include Levels 2/3 and apprenticeships to create a National Learning Framework. The language used to construct the knowledge, skills and attributes grids used by course leaders purposely integrated the QAA statements for degrees (see Appendix 1 in the accompanying document).

    By adding Levels 2 and 3, including apprenticeship qualifications and articulating the differences between each qualification, the education sector could understand what is achieved within and between different levels of study and qualifications (see Figure 1).

    Key stakeholders could come together from across all levels of study to map out and agree on the language to adopt for consistency across the various levels and qualifications.

    Figure 1: Integrated National Learning Framework across Secondary, Further and Higher Education

    Alongside the National Learning Framework, a common transition approach drawing on the same definitions across all levels of study would be valuable. Students and staff could gain the understanding required to foster successful transitions between phases. An example is provided below.

    Supporting transitions across the National Learning Framework using similar terminology

    The Student Experience Transitions (SET) Model was designed to support courses of various lengths and to make the different stages of a course clearer. Although originally developed for higher education, its principles apply across all levels of study (see Figure 2). Students progress through each stage, and each stage has its own general rules of engagement. The definitions of each stage, and the mapping of the stages by length of course, are in Appendix 2 of the accompanying document.

    Figure 2: The Student Experience Transitions Model. Source: Morgan 2012

    The benefits for students are consistency and clarity of expectations across their course. At each key transition stage, students would understand what is expected of them by reflecting on what they have previously learnt, how the coming year builds on what they already know and what they will achieve at the end.

    Taking the opportunity to integrate

    The Curriculum Review provides a real opportunity to join up each level of study and provide clarity for all stakeholders. Importantly, a National Learning Framework could help deliver the Government’s aim of balancing ambition, relevance, flexibility and inclusivity for all learners, regardless of level of study.

