Author: admin

  • Best Project Guide for Uttaranchal University projects

    Best Project Guide for Uttaranchal University projects

    MBA Project Guide provides complete project guidance and MBA Project Reports for Logistics & Supply Chain Management, Business Analytics, Digital Marketing, Banking, and Finance. This includes assistance with the project synopsis, internship reports, viva support, and guidance until the project is accepted in accordance with the university guidelines.

    Source link

  • Connect more: creating the conditions for a more resilient and sustainable HE sector in England

    Connect more: creating the conditions for a more resilient and sustainable HE sector in England

    Despite it being the season of cheer, higher education in England isn’t facing the merriest of Christmases.

    Notwithstanding the recent inflationary uplift to the undergraduate fee cap, the financial headwinds in higher education remain extremely challenging. Somehow, in the spring/summer of next year, the Secretary of State for Education is going to have to set out not only what the government expects from the sector in terms of meeting the core priority areas of access, quality and contribution to economic growth, but also how it will deliver on its promise to put the sector on a long-term sustainable financial footing.

    The overall structure of the sector in terms of the total number of providers of higher education and their relationships to each other might arguably be considered a second-order question, subject to the specifics of the government’s plans. But thinking that way would be a mistake.

    The cusp of change

    There are real and present concerns right now about the short-term financial stability of a number of providers, and a continually increasing risk that a provider exits the market in an unplanned way through liquidation – making the continued absence of a regime for administering distressed providers ever more stark.

    But on a larger scale, some believe the sector is on the cusp of entering a new phase of higher education: a much more connected and networked system, tied more closely into regional development agendas, and more oriented to the collective public value that higher education creates. If so, thinking needs to start now about how to enable providers to take part in the strategic discussions and scenario plans that can help them imagine that kind of future, and to develop the skills to operate in the new ways that a different HE landscape could require. It is these discussions that need to inform the development of the HE strategy.

    The Office for Students (OfS) has signalled that it considers more structural collaboration to be likely as a response to financial challenge:

    Where necessary, providers will need to prepare for, and deliver in practice, the transformation needed to address the challenges they face. In some cases, this is likely to include looking externally for solutions to secure their financial future, including working with other organisations to reduce costs or identifying potential merger partners or other structural changes.

    Financial challenge may be the backdrop to some of this thinking; it should not be the sole rationale. Looking ahead, the sector would be planning change even if it were in good financial health: preparing for demographic shifts and the challenge of lifelong learning, the rise of AI, and the volatile context for international education and research. Strategic collaboration is rarely an end in itself – it’s nice to work together, but ultimately there has to be a clear strategic rationale: that two or more providers can realise greater value, and hedge more readily against future risks, than each could working individually.

    There’s no roadmap

    In the autumn of 2024, Wonkhe and Mills & Reeve convened a number of private and confidential conversations with heads of institution, stakeholders from the sector’s representative bodies, mission groups, and regional networks, Board chairs, and a lender to the sector. We wanted to test the sector’s appetite for structural change: in the first instance, providers’ appetite for stepping in to support another provider that is struggling, but also attitudes to merger and other forms of strategic collaboration short of full merger. Our report, Connect more: creating the conditions for a more resilient and sustainable higher education system in England, sets out our full findings and recommendations.

    There is a startling dearth of law and policy around structural collaboration for HE; some issues, such as the VAT rules on shared services, are well established, while others are more speculative. What would the regulatory approach be to a “federated” group of HE providers? What are merging providers’ legal responsibilities to students? What data and evidence might providers draw on to inform their planning?

    We found a very similar set of concerns, whether we were discussing a scenario in which a provider is approached by DfE or OfS to acquire another distressed provider, or the wider strategic possibilities afforded by structural collaboration.

    All felt strongly that the driving rationale behind any such structural change – which takes considerable time and effort to achieve – should be strategic, rather than purely financial. Heads of institution could readily imagine the possibilities for widening access to HE, protecting at-risk subjects, boosting research opportunities, and generally realising value through the pooling of expertise, infrastructure and procurement power. The regional devolution and regional economic growth agendas were widely considered to be valuable enablers for realising the opportunities of a more networked approach.

    But the hurdles to overcome are also significant. Interviewees gave examples of failed collaboration attempts in other sectors and the negative cultural perceptions attached to measures like mergers. There was nervousness about competition law and, more specifically, OfS’s attitude to structural change, the implications for key institutional performance metrics, and a general sense that no quarter would be given in accommodating a period of adjustment following significant structural change. The risks involved were very obvious and immediate, while the benefits were more speculative and would take time to realise.

    Creating conditions

    We have arrived at two broad conclusions. The first is that government and OfS, in tandem with other interested parties such as the Competition and Markets Authority, could adopt a number of measures to reduce the risks for providers entering into discussions about strategic collaboration.

    This would not involve steering particular providers or taking a formal view about what forms of collaboration will best serve public policy ends, but would signal a broadly supportive and facilitative attitude on the part of government and the regulator. As one head of institution observed, a positive agenda around the sector’s collaborative activity would be much more galvanising than the continued focus on financial distress.

    The second is that institutions themselves may need to consider their approach to these challenges and think through whether they have the right mix of skills and knowledge within the executive team and on the Board to do scenario planning and strategic thinking around structural change.

    In the last decade, the goal for Boards has been all about making their institution stronger and more competitive. While that core purpose hasn’t gone away, it could be time to temper it with closer attention to the ways that working in a more collective way could help higher education prepare itself for whatever the future throws at it.


    This article is published in association with Mills & Reeve. View and download Connect more: creating the conditions for a more resilient and sustainable higher education system in England here.

    Source link

  • Higher education in England needs a special administration regime

    Higher education in England needs a special administration regime

    Extra government funding for the higher education sector in England means the debate about the prospect of an HE provider facing insolvency and a special administration regime has gone away, right?

    Unfortunately not. There is no additional government funding; in fact, the only additional financial support facilitated by the new Labour government so far is an increase in tuition fees for the next academic year, for those students to whom universities can apply it. The tuition cost per student is estimated to be in excess of £14K per year, so the funding gap has not been closed. Add in increased National Insurance contributions, and many HE providers will find themselves back where they are right now.

    It is a problem that there is no viable insolvency process for universities. But a special administration regime is not solely about “universities going bust.” In fact, such a regime, based on the existing FE special administration legislation, is much more about providing legal clarity for providers, stakeholders and students, than it is about an insolvency process for universities.

    Managing insolvency and market exit

    The vast majority of HE providers are not companies. This means that there is a lack of clarity as to whether the current Companies and Insolvency legislation applies to those providers. It also means that they cannot avail themselves of many of the insolvency processes that companies can, namely administration, company voluntary arrangements and voluntary liquidation. It is debatable whether they can propose a restructuring plan or be wound up by the court, but a fixed charge holder can appoint receivers over assets.

    Of these processes, the one most likely to assist a provider is administration, as it allows insolvency practitioners to trade an entity to maximise recoveries for creditors, usually through a business and asset sale.

    At best, therefore, an HE provider might be able to be wound up by the court or have receivers appointed over its buildings. Neither of these two processes allows continued trading. Unlike administration, neither provides moratorium protection against creditor enforcement either. They are not, therefore, conducive to a distressed merger, teach-out or transfer of students on an orderly basis.

    Whilst it is unlikely that special administration would enable the survival of an institution, due to adverse PR in the market, it would provide a structure for a more orderly market exit that does not currently exist for most providers.

    Protections for lenders

    In addition to there being no viable insolvency process for the majority of HE providers, there is also no viable enforcement route for secured lenders. That is a bad thing because if secured lenders have no route to recovering their money, then they are not going to be incentivised to lend more into the sector.

    If government funding is insufficient to plug funding gaps, providers will need alternative sources of finance. The most logical starting point is to ask their existing lenders. Yes, giving lenders more enforcement rights could lead to more enforcement, but the high street lenders active in the sector are broadly supportive of it, and giving lenders the right to do something is empowering: it does not necessarily mean that they will action that right.

    Lenders are not courting the negative press that would be generated by enforcing against a provider and most probably forcing a disorderly market exit. They are however looking for a clearer line to recovery, which, in turn, will hopefully result in a clearer line to funding for providers.

    Protections for students

    Students are obviously what HE providers are all about, but, if you are short of sleep and scour the Companies and Insolvency legislation, you will find no mention of them. If an HE provider gets into financial distress, then our advice is that the trustees should act in the best interests of all creditors. Students may well be creditors in respect of claims relating to potential termination of courses and/or having to move to another provider, potentially missing a year and waiting longer to enter the job market.

    However, the duty is to all creditors, not just some, and under the insolvency legislation, students have no better protection than any other creditor. Special administration would change that. The regime in the FE sector specifically provides for a predominant duty to act in the best interest of students and would enable the trustees to put students at the forefront of their minds in a time of financial distress.

    A special administration regime would therefore help trustees focus on the interest of students in a financially distressed situation, aligning them with the purposes of the OfS and charitable objects, where relevant.

    Protections for trustees

    Lastly, and probably most forcefully, a special administration regime would assist trustees of an HE provider in navigating a path for their institution in financial distress. As touched on above, it is not clear, for the vast majority of HE providers, whether the Companies and Insolvency legislation applies.

    It is possible that a university could be wound up by the court as an unregistered company. If it were, then the Companies and Insolvency legislation would apply. In those circumstances, the trustees could be personally liable if they fail to act in the best interest of creditors and/or do not have a reasonable belief that the HE provider could avoid an insolvency process.

    Joining a meeting of trustees to tell them that they could be personally liable, but it is not legally clear, is a very unsatisfactory experience; trust me, this is not a message they want to hear from their advisors.

    A special administration regime, applying the Companies and Insolvency legislation to all HE providers, regardless of their constitution or whether they are incorporated, would allow trustees to have a much clearer idea of the risks that they are taking and the approach that they should follow to protect stakeholders.

    In the event that a special administration regime were to be brought in, we would hope it would not need to be applied to a market exit situation. Its real value, however, is in bringing greater legal clarity for lenders and trustees, and more protection for students, in the current financial circumstances that HE providers find themselves in.

    Source link

  • The data dark ages

    The data dark ages

    Is there something going wrong with large surveys?

    We asked a bunch of people but they didn’t answer. That’s been the story of the Labour Force Survey (LFS) and the Annual Population Survey (APS) – two venerable fixtures in the Office for National Statistics (ONS) arsenal of data collections.

    Both have just lost their accreditation as official statistics. A statement from the Office for Statistics Regulation highlights just how much of the data we use to understand the world around us is at risk as a result: statistics about employment are affected by the LFS concerns, whereas APS covers everything from regional labour markets, to household income, to basic stuff about the population of the UK by nationality. These are huge, fundamental sources of information on the way people work and live.

    The LFS response rate has historically been around 50 per cent, but it had fallen to 40 per cent by 2020 and is now below 20 per cent. The APS is an additional sample using the LFS approach – current advice suggests that response rates have deteriorated to the extent that it is no longer safe to use APS data at local authority level (the resolution it was designed to be used at).

    What’s going on?

    With so much of our understanding of social policy issues coming through survey data, problems like these feel almost existential in scope. Online survey tools have made it easier to design and conduct surveys – and often design in the kind of good survey development practices that used to be the domain of specialists. Theoretically, it should be easier to run good quality surveys than ever before – certainly we see more of them (we even run them ourselves).

    Is it simply a matter of survey fatigue? Or are people less likely to (less willing to?) give information to researchers for reasons of trust?

    In our world of higher education, we have recently seen the Graduate Outcomes response rate drop below 50 per cent for the first time, casting doubt on its suitability as a regulatory measure. The survey still has accredited official statistics status, and there has been important work done on understanding the impact of non-response bias – but it is a concerning trend. The National Student Survey (NSS) is an outlier here – it had a 72 per cent response rate last time round (so you can be fairly confident in validity right down to course level), but it enjoys an unusually good level of survey population awareness, even after the removal of a requirement for providers to promote the survey to students. And of course, many of the more egregious issues with HESA Student have centred on student characteristics – the kind of thing gathered during enrolment or entry surveys.

    A survey of the literature

    There is a literature on survey response rates in published research. A meta-analysis by Wu et al (Computers in Human Behavior, 2022) found that the average online survey response rate was 44.1 per cent – and found benefits from using (as NSS does) a clearly defined and refined population, pre-contacting participants, and using reminders. A smaller study by Daikeler et al (Journal of Survey Statistics and Methodology, 2020) found that, in general, online surveys yield lower response rates (on average, 12 percentage points lower) than other approaches.

    Interestingly, Holtom et al (Human Relations, 2022) show an increase in response rates over time in a sample of 1,014 studies, and do not find a statistically significant difference linked to survey mode.

    ONS itself works with the ESRC-funded Survey Futures project, which:

    aims to deliver a step change in survey research to ensure that it will remain possible in the UK to carry out high quality social surveys of the kinds required by the public and academic sectors to monitor and understand society, and to provide an evidence base for policy

    It feels like timely stuff. Nine strands of work in the first phase included work on mode effects, and on addressing non-response.

    Fixing surveys

    ONS has been taking steps to repair the LFS – implementing some of the recontacting and reminder approaches that have been shown to work in the academic literature. There’s a renewed focus on households that include young people, and a return to the larger sample sizes we saw during the pandemic (when the whole survey had to be conducted remotely). Reweighting has led to a bunch of tweaks to the way samples are chosen and non-responses are accounted for.
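    For readers unfamiliar with the term, reweighting compensates for uneven response by giving respondents from under-responding groups larger weights, so that the weighted sample matches known population totals. A deliberately toy illustration in Python – not ONS’s actual methodology, and with invented figures:

      # Toy post-stratification weighting: each respondent's weight is the
      # number of people in the population they stand in for. All figures
      # are invented for illustration only.
      population = {"16-34": 14_000_000, "35-64": 25_000_000, "65+": 12_000_000}
      respondents = {"16-34": 900, "35-64": 2_600, "65+": 1_500}  # achieved sample

      weights = {g: population[g] / respondents[g] for g in population}

      for group, w in weights.items():
          print(f"{group}: each respondent represents {w:,.0f} people")
      # Young respondents, who answer least often, get the largest weights.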

    Longer term, the Transformed Labour Force Survey (TLFS) is already being trialled, though the initial March 2024 plan for full introduction has been revised to allow for further testing – important given a bias towards older age groups among responses, and an increased level of partial responses. Yes, there’s a lessons learned review. The old LFS and the new, online-first TLFS will be running together at least until early 2025 – with a knock-on impact on APS.

    But it is worth bearing in mind that, even given the changes made to drive up responses, trial TLFS response rates have been hovering just below 40 per cent. This is a return to 2020 levels, addressing some of the recent damage, but a long way from the historic norm.

    Survey fatigue

    More usually the term “survey fatigue” is used to describe the impact of additional questions on completion rate – respondents tire during long surveys (as Jeong et al observe in the Journal of Development Economics) and deliberately choose not to answer questions to hasten the end of the survey.

    But it is possible to consider the idea of a civilisational survey fatigue. Arguably, large parts of the online economy are propped up on the collection and reuse of personal data, which can then be used to target advertisements and reminders. Increasingly, you now have to pay to opt out of targeted ads on websites – assuming you can view the website at all without paying. After a period of abeyance, concerns around data privacy are beginning to reemerge. Forms of social media that rely on a constant drive to share personal information are unexpectedly beginning to struggle – for younger generations participatory social media is more likely to be a group chat or discord server, while formerly participatory services like YouTube and TikTok have become platforms for media consumption.

    In the world of public opinion research, the struggle with response rates has partially been met via a switch from randomised phone or in-person sampling to the use of pre-vetted online panels. This (as with the rise of focus groups) has generated a new cadre of “professional respondents” – with huge implications for the validity of polling, even when weighting is applied.

    Governments and industry are moving towards administrative data – the most recognisable example in higher education being the LEO dataset of graduate salaries. But this brings problems of its own – LEO lets us know how much income graduates pay tax on from their main job, but deals poorly with the portfolio careers that are the expectation of many graduates. LEO never cut it as a policymaking tool precisely because of how broad-brush it is.

    In a world where everything is data driven, what happens when the quality of data drops? If we were ever making good, data-driven decisions, a problem with the raw material suggests a problem with the end product. There are methodological and statistical workarounds, but the trend appears to be shifting away from people being happy to give out personal information without compensation. User interaction data – the traces we create as we interact with everything from ecommerce to online learning – are for now unaffected, but are necessarily limited in scope and explanatory value.

    We’ve lived through a generation where data seemed unlimited. What tools do we need to survive a data dark age?

    Source link

  • Collaboration is key when it comes to addressing harassment and sexual misconduct

    Collaboration is key when it comes to addressing harassment and sexual misconduct

    In all of the noise about the OfS’s new regulation on harassment and sexual misconduct there’s one area where the silence is notable and disappointing – sector collaboration.

    Back in 2022, the independent evaluation of the OfS statement of expectations on harassment and sexual misconduct made a clear recommendation that OfS and DfE “foster more effective partnership working both between HE providers and with those external to the sector.” Now, having published details of the new condition E6 and the accompanying guidance, this seems to have been largely forgotten.

    There’s a nod to the potential benefit of collaboration in OfS’s analysis of consultation responses, but it only goes as far as to say that providers “may wish to identify collective steps” – with little explanation of what this could look like and no intention or commitment to proactively support this.

    This feels like a significant oversight, and one that is disappointing to say the least. It’s become clear from our work with IHE members that collaboration needs to be front and centre if we have any hope as a sector of delivering in this area. Without it, some providers – especially smaller ones – will not be able to meet the new requirements, creating risk and failing to achieve the consistency of practice and experience that students expect. This feels even more true given the current context of widespread financial insecurity. Any new regulation ought to be presenting mechanisms and incentives to collaborate – and reduce costs in doing so.

    Working together for a stronger sector – or only sometimes?

    The silence around collaboration is also surprising, given that in other spheres it is seen to be – and in many cases is – the solution to institutions meeting regulatory requirements and student expectations. John Blake’s latest speech on a regional approach to access and participation is just one example of this. There is implicit recognition that in this era of “diminishing resources”, working together is the solution. There’s also the recognition that partnership working needs funding – more on that later.

    It’s also surprising given that OfS has made clear that both providers in any academic partnership are responsible for compliance with the new condition, including where there’s a franchise arrangement. This seems like an open door for collaborative approaches, given that over half the providers on the register do not have their own degree awarding powers. However, as usual, it is unclear what this means in practice. There is no reference in the regulation to how the OfS would view any collaborative efforts, or examples of what this might look like in practice.

    Academic partnerships make logical collaborators

    IHE’s recent project on academic partnerships demonstrates the potential of such arrangements for collaboration that benefits both providers and their students. Our research found a number of innovative models where awarding institutions facilitated collaboration with and between their academic partners in areas including shared learning opportunities and use of shared platforms.

    There’s a clear opportunity here when it comes to staff training. All institutions need to have staff who are “appropriately trained”. Training in areas such as receiving disclosures and conducting investigations benefits from group delivery – where staff can learn from each other. A small provider might only have one or two staff who require it, meaning they are unlikely to draw much benefit from group delivery on their own. It would also make such training prohibitively expensive: it’s likely to need to be delivered by an external organisation (to ensure the “credible and demonstrable expertise” required), and such solutions aren’t scaled to an institution with just a handful of relevant staff. Awarding institutions sharing such group training would solve this – and also benefit shared processes, in that staff across both institutions would have the same level of knowledge and competence.

    A further benefit of shared training would be that partners could share staff when investigations need greater independence than a small provider can offer. This could be staff from the awarding partner, or another academic partner. This would effectively bring together useful knowledge of institutional context, policies and processes with the necessary external objectivity to run a credible investigation.

    Another opportunity for collaboration is in shared online reporting tools. These can be an effective way of encouraging disclosure, but such systems are often not scaled for small institutions. As well as being more cost-effective, sharing these could give students who report greater confidence in the independence of the tool and the process that follows.

    Think local – for everyone’s sake!

    Regional or local collaboration is the other area with the potential to benefit students, providers, and other services supporting those who experience harassment or sexual misconduct.

    Local or regional collaboration on reporting and investigation can support disclosure by creating more independence in the system. The independent evaluation spoke specifically of this, recommending the facilitation of

    formal or informal shared services, such as regional support networks, and in particular regional investigation units or hubs.

    And it would enable more effective partnerships with external support services. Rather than every provider trying to establish a partnership with a local service (putting a greater burden on groups who are often charities or not-for-profits), group collaborations could streamline this. This needs to include all types of provider, including small providers and FE colleges delivering HE. This would be more efficient, reduce unhelpful competition for the limited resource of the service, and ensure that all students have access to these support services irrespective of their place of study.

    Where there aren’t local services, providers could pool resource and expertise to develop and deliver these. This would reduce competition for specialist staff in the same geographic location, and again ensure parity of support for students across providers.

    It’s important that such collaborations involve all parts of the sector, including small providers – with the burden of their participation reflective of their smaller size. This is vital to ensure that collaborative models are cost effective for everyone.

    Getting it right on student engagement

    Collaborative approaches are also going to be critical to make sure we get it right on student engagement. The OfS expectation is clear that providers work with students and their representatives to develop policies and procedures. But what happens when an institution doesn’t have an SU, or a formal representative structure, or the necessary experience in student engagement to do this? There’s a risk that it won’t be done properly, or won’t be done at all.

    We need to consider how we facilitate students to support each other to engage in co-production. This could include sharing staff or exploring the development of local student union services that bring in smaller providers or FE colleges without the means to partner with students in the way that is needed.

    Making it happen

    The sort of collaboration outlined above will need more than just the goodwill of institutions to make it happen. It needs regulatory backing, with more explicit recognition of the value of these approaches and guidance on what this might look like in practice. We also need to recognise that it’s costly.

    Catalyst funding, like that provided back in 2019, would represent far better value to the sector than asking individual providers to fund collaboration. The risk is that without it, the burden of developing a system that works for all students at all providers will be left to the smallest institutions who need these collaborative options the most. Funding would also boost evaluation and resource sharing across the sector. It could consider the benefits of collaborative approaches between awarding and teaching institutions as well as regional structures which ensure a greater parity of support across providers large and small.

    Somewhere on this path to regulation we lost the perspective that harassment and sexual misconduct is a societal issue. What we do now to educate, prevent harm to and support students will have a lasting impact on the future as students become employees, employers, parents and educators themselves. It is not a task to be shouldered alone.

    Source link

  • NM Vistas: What’s New in State Student Achievement Data?

    NM Vistas: What’s New in State Student Achievement Data?

    By Mo Charnot
    For NMEducation.com

    The New Mexico Public Education Department has updated its student achievement data reporting website — NM Vistas — with a renovated layout and school performance data from the 2023-2024 academic year, with expectations for additional information to be released in January 2025. 

    NM Vistas is crucial to informing New Mexicans about school performance and progress at the school, district and state levels through yearly report cards. The site displays student reading, math and science proficiency rates taken from state assessments, as required by the federal Every Student Succeeds Act. Districts and schools receive scores between 0 and 100 based on performance, and schools also receive designations indicating the level of support the school requires to improve.

    Other information on the site includes graduation rates, attendance and student achievement growth. Data also shows rates among specific student demographics, including race, gender, disability, economic indicators and more. 

    PED Deputy Secretary of Teaching, Learning and Innovation Amanda DeBell told NM Education in an interview that this year’s recreation of the NM Vistas site came from a desire to go beyond the state’s requirements for school performance data.

    “We knew that New Mexico VISTAs had a ton of potential to be a tool that our communities could use,” DeBell said. 

    One new data point added to NM Vistas this year is early literacy rates, which measure the percentage of students in grades K-2 who are reading proficiently at their grade level. Currently, federal law only requires proficiency rates for grades 3-8 to be published, and New Mexico also publishes 11th grade SAT scores. In the 2023-2024 school year, 34.6% of students in grades K-2 were proficient in reading, the data says.

    DeBell said several advisory groups encouraged the PED to report early literacy data through NM Vistas.

    “We were missing some key data-telling opportunities by not publishing the early literacy [rates] on our website, so we made a real effort to get those early literacy teachers the kudos that they deserve by demonstrating the scores,” DeBell said.

    The PED also added data on individual schools through badges indicating specific programs and resources the school offers. For example, Ace Leadership High School in Albuquerque has two badges: one for being a community school offering wraparound services to students and families, and another for qualifying for the career and technical education-focused Innovation Zone program.

    “What we are really trying to do is provide a sort of one-stop shopping for families and community members to highlight all of the work that schools are doing,” DeBell said.

    The updated NM Vistas website has removed a few things as well, most notably the entire 2021-2022 NM Vistas data set. DeBell said this was because the PED changed the way it measured student growth data, which resulted in the 2021-2022 school year’s data being incomparable to the most recent two years. 

    “You could not say that the schools in 2021-2022 were doing the same as 2022-2023 or 2023-2024, because the mechanism for calculating their scores was different,” DeBell said.

    However, this does leave NM Vistas with less data overall, only allowing viewers to compare scores from the latest data set to last year’s. 

    In January 2025, several new indicators are expected to be uploaded to the site, including:

    • Student performance levels: Reports the percentage of students who are novices, nearing proficiency, proficient and advanced in reading, math and science at each school, rather than only separating between proficient and not proficient.
    • Results for The Nation’s Report Card (also known as NAEP): Compares student proficiencies between US states.
    • Educator qualifications: DeBell said this would include information on individual schools’ numbers of newer teachers, substitute teachers covering vacancies and more.
    • College enrollment rates: Initially statewide numbers only, indicating the percentage of New Mexico students attending college after graduating, but DeBell said she hopes the PED can later narrow these down by each K-12 school.
    • Per-pupil spending: How much money each school, district and the state spends per-student on average. 
    • School climate: Links the viewer to results of school climate surveys asking students, parents and teachers how they feel about their school experience.
    • Alternate assessment participation: Percentage of students who take a different assessment in place of the NM-MSSA or SAT.

    “We want VISTAs to be super, super responsive, and we want families to be able to use this and get good information,” DeBell said. “We will continue to evolve this until it’s at its 100th iteration, if it takes that much.”

    This year, the PED released statewide assessment results for the 2023-2024 school year to NM Vistas on Nov. 15. Results show 39% of New Mexico students are proficient in reading, 23% are proficient in math and 38% are proficient in science. Compared to last year’s scores, reading proficiency increased by one percentage point, math proficiency decreased by one point and science proficiency increased by four points.

    Source link

  • Lessons Learned from the AI Learning Designer Project

    Lessons Learned from the AI Learning Designer Project

    We recently wrapped up our AI Learning Design Assistant (ALDA) project. It was a marathon. Multiple universities and sponsors participated in a seven-month intensive workshop series to learn how AI can assist in learning design. The ALDA software, which we tested together as my team and I built it, was an experimental apparatus designed to help us learn various lessons about AI in education.

    And learn we did. As I speak with project participants about how they want to see the work continue under ALDA’s new owner (and my new employer), 1EdTech, I’ll use this post to reflect on some lessons learned so far. I’ll finish by reflecting on possible futures for ALDA.

    (If you want a deeper dive from a month before the last session, listen to Jeff Young’s podcast interview with me on EdSurge. I love talking with Jeff. Shame on me for not letting you know about this conversation sooner.)

    AI is a solution that needs our problems

    The most fundamental question I wanted to explore with the ALDA workshop participants was, “What would you use AI for?” The question was somewhat complicated by AI’s state when I started development work about nine months ago. Back then, ChatGPT and its competitors struggled to follow the complex directions required for serious learning design work. While I knew this shortcoming would resolve itself through AI progress—likely by the time the workshop series was completed—I had to invest some of the ALDA software development effort into scaffolding the AI to boost its instruction-following capabilities at the time. I needed something vaguely like today’s AI capabilities back then to explore the questions we were trying to answer, such as what we could be using AI for a year from then.

    Once ALDA could provide that performance boost, we came to the hard part. The human part. When we got down to the nitty-gritty of the question—What would you use this for?—many participants had to wrestle with it for a while. Even the learning designers working at big, centralized, organized shops struggled to break down their processes into smaller steps with documents the AI could help them produce. Their human-centric processes relied heavily on the humans to interpret the organizational rules as they worked organically through large chunks of design work. Faculty designing their own courses had a similar struggle. How is their work segmented? What are the pieces? Which pieces would they hand to an assistant, if they had an assistant?

    The answers weren’t obvious. Participants had to discover them by experimenting throughout the workshop series. ALDA was designed to make that discovery process easier.

    A prompt engineering technique for educators: Chain of Inquiry

    Along with the starting question, ALDA had a starting hypothesis: AI can function as a junior learning designer.

    How does a junior learning designer function? It turns out that their primary tool is a basic approach that makes sense in an educator’s context and translates nicely into prompt engineering for AI.

    Learning designers ask their teaching experts questions. They start with general ones. Who are your students? What is your course about? What are the learning goals? What’s your teaching style?

    These questions get progressively more specific. What are the learning objectives for this lesson? How do you know when students have achieved those objectives? What are some common misconceptions they have?

    Eventually, the learning designer has built a clear enough mental model that they can draft a useful design document of some form or other.

    Notice the similarities and differences between this approach and scaffolding a student’s learning. Like scaffolding, Chain of Inquiry moves from the foundational to the complex. It’s not about helping the person being scaffolded with their learning, but it is intended to help them with their thinking. Specifically, the interview progression helps the educator being interviewed think more clearly about hard design problems by bringing relevant context into focus. This process of prompting the interviewee to recall salient facts relevant to thinking through challenging, detailed problems is very much like the AI prompt engineering strategy called Chain of Thought.

    In the interview between the learning designer and the subject-matter expert, the chain of thought they spin together is helpful to both parties for different reasons. It helps the learning designer learn while helping the subject-matter expert recall relevant details that help with thinking. The same is true in ALDA. The AI is learning from the interview, while the same process helps both parties focus on helpful context. I call this AI interview prompt style Chain of Inquiry. I hadn’t seen it used when I first thought of ALDA and haven’t seen it used much since then, either.

    In any case, it worked. Participants seemed to grasp it immediately. Meanwhile, a well-crafted Chain of Inquiry prompt in ALDA produced much better documents after it elicited good information through interviews with its human partners.
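    To make the pattern concrete, here is a minimal sketch of what a Chain of Inquiry loop might look like, using the OpenAI Python client. The step wording, model choice, and function names are my illustrative assumptions, not ALDA’s actual template text or code:

      # A toy Chain of Inquiry loop: the AI interviews the educator one
      # question at a time, general to specific, then drafts a document.
      # All prompt text here is illustrative, not ALDA's real template.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      SYSTEM_PROMPT = """You are a junior learning designer interviewing an educator.
      Ask ONE question at a time, moving from general to specific:
      1. Who are your students, and what is the course about?
      2. What are the learning objectives for this lesson?
      3. How will you know students have achieved those objectives?
      4. What are some common misconceptions students have?
      After step 4, draft a lesson design document from the answers."""

      messages = [{"role": "system", "content": SYSTEM_PROMPT}]

      def turn(user_reply: str) -> str:
          """Send the educator's reply; return the AI's next question or draft."""
          messages.append({"role": "user", "content": user_reply})
          response = client.chat.completions.create(model="gpt-4o", messages=messages)
          answer = response.choices[0].message.content
          messages.append({"role": "assistant", "content": answer})
          return answer

      print(turn("Hi! I'd like help designing a lesson."))  # AI asks question 1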

    Improving mental models helps

    AI is often presented, sold, and designed to be used as a magic talking machine. It’s hard to imagine what you would and wouldn’t use a tool for if you don’t know what it does. We went at this problem through a combination of teaching, user interface design, and guided experimentation.

    On the teaching side, I emphasized that a generative AI model is a sophisticated pattern-matching and completion machine. If you say “Knock knock” to it, it will answer “Who’s there?” because it knows what usually comes after “Knock knock.” I spent some time building up this basic idea, showing the AI matching and completing more and more sophisticated patterns. Some participants initially reacted to this lesson as “not useful” or “irrelevant.” But it paid off over time, as participants found that this understanding helped them think more clearly about what to expect from the AI, with some additional help from ALDA’s design.

    ALDA’s basic structure is simple:

    1. Prompt Templates are re-usable documents that define the Chain of Inquiry interview process (although they are generic enough to support traditional Chain of Thought as well).
    2. Chats are where those interviews take place. This part of ALDA is similar to a typical ChatGPT-like experience, except that the AI asks questions first and provides answers later based on the instructions it receives from the Prompt Template.
    3. Lesson Drafts are where users can save the last step of a chat, which hopefully will be the draft of some learning design artifact they want to use. These drafts can be downloaded as Word or PDF documents and worked on further by the human.

    A lot of the magic of ALDA is in the prompt template page design. It breaks down the prompts into three user-editable parts:

    1. General Instructions provide the identity of the chatbot that guides its behavior, e.g., “I am ALDA, your AI Learning Design Assistant. My role is to work with you as a thoughtful, curious junior instructional designer with extensive training in effective learning practices. Together, we will create a comprehensive first draft of curricular materials for an online lesson. I’ll assist you in refining ideas and adapting to your unique context and style.

      “Important: I will maintain an internal draft throughout our collaboration. I will not display the complete draft at the end of each step unless you request it. However, I will remind you periodically that you can ask to see the full draft if you wish.

      “Important Instruction: If at any point additional steps or detailed outlines are needed, I will suggest them and seek your input before proceeding. I will not deviate from the outlined steps without your approval.”

    2. Output Template provides an outline of the document that the AI is instructed to produce at the end of the interview.
    3. Steps provide the step-by-step process for the Chain of Inquiry.

    The UI reinforces the idea of pattern matching and completion. The Output Template gives the AI the structure of the document it is trying to complete by the end of the chat. The General Instructions and Steps work together to define the interview pattern the system should imitate as it tries to complete the document.
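    As a rough illustration of how those three parts might combine into a single prompt, here is a sketch in Python; the dataclass and field names are my own rendering of the structure described above, not ALDA’s actual code:

      # Sketch: assembling General Instructions, Output Template, and Steps
      # into one system prompt. Names and layout are illustrative assumptions.
      from dataclasses import dataclass

      @dataclass
      class PromptTemplate:
          general_instructions: str  # the chatbot's identity and ground rules
          output_template: str       # outline of the document to be completed
          steps: list                # the Chain of Inquiry interview steps

          def to_system_prompt(self) -> str:
              numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(self.steps, 1))
              return (
                  f"{self.general_instructions}\n\n"
                  f"Document to complete by the end of the chat:\n"
                  f"{self.output_template}\n\n"
                  f"Interview steps, to be followed one at a time:\n{numbered}"
              )

      template = PromptTemplate(
          general_instructions="I am ALDA, your AI Learning Design Assistant...",
          output_template="Lesson title / Objectives / Activities / Assessment",
          steps=["Ask about the students and course context.",
                 "Ask for the lesson's learning objectives.",
                 "Ask how achievement will be assessed."],
      )
      print(template.to_system_prompt())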

    Armed with the lesson and scaffolded by the template, participants got better over time at understanding how to think about asking the AI to do what they wanted it to do.

    Using AI to improve AI

    One of the biggest breakthroughs came with the release of a feature near the very end of the workshop series. It’s the “Improve” button at the bottom of the Template page.

    When the user clicks on that button, it sends whatever is in the template to ChatGPT. It also sends any notes the user enters, along with some behind-the-scenes information about how ALDA templates are structured.

    Template creators can start with a simple sentence or two in the General Instructions. Think of it as a starting prompt, e.g., “A learning design interview template for designing and drafting a project-based learning exercise.” The user can then tell “Improve” to create a full template based on that prompt. Because ALDA tells ChatGPT what a complete template looks like, the AI returns a full draft of all the fields ALDA needs to create a template. The user can then test that template and go back to the Improve window to ask the AI to improve the template’s behavior or extend its functionality.
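    A minimal sketch of that round trip, assuming the OpenAI Python client and a JSON representation of templates (both of which are my illustrative assumptions, not ALDA’s internals):

      # Sketch of the "Improve" loop: send the current template, the user's
      # notes, and a description of the template structure; get back a full
      # revised template. Prompt wording and schema are assumptions.
      import json
      from openai import OpenAI

      client = OpenAI()

      SCHEMA_HINT = (
          "A template is a JSON object with string fields "
          "'general_instructions' and 'output_template', and a 'steps' "
          "field that is a list of strings."
      )

      def improve(template: dict, notes: str) -> dict:
          """Return a complete, improved template drafted by the model."""
          response = client.chat.completions.create(
              model="gpt-4o",
              response_format={"type": "json_object"},
              messages=[
                  {"role": "system", "content": SCHEMA_HINT},
                  {"role": "user", "content": (
                      f"Current template:\n{json.dumps(template)}\n\n"
                      f"Improvement notes:\n{notes}\n\n"
                      "Return the full improved template as JSON."
                  )},
              ],
          )
          return json.loads(response.choices[0].message.content)

      seed = {"general_instructions": "A learning design interview template "
              "for a project-based learning exercise.",
              "output_template": "", "steps": []}
      improved = improve(seed, "Flesh out every field; keep the interview to five steps.")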

    Building this cycle into the process created a massive jump in usage and creativity among the participants who used it. I started seeing more and more varied templates pop up quickly. User satisfaction also improved significantly.

    So…what is it good for?

    The usage patterns turned out to be very interesting. Keep in mind that this is a highly unscientific review; while I would have liked to conduct a study or even a well-designed survey, the realities of building this on the fly as a solo operator managing outsourced developers limited me to anecdata for this round.

    The observations from the learning designers from large, well-orchestrated teams seem to line up with my theory that the big task will be to break down our design processes into chunks that are friendly to AI support. I don’t see a short-term scenario in which we can outsource all learning design—or replace it—with AI. (By the way, “air gapping” the AI, by which I mean conducting an experiment in which nothing the AI produced would reach students without human review, substantially reduced anxieties about AI and improved educators’ willingness to experiment and explore the boundaries.)

    For the individual instructors, particularly in institutions with few or no learning designers, I was pleasantly surprised to discover how useful ALDA proved to be in the middle of the term and afterward. We tend to think about learning design as a pre-flight activity. The reality is that educators are constantly adjusting their courses on the fly and spending time at the end to tweak aspects that didn’t work the way they liked. I also noticed that educators seemed interested in using AI to make it safer for them to try newer, more challenging pedagogical experiments like project-based learning or AI-enabled teaching exercises if they had ALDA as a thought partner that could both accelerate the planning and bring in some additional expertise. I don’t know how much of this can be attributed to the pure speed of the AI-enabled template improvement loop, and how much to the holistic experience helping them feel they understood, and had control over, ALDA in a way that other tools may not offer them.

    Possible futures for ALDA under 1EdTech

    As for what comes next, nothing has been decided yet. I haven’t been blogging much lately because I’ve been intensely focused on helping the 1EdTech team think more holistically about the many things the organization does and many more that we could do. ALDA is a piece of that puzzle. We’re still putting the pieces in place to determine where ALDA fits in.

    I’ll make a general remark about 1EdTech before exploring specific possible futures for ALDA. Historically, 1EdTech has solved problems that many of you don’t (and shouldn’t) know you could have. When your students magically appear in your LMS and you don’t have to think about how your roster got there, that was because of us. When you switch LMSs, and your students still magically appear, that’s 1EdTech. When you add one of the million billion learning applications to your LMS, that was us too. Most of those applications probably wouldn’t exist if we hadn’t made it easy for them to integrate with any LMS. In fact, the EdTech ecosystem as we know it wouldn’t exist. However much you may justifiably complain about the challenges of EdTech apps that don’t work well with each other, without 1EdTech, they mostly wouldn’t work with each other at all. A lot of EdTech apps simply wouldn’t exist for that reason.

    Still. That’s not nearly enough. Getting tech out of your way is good. But it’s not good enough. We need to identify real, direct educational problems and help to make them easier and more affordable to solve. We must make it possible for educators to keep up with changing technology in a changing world. ALDA could play several roles in that work.

    First, it could continue to function as a literacy teaching tool for educators. The ALDA workshops covered important aspects of understanding AI that I’ve not seen other efforts cover. We can’t know how we want AI to work in education without educators who understand and are experimenting with AI. I will be exploring with ALDA participants, 1EdTech members, and others whether there is the interest and funding we need to continue this aspect of the work. We could wrap some more structured analysis around future workshops to find out what the educators are learning and what we can learn from them.

    Speaking of which, ALDA can continue to function as an experimental apparatus. Learning design is a process that is largely dark to us. It happens in interviews and word processor documents on individual hard drives. If we don’t know where people need the help—and if they don’t know either—then we’re stuck. Product developers and innovators can’t design AI-enabled products to solve problems they don’t understand.

    Finally, we can learn the aspects of learning design—and teaching—that need to be taught to AI because the knowledge it needs isn’t written down in a form that’s accessible to it. As educators, we learn a lot of structure in the course of teaching that often isn’t written down and certainly isn’t formalized in most EdTech product data structures. How and when to probe for a misconception. What to do if we find one. How to give a hint or feedback if we want to get the student on track without giving away the answer. Whether you want your AI to be helping the educator or working directly with the student—which is not really an either/or question—we need AI to better understand how we teach and learn if we want it to get better at helping us with those tasks. Some of the learning design structures we need are related to deep aspects of how human brains work. Other structures evolve much more quickly, such as moving to skills-based learning. Many of these structures should be wired deep into our EdTech so you don’t have to think or worry about them. EdTech products should support them automatically. Something like ALDA could be an ongoing laboratory in which we test how educators design learning interventions, how those processes co-evolve with AI over time, and where feeding the AI evidence-based learning design structure could make it more helpful.

    The first incarnation of ALDA was meant to be an experiment in the entrepreneurial sense. I wanted to find out what people would find useful. It’s ready to become something else. And it’s now at a home where it can evolve. The most important question about ALDA hasn’t changed all that much:

    What would you find ALDA at 1EdTech useful for?

    Source link

  • Recommendations for States to Address Postsecondary Affordability

    Recommendations for States to Address Postsecondary Affordability

    Authors: Lauren Asher, Nate Johnson, Marissa Molina, and Kristin D. Hultquist

    Source: HCM Strategists

    An October 2024 report, Beyond Sticker Prices: How States Can Make Postsecondary Education More Affordable, reviews data to evaluate the affordability of postsecondary education across nine states: Alabama, California, Indiana, Louisiana, Ohio, Oklahoma, Texas, Virginia, and Washington.

    The authors emphasize the importance of considering net price, or the full cost of attendance less total aid. Depending on the state, low-income students pay 16-27 percent of their total family income to attend community college (a $6,000 net price against a $30,000 family income, for example, is 20 percent).

    At public four-year colleges with high net prices, students with family incomes of $30,000-48,000 pay more than half of their income (51-53 percent) for school in two of the nine states. Four-year colleges with low net prices show cost variability based on whether a student is in the lowest income group, earning $0-30,000, or has $30,000-48,000 in income. Students in the former group pay 21-27 percent of their family income toward education, while students in the latter group pay 40-41 percent of their income.

    The brief recommends that policymakers take the following issues into account:

    • The way states fund public institutions is critical for low-income students. Consider increasing funding for community colleges as well as evaluating how student income factors into allocation of state funds.
    • Tuition policy is integral to making decisions about postsecondary education. Public perception of college affordability is influenced by tuition costs. States have the power to set limits on how much institutions can raise or change costs, but they must also be careful not to prevent institutions from charging what they require to adequately support students’ needs.
    • Transparency and consistency among financial aid programs increase their reach. States should consider making financial aid programs more readily understandable. State financial aid policies should also increase flexibility to adjust for transferring, length of time to graduate, and financial aid from other sources.
    • How states support basic needs affects students’ ability to afford attending college. Policies at the state level can offer students more options for paying for food, housing, caregiving, and more.

    To read the full report, click here.

    Kara Seidel


    If you have any questions or comments about this blog post, please contact us.

    Source link

  • From Pause to Progress: Predictors of Success and Hurdles for Returning Students

    From Pause to Progress: Predictors of Success and Hurdles for Returning Students

    Title: Some College, No Credential Learners: Measuring Enrollment Readiness

    Source: StraighterLine, UPCEA

    UPCEA and StraighterLine carried out a survey to examine the driving factors, obstacles, preparedness, and viewpoints of individuals who started but did not finish a degree, certificate, technical, or vocational program. This population, according to the National Student Clearinghouse Research Center, has grown to 36.8 million, a 2.9 percent increase from the year prior. A total of 1,018 participants completed the survey.

    Key findings related to respondents’ readiness to re-enroll include:

    • Predictive factors: Mental resilience, routine readiness, a positive appraisal of institutional communication, and belief in the value of a degree strongly predict re-enrollment intentions.
    • Academic preparedness: A majority of respondents (88 percent) feel proficient in core academic skills (e.g., reading, writing, math, critical thinking), and 86 percent feel competent using technology for learning tasks.
    • Financial readiness: More than half (58 percent) believe they cannot afford tuition and related expenses, while only 22 percent feel financially prepared.
    • Career and personal motivations: The top motivators for re-enrolling include improving salary (53 percent), personal goals (44 percent), and pursuing a career change (38 percent).
    • Beliefs in higher education: Trust in higher education declines after stopping out. While 84 percent of those who had been enrolled in a degree program initially believed a degree was essential for their career goals, only 34 percent still hold that belief. Additionally, just 42 percent agree that colleges are trustworthy.
    • Grit readiness: Four in five respondents feel adaptable and persistent through challenges, and 71 percent say they can handle stress effectively.
    • Flexibility and adaptability: Three-fourths of respondents are open to changing routines and adjusting to new environments.
    • Learning environment: Half of respondents report having access to a study-friendly environment, but 11 percent report not having such access.
    • Time management: Nearly two-thirds are prepared to dedicate the necessary time and effort to complete their education.
    • Support systems: About three in every five respondents receive family support for continuing education, but only 31 percent feel supported by their employers.

    Key findings related to enrollment funnel experiences include:

    • Preferred communication channels: When inquiring about a program, 86 percent of respondents like engaging via email, 42 percent through phone calls, and 39 percent via text messages, while only 6 percent want to use a chatbot.
    • Timeliness and quality of communication: A majority (83 percent) agree or strongly agree that the communication they received when reaching out to a college or university about a program was timely, and 80 percent found it informative.
    • Enrollment experience: Among those who re-enrolled, 88 percent found that the enrollment process was efficient, 84 percent felt adequately supported by their institution, and 78 percent found the process easy.
    • Challenges from inquiry to enrollment: Nearly one-third (31 percent) encountered difficulties with financial aid support, 29 percent experienced delays in getting their questions answered, and 21 percent reported poor communication from the institution.

    Click to read the full white paper.

    —Nguyen DH Nguyen


    If you have any questions or comments about this blog post, please contact us.

    Source link

  • What is academic freedom? With Keith Whittington

    What is academic freedom? With Keith Whittington

    “Who controls what is taught in American universities — professors or politicians?”

    Yale Law professor Keith Whittington answers this timely question and more in his new book, “You Can’t Teach That! The Battle over University Classrooms.” He joins the podcast to discuss the history of academic freedom, the difference between intramural and extramural speech, and why there is a “weaponization” of intellectual diversity.

    Keith E. Whittington is the David Boies Professor of Law at Yale Law School. Whittington’s teaching and scholarship span American constitutional theory, American political and constitutional history, judicial politics, the presidency, and free speech and the law.


    Read the transcript.

    Timestamps:

    00:00 Intro

    02:00 The genesis of Yale’s Center for Academic Freedom and Free Speech

    04:42 The inspiration behind “You Can’t Teach That!”

    06:18 The First Amendment and academic freedom

    09:29 Extramural speech and the public sphere

    17:56 Intramural speech and its complexities

    23:13 Florida’s Stop WOKE Act

    26:34 Distinctive features of K-12 education

    31:13 University of Pennsylvania professor Amy Wax

    39:02 University of Kansas professor Phillip Lowcock

    43:42 Muhlenberg College professor Maura Finkelstein

    47:01 University of Wisconsin-La Crosse professor Joe Gow

    54:47 Northwestern professor Arthur Butz

    57:52 Inconsistent applications of university policies

    01:02:23 Weaponization of “intellectual diversity”

    01:05:53 Outro

    Show notes:

    Source link