Tag: Wonkhe

  • Making SEISA official | Wonkhe


    Developing a new official statistic is a process that can span several years.

    Work on SEISA began in 2020 and this blog outlines the journey to official statistics designation and some key findings that have emerged along the way. Let’s firstly recap why HESA needed a new deprivation index.

    The rationale behind pursuing this project stemmed from an Office for Statistics Regulation (OSR) report which noted that post-16 education statistics lacked a UK-wide deprivation metric. Under the Code of Practice for Statistics, HESA are required to innovate and fill identified statistical gaps that align with our area of specialism.

    Fast forward almost six years and the UK Statistics Authority have reiterated the importance of UK-wide comparable statistics in their response to the 2024 Lievesley Review.

    Breaking down barriers

    While higher education policy may be devolved, all nations have ambitions to ensure there is equal opportunity for all. Policymakers and the higher education sector agree that universities have a pivotal role in breaking down barriers to opportunity and that relevant data is needed to meet this mission. UK-wide comparable statistics on deprivation, based on SEISA, can provide the empirical evidence required to understand where progress is being made, and can be used across the four nations to share best practice.

    In developing SEISA, we referred to OSR guidance to produce research that examines the full value of a new statistic before it is classed as an ‘official statistic in development’. We published a series of working papers in 2021 and 2022, with the latter including comparisons to the Indices of Deprivation (the main area-based measure utilised among policymakers at present). We also illustrated why area-based measures remain useful in activities designed to promote equal opportunity.

    Our research indicated that the final indices derived from the Indices of Deprivation in each nation were effective at capturing deprived localities in large urban areas, such as London and Glasgow, but that SEISA added value by picking up deprivation in towns and cities outside of these major conurbations. This included places located within former mining, manufacturing and industrial communities across the UK, like Doncaster or the Black Country in the West Midlands, as well as Rhondda and Caerphilly in Wales. The examples below come from our interactive maps for SEISA using Census 2011 data.

    An area of Doncaster that lies within decile 4 of the English Index of Multiple Deprivation (2019)

    An area of Caerphilly that lies within decile 5 of the Welsh Index of Multiple Deprivation (2019)

    We also observed that SEISA tended to capture a greater proportion of rural areas in the bottom quintile when compared with the equivalent quintile of the Index of Multiple Deprivation in each nation.

    Furthermore, in Scotland, the bottom quintile of the Scottish Index of Multiple Deprivation does not contain any locations in the Scottish islands, whereas the lowest quintile of SEISA covers all council areas in the country. These points are highlighted by the examples below from rural Shropshire and the Shetland Islands, which also show the benefit that SEISA offers by being based on smaller areas (in terms of population size) than those used to form the Indices of Deprivation. That is, drawing upon a smaller geographic domain enables pockets of deprivation to be identified that are otherwise surrounded by less deprived neighbourhoods.

    A rural area of Shropshire that is placed in decile 5 of the English Index of Multiple Deprivation (2019)

    An area of the Shetland Islands that is within decile 7 of the Scottish Index of Multiple Deprivation (2020)
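    To see why the size of the geographic unit matters, here is a toy illustration (the numbers are invented and are not drawn from SEISA or the Indices of Deprivation): averaging over a larger area can mask a deprived pocket that a smaller-area measure would surface.

```python
# Toy figures only: five small zones making up one larger area, with
# "per cent of households deprived" as an illustrative score.
small_zones = {"zone_a": 12, "zone_b": 15, "zone_c": 14, "zone_d": 48, "zone_e": 11}

large_area_average = sum(small_zones.values()) / len(small_zones)
print(f"Large-area average: {large_area_average:.0f}%")    # 20% - looks mid-ranking
print(f"Most deprived zone: {max(small_zones.values())}%")  # 48% - the hidden pocket
```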

    Becoming an official statistic

    Alongside illustrating value, our initial research had to consider data quality and whether our measure correlated with deprivation as expected. Previous literature has highlighted how the likelihood of experiencing deprivation increases for households that are:

    • On a low income
    • Living in social housing
    • A lone parent family
    • In poor health

    Examining how SEISA was associated with these variables gave us the assurance that it was ready to become an ‘official statistic in development’. As we noted when we announced our intention for the measure to be assigned this badge for up to two years, a key factor we needed to establish during this period was the consistency of the findings (and hence of the methodological approach) when Census 2021-22 data became available in Autumn 2024.

    Recreating SEISA using the latest Census records across all nations, we found there was a high level of stability in the results between the 2011 and 2021-22 Census collections. For instance, our summary page shows the steadiness in the associations between SEISA and income, housing, family composition and health, with an example of this provided below.

    The association between SEISA and family composition in Census 2011 and 2021-22

    Over the past twelve months, we’ve been gratified to see applications of SEISA in the higher education sector and beyond. We’ve had feedback on how practitioners are using SEISA to support their widening participation activities in higher education and interest from councils working on equality of opportunity in early years education. The measure is now available via the Local Insight database used by local government and charities to source data for their work.

    It’s evident therefore that SEISA has the potential to help break down barriers to opportunity across the UK and is already being deployed by data users to support their activities. The demonstrable value of SEISA and its consistency following the update to Census 2021-22 data mean that we can now remove the ‘in development’ badge and label SEISA as an official statistic.

    View the data for SEISA based on the Census 2021-22 collection, alongside a more detailed insight into why SEISA is now an official statistic, on the HESA website.

    Please feel free to submit any feedback you have on SEISA to [email protected].

    Read HESA’s latest research releases and, if you would like to be kept updated on future publications, you can sign up to our mailing list.


  • Keep talking about data | Wonkhe


    How’s your student data this morning?

    Depending on how close you sit to your institutional student data systems, your answer may range from a bemused shrug to an anguished yelp.

    For the most part, we remain blissfully unaware of how much work it currently takes to derive useful and actionable insights from the various data traces our students leave behind them. We’ve all seen the advertisements promising seamless systems integration and a tangible improvement in the student experience, but in most cases the reality is far different.

    James Gray’s aim is to start a meaningful conversation about how we get there and what systems need to be in place to make it happen – at a sector as well as a provider level. As he says:

    There is a genuine predictive value in using data to design future solutions to engage students and drive improved outcomes. We now have the technical capability to bring content, data, and context together in a way that simply has not been possible before now.

    All well and good, but just because we have the technology doesn’t mean we have the data in the right place or the right format – the problem is, as Helen O’Sullivan has already pointed out on Wonkhe, silos.

    Think again about your student data.

    Some of it is in your student information system (assessment performance, module choices), which may or may not link to the application tracking systems that got students on to courses in the first place. You’ll also have information about how students engage with your virtual learning environment, what books they are reading in the library, how they interact with support services, whether and how often they attend in person, and their (disclosed) underlying health conditions and specific support needs.

    The value of this stuff is clear – but without a whole-institution strategic approach to data it remains just a possibility. James notes that:

    We have learned that a focus on the student digital journey and institutional digital transformation means that we need to bring data silos together, both in terms of use and collection. There needs to be a coherent strategy to drive deployment and data use.
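    For the data-minded, here is a minimal sketch of what joining up two of those silos could look like in practice. The table and column names are entirely illustrative – real student record and VLE systems will have their own schemas – but the point stands: once records share a common student identifier, silos can be linked and gaps surfaced.

```python
import pandas as pd

# Illustrative extracts only - invented identifiers and figures.
sis = pd.DataFrame({
    "student_id": ["s001", "s002", "s003"],
    "module": ["ECON101", "ECON101", "HIST201"],
    "assessment_mark": [62, 48, 71],
})

vle = pd.DataFrame({
    "student_id": ["s001", "s002"],
    "logins_last_30_days": [25, 3],
})

# A left join keeps every student record row and shows where VLE data is
# missing - exactly the kind of gap a siloed estate makes hard to see.
joined = sis.merge(vle, on="student_id", how="left")

# Students with little or no recorded VLE engagement might warrant an early
# conversation with support services.
low_engagement = joined[joined["logins_last_30_days"].fillna(0) < 5]
print(low_engagement)
```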

    But how do we get there? From what James has seen overseas, in big online US providers like Georgia Tech and Arizona State, data is managed strategically at the highest levels of university leadership. It’s perhaps a truism to suggest that if you really care about something it needs ownership at a senior level, but having that level of buy-in unlocks the resource and momentum that a big project like this needs.

    We also talked about the finer-grained aspects of implementation – James felt that the way to bring students and staff on board is to clearly demonstrate the benefits, and to listen (and respond) to concerns. The latter is essential because “you will annoy folks”.

    Is it worth this annoyance to unlock gains in productivity and effectiveness? Ideally, we’d all be focused on getting the greatest benefit from our resources – but often processes and common practices are arranged in sub-optimal ways for historical reasons, and rewiring large parts of someone’s role is a big ask. The hope is that the new way will prove simpler and less arduous, so it absolutely makes sense to focus on potential benefits and their realisation – and bringing in staff voices at the design stage can make for gains in autonomy and job satisfaction.

    The other end of the problem concerns procurement. Many providers have updated their student records systems in recent years in response to the demands of the Data Futures programme. The trend has been away from bespoke and customised solutions and towards commercial off-the-shelf (COTS) procurement: the thinking here being that updates and modifications are easier to apply consistently with a standard install.

    As James outlines, providers are looking at a “buy, build, or partner” decision – and institutions with different goals (and at different stages of data maturity) may choose different options. There is, though, enormous value in senior leaders talking across institutions about decisions such as these. “We had to go through the same process,” James outlined. “In the end we decided to focus on our existing partnership with Microsoft to build a cutting edge data warehouse, and data ingestion, hierarchy and management process leveraging Azure and MS Fabric with direct connectivity to Gen AI capabilities to support our university customers with their data, and digital transformation journey.” There is certainly both knowledge and hard-won experience out there about the different trade-offs, but what university leader wants to tell a competitor about the time they spent thousands of pounds on a platform that didn’t communicate with the rest of their data ecosystem?

    As Claire Taylor recently noted on Wonkhe, there is a power in relationships and networks among senior leaders that exist to share learning for the benefit of many. It is becoming increasingly clear that higher education is a data-intensive sector – so every provider should feel empowered to make one of the most important decisions they will make in the light of a collective understanding of the landscape.

    This article is published in association with Kortext. Join us at an upcoming Kortext LIVE event in London, Manchester and Edinburgh in January and February 2025 to find out more about Wonkhe and Kortext’s work on leading digital capability for learning, teaching and student success.


  • Podcast: Visegrad special | Wonkhe


    This week on the podcast Jim, Mack and team are on a bus around the Visegrad countries where they’ve been exploring student experience, representation and rights, discounted dorms and a set of countries where students have been leading change.

    Plus Disabled Students UK has its access insights survey out, and we discuss changes to the Renters’ Rights Bill.

    With Katie Jackson, Faculty of Humanities Officer at the University of Manchester SU, Seán Keaney, Academic Officer at University of Limerick Student Life, Gary Hughes, CEO at Durham SU, Mack Marshall, Community and Policy Officer at Wonkhe and presented by Jim Dickinson, Associate Editor at Wonkhe.

    Read more

    On Day -1 of this year’s magical mystery tour around Europe and students, the team come across plenty of protests for democracy, on Day 0 of the tour we find students in the centre of both the past and the future for Hungary, on Day 1 the team put down some roots and build some belonging at camp, on the second evening the team try to work out if they have enough points for a dorm in Slovakia, and on Day 2 the team get community building and pot roasting.


  • Podcast special: Writing for Wonkhe


    In this special seasonal edition of the Wonkhe Show, we discuss how you can contribute to the higher education debate by writing for the site.

    Plus we discuss the importance of communicating academic and professional insights to wider audiences, and we take you inside our editorial process – which is all about clear arguments and diverse perspectives.

    With Adam Matthews, Senior Research Fellow at the School of Education at the University of Birmingham, Michael Salmon, News Editor at Wonkhe, David Kernohan, Deputy Editor at Wonkhe and presented by Debbie McVitty, Editor at Wonkhe.


    Writing for Wonkhe


  • The data dark ages | Wonkhe


    Is there something going wrong with large surveys?

    We asked a bunch of people but they didn’t answer. That’s been the story of the Labour Force Survey (LFS) and the Annual Population Survey (APS) – two venerable fixtures in the Office for National Statistics (ONS) arsenal of data collections.

    Both have just lost their accreditation as official statistics. A statement from the Office for Statistics Regulation highlights just how much of the data we use to understand the world around us is at risk as a result: statistics about employment are affected by the LFS concerns, whereas APS covers everything from regional labour markets, to household income, to basic stuff about the population of the UK by nationality. These are huge, fundamental, sources of information on the way people work and live.

    The LFS response rate has historically been around 50 per cent, but it had fallen to 40 per cent by 2020 and is now below 20 per cent. The APS is an additional sample using the LFS approach – current advice suggests that response rates have deteriorated to the extent that it is no longer safe to use APS data at local authority level (the resolution it was designed to be used at).

    What’s going on?

    With so much of our understanding of social policy issues coming through survey data, problems like these feel almost existential in scope. Online survey tools have made it easier to design and conduct surveys – and often design in the kind of good survey development practices that used to be the domain of specialists. Theoretically, it should be easier to run good quality surveys than ever before – certainly we see more of them (we even run them ourselves).

    Is it simply a matter of survey fatigue? Or are people less likely to (less willing to?) give information to researchers for reasons of trust?

    In our world of higher education, we have recently seen the Graduate Outcomes response rate drop below 50 per cent for the first time, casting doubt on its suitability as a regulatory measure. The survey still has accredited official statistics status, and there has been important work done on understanding the impact of non-response bias – but it is a concerning trend. The National Student Survey (NSS) is an outlier here – it had a 72 per cent response rate last time round (so you can be fairly confident in validity right down to course level), but it enjoys an unusually good level of survey population awareness, despite the removal of a requirement for providers to promote the survey to students. And of course, many of the more egregious issues with HESA Student have centred on student characteristics – the kind of thing gathered during enrolment or entry surveys.

    A survey of the literature

    There is a literature on survey response rates in published research. A meta-analysis by Wu et al (Computers in Human Behavior, 2022) found that the average online survey response rate was 44.1 per cent – and found benefits in using (as NSS does) a clearly defined and refined population, pre-contacting participants, and using reminders. A smaller study by Daikeler et al (Journal of Survey Statistics and Methodology, 2020) found that, in general, online surveys yield lower response rates (on average, 12 percentage points lower) than other approaches.

    Interestingly, Holtom et al (Human Relations, 2022) show an increase in response rates over time in a sample of 1,014 published studies, and do not find a statistically significant difference linked to survey mode.

    ONS itself works with the ESRC-funded Survey Futures project, which:

    aims to deliver a step change in survey research to ensure that it will remain possible in the UK to carry out high quality social surveys of the kinds required by the public and academic sectors to monitor and understand society, and to provide an evidence base for policy

    It feels like timely stuff. Nine strands of work in the first phase included work on mode effects, and on addressing non-response.

    Fixing surveys

    ONS have been taking steps to repair the LFS – implementing some of the recontact and reminder approaches that have been shown to work in the academic literature. There’s a renewed focus on households that include young people, and a return to the larger sample sizes we saw during the pandemic (when the whole survey had to be conducted remotely). Reweighting has brought a number of tweaks to the way samples are chosen and non-response is accounted for.
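    As a rough illustration of what reweighting for non-response involves – this is not ONS’s actual method, and the figures are invented – here is a minimal post-stratification sketch. Groups that under-respond are weighted up so that the weighted sample matches known population shares, which shifts the headline estimate.

```python
import pandas as pd

# Hypothetical figures purely for illustration - not ONS data.
# Known population shares by age band (e.g. from population estimates).
population_share = {"16-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Achieved sample: younger households respond less, so they are under-represented.
respondents = pd.DataFrame({
    "age_band": ["16-34"] * 15 + ["35-54"] * 40 + ["55+"] * 45,
    "employed": [1] * 12 + [0] * 3 + [1] * 30 + [0] * 10 + [1] * 20 + [0] * 25,
})

sample_share = respondents["age_band"].value_counts(normalize=True)

# Post-stratification weight = population share / sample share for each group,
# so under-responding groups count for more in weighted estimates.
respondents["weight"] = respondents["age_band"].map(
    lambda band: population_share[band] / sample_share[band]
)

unweighted = respondents["employed"].mean()
weighted = (respondents["employed"] * respondents["weight"]).sum() / respondents["weight"].sum()
print(f"Unweighted employment rate: {unweighted:.3f}")  # skewed by over-represented older groups
print(f"Weighted employment rate:   {weighted:.3f}")
```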

    Longer term, the Transformed Labour Force Survey (TLFS) is already being trialled, though the initial March 2024 plans for full introduction have been revised to allow for further testing – important given a bias towards older age group responses, and an increased level of partial responses. Yes, there’s a lessons learned review. The old LFS and the new, online-first, TLFS will be running together at least until early 2025 – with a knock-on impact on the APS.

    But it is worth bearing in mind that, even given the changes made to drive up responses, trial TLFS response rates have been hovering just below 40 per cent. This is a return to 2020 levels, addressing some of the recent damage, but a long way from the historic norm.

    Survey fatigue

    More usually the term “survey fatigue” is used to describe the impact of additional questions on completion rate – respondents tire during long surveys (as Jeong et al observe in the Journal of Development Economics) and deliberately choose not to answer questions to hasten the end of the survey.

    But it is possible to consider the idea of a civilisational survey fatigue. Arguably, large parts of the online economy are propped up on the collection and reuse of personal data, which can then be used to target advertisements and reminders. Increasingly, you now have to pay to opt out of targeted ads on websites – assuming you can view the website at all without paying. After a period of abeyance, concerns around data privacy are beginning to reemerge. Forms of social media that rely on a constant drive to share personal information are unexpectedly beginning to struggle – for younger generations participatory social media is more likely to be a group chat or discord server, while formerly participatory services like YouTube and TikTok have become platforms for media consumption.

    In the world of public opinion research, the struggle with response rates has partly been met via a switch from randomised phone or in-person sampling to the use of pre-vetted online panels. This (as with the rise of focus groups) has generated a new cadre of “professional respondents” – with huge implications for the validity of polling even when weighting is applied.

    Governments and industry are moving towards administrative data – the most recognisable example in higher education being the LEO dataset of graduate salaries. But this brings problems of its own – LEO lets us know how much income graduates pay tax on from their main job, but deals poorly with the portfolio careers that are the expectation of many graduates. LEO never cut it as a policymaking tool precisely because of how broad-brush it is.

    In a world where everything is data driven, what happens when the quality of data drops? If we were ever making good, data-driven decisions, a problem with the raw material suggests a problem with the end product. There are methodological and statistical workarounds, but the trend appears to be shifting away from people being happy to give out personal information without compensation. User interaction data – the traces we create as we interact with everything from ecommerce to online learning – are for now unaffected, but are necessarily limited in scope and explanatory value.

    We’ve lived through a generation where data seemed unlimited. What tools do we need to survive a data dark age?
