Tag: dashboard

  • Road testing the new TEF data dashboard

    This blog was kindly authored by Professor Janice Kay CBE, Director, Higher Futures.

    Overall verdict: Compared with the TEF 2023 Data Dashboard, the latest one handles more like the dash of a modern EV. Go steadily at first and have a play, safe in the knowledge that you aren’t going to unnecessarily damage the respectable vehicle you are looking at in the 360-degree camera. The new TEF Data Dashboard is a very powerful instrument and at first look it is clear and intuitive. But to go more deeply into the data, using the Filters for example, you will need to move from no experience needed to more expert skill.

    Data are fundamental to maintaining and improving the student experience. Data, in the form of statistically benchmarked indicators, inform understanding and, in the right hands, direct strategic delivery. Universities are keen on big institutional interventions, often running several at the same time, sometimes without the clarity gained from what the data are telling them. Often, these aren’t evaluated well, don’t work or run into the sand. Staff become cynical; students are either unaware or confused. Benchmarked data help prioritisation and selection for effective delivery. Reliable Data Dashboards are essential.

    And, therefore, for those who love data and understanding competitor positions, the new TEF Data Dashboard, launched this week, is essential to integrated quality and improvement.

    I tested it in TEF Panel member mode and looked at the data for a variety of Providers whose indicators and performance I had known in TEF 2023. This included universities (low to high tariff institutions), colleges (FECs, private providers) and specialists. I thought about it as I would if I were assessing the TEF performance of a higher education provider across Student Experience and Outcomes, and from the perspective of a provider wishing to understand their data over the Time Series.

    For both functions, the improved power and handling are very welcome. Start as you would have done in the 2023 dashboard by choosing either Experience or Outcomes and you are presented with a series of deceptively simple tabs. The Overall Experience tab presents performance across basic sections of the National Student Survey by Measure (e.g. Teaching on My Course, Assessment and Feedback) and Mode (e.g. Full Time, Part-Time, Degree Apprenticeship).

    Gone are the complicated illustrations of overall distribution and variability, showing central tendencies alongside spread of results. Instead, data are simply numerical and colour coded. They reference statistical confidence about whether a result is materially above, broadly in line with, or materially below benchmark, and it’s extremely easy to evaluate how an institution is doing overall. No judgement needed: the Dashboard does it for you. It will be extremely helpful for providers, reviewers and student representatives alike.
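
    For readers who think in code, that colour coding amounts to a simple three-way classification. Below is a minimal Python sketch of how such a flag could be derived from an indicator, its benchmark and a confidence interval – the function name, inputs and decision rule are illustrative assumptions, not the OfS methodology itself.

    ```python
    # Illustrative sketch only: the dashboard's actual statistical machinery
    # is the OfS's own; the names and logic here are hypothetical.

    def classify_against_benchmark(indicator: float, benchmark: float,
                                   ci_lower: float, ci_upper: float) -> str:
        """Flag an indicator against its benchmark using a confidence interval."""
        if ci_lower > benchmark:
            return "materially above benchmark"    # e.g. shaded green
        if ci_upper < benchmark:
            return "materially below benchmark"    # e.g. shaded red
        return "broadly in line with benchmark"    # e.g. black circle

    # Example: 82.0% positive against a benchmark of 79.5%, with a
    # 95% confidence interval of (80.1, 83.9) -> materially above.
    print(classify_against_benchmark(82.0, 79.5, 80.1, 83.9))
    ```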

    It gets better. To really understand and to be able to improve performance, one of the most crucial elements is to know how consistent the overall findings are, by measure (NSS), mode (FT, PT etc) and time. Unless an institution can drill down to its Split benchmarked data – how provider subjects and student groups are doing over time – it’s impossible to get a grip on what’s going well and what isn’t, or to work out how to improve performance.

    The Split Consistency tab provides you with that information at a glance. Also welcome is the Partnership Split, which gives clear information about the performance of partnership students, on franchised degrees for example.

    The Split overview display focuses on the performance of the Splits, making it easier to see whether and by how much data are inconsistent. Take Teaching on my Course for a particular provider as an example. Imagine it is overall Broadly in Line with benchmark (marked with a black circle). How consistent are the subgroups with this performance? The display gives you an immediate answer: splits consistent with the overall pattern (broadly in line, in this case) appear as blank cells; inconsistent splits appear in other cells, flagged as above or below benchmark. It is therefore easy to see whether the performance of full-time students across different subgroups is consistent or inconsistent for Teaching on my Course.
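
    To make that display rule concrete, here is a tiny hypothetical Python sketch of the logic just described – the split names and flags are invented for illustration: a split consistent with the overall pattern renders as a blank cell, while an inconsistent split shows its own above/below flag.

    ```python
    # Hypothetical illustration of the Split Consistency display logic;
    # the split names and flags below are invented, not real dashboard data.

    overall_flag = "broadly in line"

    split_flags = {
        "Full-time / Business": "broadly in line",
        "Full-time / Nursing": "materially below",
        "Part-time / Business": "broadly in line",
        "Mature students": "materially above",
    }

    for split, flag in split_flags.items():
        # Consistent splits -> blank cell; inconsistent -> show the flag.
        cell = "" if flag == overall_flag else flag
        print(f"{split:24s} | {cell}")
    ```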

    Of course, this was more than possible to do in previous dashboards, but it required manual searching and some degree of judgement – was there an inconsistent pattern, and did it appear to be material? The Split Consistency tab does the work for you.

    The next tab gives you the Detailed View, through which you can explore overall NSS data and Splits in even more depth. It is at this point that the Dashboard requires a bit more expertise to use. Filters are available to select and search in more detail, allowing drill down to understand inconsistent performance (e.g. across full-time and part-time, or degree apprenticeships, by subject and by student group).

    Back to the Overview page and Outcomes. You are now in the territory of Continuation, Completion and Progression, benchmarked and presented across modes of full-time, part-time and degree apprenticeships. Again, Indicators (e.g. % Positive) are presented much more clearly than in the previous dashboard, and statistical confidence about the materiality of difference against benchmark is given in percentage terms and colour coded. The data are also presented in a Split Consistency tab and in Detailed View, including partnership information. Whether performance is probably below the B3 Threshold is usefully colour coded, and the information includes the Graduate Outcomes quintile.

    The B3 Thresholds tab will be invaluable for Provider planners, giving an immediate view of whether performance is in line with, above or below B3 thresholds, colour coded, for Continuation, Completion and Progression. Data are there by mode, level and splits: time series, taught by provider, course type, subject and student groups.

    One useful element of the 2023 TEF Data Dashboard was the ability to search (albeit manually) whether the performance of an individual student group or an individual subject area is materially inconsistent across different categories of the NSS or over time: is the performance of your Business course materially below benchmark across the various sections of the NSS, or over time, or for particular student groups? This information can be found through filtering in Detailed View – easy-peasy once you construct the right filters, but requiring careful thought.
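
    For those who end up rebuilding such filters offline, the kind of query involved is straightforward. Here is a hedged sketch using pandas – the column names and values are invented for illustration, not the dashboard’s actual data model.

    ```python
    import pandas as pd

    # Invented example data: one flagged indicator per subject, NSS section
    # and year. Column names are illustrative assumptions.
    indicators = pd.DataFrame({
        "subject": ["Business", "Business", "Business", "Nursing"],
        "nss_section": ["Teaching on My Course", "Assessment and Feedback",
                        "Academic Support", "Teaching on My Course"],
        "year": [2023, 2023, 2024, 2024],
        "flag": ["broadly in line", "materially below",
                 "materially below", "broadly in line"],
    })

    # "Is my Business course materially below benchmark anywhere?"
    below = indicators[(indicators["subject"] == "Business")
                       & (indicators["flag"] == "materially below")]
    print(below[["nss_section", "year"]])
    ```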

    Interpreting data does require planning for analytics resource and expertise. Carried out well, the new dashboard will be invaluable in helping to better understand and crystallise the areas that are doing well and those that need real attention to improve.

    I was concerned at the disappearance, at the turn of the year, of the 2024 TEF Data Dashboard. No longer would you be able to look across 2023 and 2024 to chart performance, assess trajectory and construct your narrative. However, the latest TEF Data Dashboard gives you the right time series to track whether your metrics journey is improving or worsening, and to tell your story, so 2024 is arguably no longer needed.

    A minor gripe: perhaps replace the red/green colour coding with a yellow/blue option for those who are red/green insensitive? And, more substantively, it would be useful to have an Overview tab that summates Experience and Outcomes.

    In conclusion: the new TEF Data Dashboard is a great innovation. It’s easy to use, even fun (for geeks like me), absorbing and intuitive in parts. It’s fully functional, ensuring initial use is not hampered by lack of expertise, though using Filters and Splits will require some practice and training. The rollout of this data-driven improvement Dashboard is a great one.

    This blog was originally published on 23rd February and has since been updated.


  • Quality assurance behind the dashboard

    The depressing thing about the contemporary debate on the quality of higher education in England is how limited it is.

    From the outside, everything is about structures, systems, and enforcement: the regulator will root out “poor quality courses” (using data of some sort), students have access to an ombuds-style service in the Office of the Independent Adjudicator, and the B3 and TEF arrangements mean that regulatory action will be taken. And so on.

    The proposal on the table from the Office for Students at the moment doubles down on a bunch of lagging metrics (continuation, completion, progression) and one limited lagging measure of student satisfaction (NSS) underpinning a metastasised TEF that will direct plaudits or deploy increasingly painful interventions based on a single precious-metal scale.

    All of these sound impressive, and may give your academic registrar sleepless nights – but none of them offer meaningful and timely redress to the student who has turned up for a 9am lecture to find that nobody has turned up to deliver it – again. Which is surely the point.

    It is occasionally useful to remember how little these kinds of visible, sector-level quality assurance systems have to do with actual quality assurance as experienced by students and others, so let’s look at how things currently work and break it down by need state.

    I’m a student and I’m having a bad time right now

    Continuation data and progression data published in 2025 reflect the experience of students who graduated between 2019 and 2022; completion data refer to cohorts between 2016 and 2019; the NSS reflects the opinions of final year students and is published the summer after they graduate. None of these contain any information about what is happening in labs, lecture theatres, and seminar rooms right now.

    As students who have a bad experience in higher education don’t generally get the chance to try it again, any useful system of quality assurance needs to be able to help students in the moment – and the only realistic way that this can happen is via processes within a provider.

    From the perspective of the student, the most common of these are module feedback (the surveys conducted at the end of each unit of teaching) and the work of the student representative (a peer with the ability to feed back on behalf of students). Beyond this, students have the ability to make internal complaints, ranging from a quiet word with the lecturer after the seminar to a formal process with support from the Students’ Union.

    While little national attention has been paid in recent years to these systems and pathways, they represent pretty much the only chance that an issue students are currently facing can be addressed before it becomes permanent.

    The question needs to be whether students are aware of these routes and feel confident in using them – it’s fair to say that experience is mixed across the sector. Some providers are very responsive to the student voice, others may not be as quick or as effective as they should be. Our only measure of these things is via the National Student Survey – about 80 per cent of the students in the 2025 cohort agree that students’ opinions about their course are valued by staff, while a little over two-thirds agree that it is clear that student feedback is acted upon.

    Both of these are up on the equivalent questions from about five years ago, suggesting a slow improvement in such work, but there is scope for such systems to be reviewed and promoted nationally – everything else is just a way for students to possibly seek redress long after anything could be done about it.

    I’m a graduate and I don’t know what my degree is worth / I’m an employer and I need graduate skills

    The value of a degree is multifaceted – and links as much to the reputation of a provider or course as to the hard work of a student.

    On the former, much of the heavy lifting is done by the way the design of a course conforms to recognised standards. For more vocational courses, these are likely to have been set by professional, statutory, and regulatory bodies (PSRBs) – independent bodies who set requirements (with varying degrees of specificity) around what should be taught on a course and what a graduate should be capable of doing or understanding.

    Where no PSRB exists, course designers are likely to map to the QAA Subject Benchmarks, or to draw on external perspectives from academics in other universities. As links between universities and local employment needs solidify, the requirements set by local skills improvement plans (LSIPs) will play a growing part – and it is very likely that these will be mapped to the UK Standard Skills Classification descriptors.

    The academic standing of a provider is nominally administered by the regulator – in England the Office for Students has power to deregister a provider where there are concerns, making it ineligible for state funding and sparking a media firestorm that will likely torch any residual esteem. Events like this are rare – standards are generally maintained via a semi-formal system of cross-provider benchmarking and external examination, leavened by the occasional action of whistleblowers.

    That’s also a pretty good description of how we assure that the mark a graduate is awarded makes sense when compared to the marks awarded to other graduates. External examiners here play a role in ensuring that standards are consistent within a subject, albeit usually at module rather than course level; it’s another system that has been allowed (and indeed actively encouraged) to atrophy, but it still remains the only way of doing this stuff in anything approaching real time.

    I’m an international partner and I can’t be sure that these qualifications align with what we do

    Collaborating internationally, or even studying internationally, often requires some very specific statements around the quality of provision. One popular route to doing this is being able to assert that your provider meets well-understood international standards – the ESG (Standards and Guidelines for Quality Assurance in the European Higher Education Area) represent probably the most common example.

    Importantly, the ESG does not set standards about teaching and learning, or awarding qualifications – it sets standards for the way institutional quality assurance processes are assessed by national bodies. If you think that this is incredibly arm’s length you would be right, but it is also the only way of ensuring that the bits of quality assurance that interface with the student experience in near-real-time actually work.

    I am an academic and I want to design courses and teach students in ways that help them to succeed

    Quality enhancement – beyond compliance with academic standards – is about supporting academic staff in making changes to teaching and learning practice (how lectures are delivered, how assessments are designed, how individual support is offered). It is often seen as an add-on, but should really be seen as a core component of any system of quality assurance. Indeed, in Scotland, regulatory quality assurance in the form of the Tertiary Quality Enhancement Framework starts from the premise that tertiary provision needs to be “high quality” and “improving”.

    Outside of Scotland, the vestiges of a previous UK-wide approach to quality enhancement exist in the form of AdvanceHE. Many academic staff will first encounter the principles and practice of teaching quality enhancement via developing a portfolio to submit for fellowship – increasingly a prerequisite for academic promotion. AdvanceHE also supports standards designed to underpin training in teaching for new academic staff, as well as support networks. The era of institutional “learning and teaching offices” (another vestige of a previous government-sponsored measure to support enhancement) is mostly over, but many providers have networks of staff with an interest in the practice of teaching in higher education.

    So what does the OfS actually do?

    In England, the Office for Students operates a deficit model of quality assurance. It assumes that, unless there is some evidence to the contrary, an institution is delivering higher education at an appropriate level of quality. Where the evidence exists for poor performance, the regulator will intervene directly. This is the basis of a “risk based” approach to quality assurance, where more effort can be expended in areas of concern and less burden placed on providers.

    For a system like this to work in a way that addresses any of the needs detailed above, OfS would need far more, and more detailed, information on where things are going wrong as soon as they happen. It would need to be bold in acting quickly, often on the basis of incomplete or emerging evidence. Thus far, OfS has been notably averse to legal risk (having had its fingers burned by the Bloomsbury case), and has failed (despite a sustained attempt in the much-maligned Data Futures) to meaningfully modernise the process of data collection and analysis.

    It would be simpler and cheaper for OfS to support and develop institutions’ own mechanisms to support quality and academic standards – an approach that would allow for student issues to be dealt with quickly and effectively at that level. A stumbling block here would be the diversity of the sector, with the unique forms and small scale of some providers making it difficult to design any form of standardisation into these systems. The regulator itself, or another body such as the Office of the Independent Adjudicator (as happens now), would act as a backstop for instances where these processes do not produce satisfactory results.

    The budget of the Office for Students has grown far beyond the ability of the sector to support it (as was originally intended) via subscription. It receives more than £10m a year from the Department for Education to cover its current level of activity – it feels unlikely that more funds will arrive from either source to enable it to quality assure 420 providers directly.

    All of this would be moot if there were no current concerns about quality and standards. And there are many – stemming both from corners being cut (and systems being run beyond capacity) due to financial pressures, and from a failure to regulate in a way that grows and assures a provider’s own capacity to manage quality and standards. We’ve seen evidence from the regulator itself that the combination of financial and regulatory failures has led to many examples of quality and standards problems: courses and modules closed without suitable alternatives for students, difficulties faced by students in accessing staff and facilities due to overcrowding or underprovision, and concerns about upward pressure on marks from a need to bolster continuation and completion rates.

    The route out of the current crisis needs to be through improvement in providers’ own processes, and that would take something that the OfS has not historically offered the sector: trust.
