This blog was kindly authored by Professor Janice Kay CBE, Director, Higher Futures ([email protected])
Overall verdict: Compared with the TEF 2023 Data Dashboard, the latest one handles more like the dash of a modern EV. Go steadily at first and have a play, safe in the knowledge that you aren’t going to damage the respectable vehicle you are looking at in the 360-degree camera. The new TEF Data Dashboard is a very powerful instrument, and at first look it is clear and intuitive. But if you go more deeply into the data, using the Filters for example, you will need to move from “no experience needed” to more expert skill.
Data are fundamental to maintaining and improving the student experience. Data, in the form of statistically benchmarked indicators, inform understanding and, in the right hands, direct strategic delivery. Universities are keen on big institutional interventions, often running several at the same time, sometimes without the clarity that data could give them. Often these interventions aren’t evaluated well, don’t work or run into the sand. Staff become cynical; students are left unaware or confused. Benchmarked data help prioritisation and selection for effective delivery. Reliable Data Dashboards are essential.
And, therefore, for those who love data and understanding competitor positions, the new TEF Data Dashboard, launched this week, is essential to integrated quality and improvement.
I tested it in TEF Panel member mode and looked at the data for a variety of Providers whose indicators and performance I had known in TEF 2023. This included universities (low to high tariff institutions), colleges (FECs, private providers) and specialists. I thought about it as I would if I were assessing the TEF performance of a higher education provider across Student Experience and Outcomes, and from the perspective of a provider wishing to understand its data over the Time Series.
For both functions, the improved power and handling are very welcome. Start as you would have done in the 2023 dashboard by choosing either Experience or Outcomes and you are presented with a series of deceptively simple tabs. The Overall Experience tab presents performance across basic sections of the National Student Survey by Measure (e.g. Teaching on My Course, Assessment and Feedback) and Mode (e.g. Full Time, Part-Time, Degree Apprenticeship).
Gone are the complicated illustrations of overall distribution and variability, showing central tendencies alongside spread of results. Instead, data are simply numerical and colour coded. They reference statistical confidence about whether a result is materially above, broadly in line with, or materially below benchmark, and it’s extremely easy to evaluate how an institution is doing overall. No judgement needed: the Dashboard does it for you. It will be extremely helpful for providers, reviewers and student representatives alike.
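The three-way colour coding described above can be sketched as a simple classification. This is an illustrative sketch only: the confidence value and the 95/5 cut-offs below are assumptions for the sake of example, not the OfS’s actual statistical rules.

```python
def classify(indicator: float, benchmark: float, confidence_above: float) -> str:
    """Classify a benchmarked indicator into the Dashboard's three bands.

    confidence_above is an illustrative statistical confidence (0-100) that
    the provider's true value sits above benchmark; the thresholds here are
    invented for illustration, not taken from the OfS methodology.
    """
    if confidence_above >= 95 and indicator > benchmark:
        return "materially above benchmark"
    if confidence_above <= 5 and indicator < benchmark:
        return "materially below benchmark"
    return "broadly in line with benchmark"

print(classify(82.0, 78.5, 97.0))  # -> materially above benchmark
print(classify(74.0, 78.5, 50.0))  # -> broadly in line with benchmark
```

The point is that the reader never sees the raw statistics: only the resulting band, rendered as a colour.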
It gets better. To really understand and to be able to improve performance, one of the most crucial elements is to know how consistent the overall findings are, by measure (NSS) and mode (FT, PT etc) and Time. Unless an institution can drill down to its Split benchmarked data – how provider subjects and student groups are doing over time – it’s impossible to get a grip on understanding what’s going well and what isn’t and to work out how to improve performance.
The Split Consistency tab provides you with that information at a glance. Still welcome is the Partnership Split which gives clear guidance about performance of partnership students, with franchised degrees, for example.
The Split overview display focuses on the performance of the Splits, making it easier to see whether, and by how much, the data are inconsistent. Take Teaching on my Course for a particular provider as an example. Imagine it is overall Broadly in Line with benchmark (marked with a black circle). How consistent are the subgroups with this performance? The display gives you an immediate answer: splits consistent with the overall pattern (broadly in line, in this case) appear as blank cells, while inconsistent splits appear in other cells, above or below benchmark. It is therefore easy to see whether the performance of full-time students across different subgroups is consistent or inconsistent for Teaching on my Course.
Of course, this was possible to do in previous dashboards, but it required manual searching and some degree of judgement: was there an inconsistent pattern, and did it appear to be material? The Split Consistency tab does the work for you.
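The blank-cell convention amounts to surfacing only the exceptions. A minimal sketch, with invented split names and judgements, of what the tab is doing for you:

```python
# Hypothetical judgements for one measure (Teaching on my Course);
# the split names and values are invented for illustration.
overall = "broadly in line"

splits = {
    "Full-time": "broadly in line",
    "Part-time": "materially below",
    "Mature students": "broadly in line",
    "Business & Management": "materially above",
}

# Splits matching the overall judgement render as blank cells;
# only the inconsistent ones are surfaced.
inconsistent = {split: judgement
                for split, judgement in splits.items()
                if judgement != overall}

for split, judgement in inconsistent.items():
    print(f"{split}: {judgement}")
```

Everything consistent disappears from view, so the reader’s eye lands straight on the subgroups that need attention.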
The next tab gives you the Detailed View, through which you can explore overall NSS data and Splits in even more depth. It is at this point that the Dashboard requires a bit more expertise to use. Filters are available to select and search in more detail, allowing drill-down to understand inconsistent performance (e.g. across full-time and part-time, or degree apprenticeships, by subject and by student group).
Back to the Overview page and Outcomes. You are now in the territory of Continuation, Completion and Progression, benchmarked and presented across the modes of full-time, part-time and degree apprenticeships. Again, indicators (e.g. % Positive) are presented much more clearly than in the previous dashboard, and statistical confidence about the materiality of difference against benchmark is given in percentage terms and colour coded. The data are also presented in a Split Consistency tab and in Detailed View, including partnership information. Whether performance is probably below the B3 Threshold is usefully colour coded, and information includes the Graduate Outcomes quintile.
The B3 Thresholds tab will be invaluable for Provider planners, giving an immediate view of whether performance is in line with, above or below B3 thresholds, colour coded, for Continuation, Completion and Progression. Data are there by mode, level and splits: time series, taught by provider, course type, subject and student groups.
One useful element of the 2023 TEF Data Dashboard was the ability to search (albeit manually) whether the performance of an individual student group or an individual subject area is materially inconsistent across different categories of the NSS or over time: is the performance of your Business course materially below benchmark across the various sections of the NSS, over time, or for particular student groups? This information can be found through filtering in Detailed View – easy-peasy once you construct the right filters, but constructing them requires careful thought.
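Constructing the right filter is essentially a query over the detailed records. A hedged sketch, with invented rows, of the Business-course question posed above:

```python
# Hypothetical detailed-view records; subjects, sections, years and
# flags are all invented for illustration.
rows = [
    {"subject": "Business", "section": "Assessment and Feedback",
     "year": 2022, "flag": "materially below"},
    {"subject": "Business", "section": "Teaching on My Course",
     "year": 2023, "flag": "broadly in line"},
    {"subject": "History",  "section": "Assessment and Feedback",
     "year": 2023, "flag": "materially below"},
]

# The filter: where does Business sit materially below benchmark?
business_below = [r for r in rows
                  if r["subject"] == "Business"
                  and r["flag"] == "materially below"]

for r in business_below:
    print(r["section"], r["year"])
```

The Dashboard’s Filters do this interactively; the careful thought is in deciding which conditions to combine.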
Interpreting data does require planned analytics resource and expertise. Carried out well, the new dashboard will be invaluable in helping to understand and crystallise the areas that are doing well and those that need real attention to improve.
I was concerned at the disappearance at the turn of the year of the 2024 TEF Data Dashboard. No longer would you be able to look across 2023 and 2024 to chart performance, assess trajectory and construct your narrative. However, the latest TEF Data Dashboard gives you the right time series to track whether your metrics journey is improving or worsening, to tell your story, so 2024 is arguably no longer needed.
A minor gripe: perhaps replace the red/green colour coding with a yellow/blue option for those who are red/green insensitive? And, more substantively, it would be useful to have an Overview tab that summates Experience and Outcomes.
In conclusion: the new TEF Data Dashboard is a great innovation. It’s easy to use, even fun (for geeks like me), absorbing and intuitive in parts. It’s fully functional from the start, so initial use is not hampered by lack of expertise, although using Filters and Splits will require some practice and training. This data-driven improvement Dashboard is a great rollout.
This blog was originally published on 23rd February and has since been updated.


