Wires within wires – the hidden complexity of managing higher education assessment

Back in the day, assessment was a relatively self-contained process: students produce a thing, academics mark the thing, results get recorded in a spreadsheet and then published. These days, that’s an outdated mental model – and not just because of AI.

A recent conversation with members of the Academic Registrars’ Council Assessment Practitioner Group offers a guided tour of an extraordinarily intricate machine, one that most students – and probably quite a few staff, too – barely know is running. Exploring the mechanics of administering assessment in contemporary higher education sheds light on what might, on the face of it, look like a “process” issue, but in reality takes in strategy, policy, pedagogy, technology, and institutional culture.

Digital first

The starting point is always disciplinary variation: unless you are literally a single-programme institution it is impossible to design a singular institutional system for managing assessment within a defined timeframe. Professional bodies have specific expectations and requirements, and different disciplines have different assessment cultures. Any assessment management process already has to accommodate significant disciplinary and programme differences in timing and mode of assessment.

By now, the journey towards digitisation of assessment is all but complete, unless you’re talking about a laboratory or clinical practical exam. Even traditional invigilated exams, where these exist, typically require digitisation at some point in their journey, or may be undertaken on locked-down devices. The end-to-end assessment “process” – from the moment an academic team designs an assessment, through submission, marking, moderation, ratification, and the eventual release of results into the student record system – has as a consequence become remarkably complicated, and involves a lot more people than it used to. Digital assessment – in both senses of assessments as digital rather than physical artefacts, and of the assessment management process being conducted primarily by digital means – requires a much greater degree of central input, via IT teams, academic registry and learning technologists.

Course administration teams are now routinely engaging with VLE systems that might previously have been managed by dedicated learning technology units. IT teams are supporting students through digital processes that touch every stage of the assessment lifecycle. The coordination is more complex, the linkages through systems – VLEs, student record systems, integration platforms – are more numerous, and the number of colleagues who need to understand and operate within those systems has grown. Technology offers the prospect of streamlining assessment and ensuring consistency and accuracy in marking and moderation. But the number of moving parts required to achieve that streamlining effect is significant.

Enter AI

AI has obviously thrown a major spanner into efforts to create a seamless digital “flow” for assessment. Many institutions had shifted away from in-person exams during the Covid-19 pandemic in favour of other forms of assessment. This shift tallied closely with ongoing efforts to develop more engaging, authentic forms of assessment, building in more choice for students, in more diverse modes of assessment, and ideally, reducing the overall number of assessments.

The immediate pressure to secure academic standards in the wake of the advent of generative AI pushed some back into the exam hall. Once you’re there, the logic of digital means it’s much more efficient (in one sense) if exams are undertaken digitally. But digital examinations also create additional complexity, requiring supported devices configured in a particular way, locked-down browsers, technical support on the day, and invigilators who can troubleshoot not just exam room behaviour but also password lockouts and software glitches.

Outside the realm of the locked-down assessment, there’s a recognition that a diet of exams alone isn’t going to serve student learning well, and that as AI becomes ever more deeply embedded into knowledge work, using AI judiciously and strategically will itself become part of the assessment and its associated learning outcomes. There’s an open question about how long it makes sense to adopt a policy of “declare”. Thinking and practice are evolving rapidly; what was viewed as problematic yesterday might begin to feel OK by tomorrow. At some point, the argument goes, AI becomes part of the fabric of how work is done. But that “at some point” is doing a lot of heavy lifting, and given the pace of technological change higher education institutions are not in a position to declare a collective consensus on where the line falls.

Being reasonable

Less high profile, but in some ways much more critical, is the increasing number of students who require reasonable adjustments in their assessments, or who are putting in requests for extenuating circumstances. This reflects the degree to which students are themselves juggling the complexities of disability, periods of ill health, and responsibilities outside the classroom. “Inclusive by design” is the aspiration for most curricula, but there will always be exceptions, and collective understanding of how students’ needs manifest in assessment settings is also changing rapidly, leading to a greater number of requests for flexibility than can easily be integrated into existing processes.

One member of the network observed that policies are typically written in a way that does not account for the likelihood of scale – they bake in an assumption that flexibility in assessment processes will be the exception rather than a norm. What would be a minor and reasonable administrative burden in the interests of a level playing field for a minority of students quickly becomes immense when the numbers increase. There’s a growing sense that something needs to change – not just in how these claims are processed, but in the underlying policy assumptions.

The overarching purpose of assessment regulations is to maintain and safeguard academic standards, but some of the traditions embedded in those regulations may be doing less to uphold standards than to create hurdles that students must clear, without any real benefit to academic integrity. As one member of the network suggested, seeing the experience from the student’s perspective helps to frame institutional policy as “supporting students through assessment rather than punishing students through assessment.” If, for example, one network member asked, referral marks were not capped, would there still be a need for the current extenuating circumstances infrastructure? It’s a thought experiment, not a policy recommendation, and there are arguments on both sides. But it illustrates a broader willingness to ask “first principles” questions about whether there are ways of being more supportive of students while still protecting standards.

Regulation may be a barrier to experimentation for some. When external regulation takes a hardline approach to standards, it can make institutions more cautious about the kind of innovative policy rethinking that could serve students better. Navigating that territory requires a careful balance between doing the right thing educationally and managing the risks of attracting unwelcome regulatory attention.

Efficiency is only one of the watchwords

As the sector deals with the implications of a shrinking resource base, it’s not surprising that academic registrars report feeling a pressure to streamline, seek efficiencies and demonstrate a process with as few administrative overheads as possible. While nobody disagrees in principle with the need for efficiency, it simply is not possible to talk about assessment without talking about systems integration, institutional policy and strategy, the student experience, the onward march of technology, the demands of professional bodies, and a funding model that leaves very little room for investment. Each of these factors connects to all the others.

In a policy landscape where there is much, sometimes glib, talk of efficiency and transformation, it is worth keeping in mind that the people who actually run the institutional processes are not dealing with lumbering bureaucracy. Instead, they are dealing with a dense and high-stakes challenge that touches every part of the institution, and that is shaped by every part of the institution’s external environment. Managing it well is not about streamlining it into simplicity. It is about building the institutional capacity – in people, systems, policy, and culture – to hold that complexity intelligently.

This article is published as part of a partnership with UNIwise. Debbie would like to thank the members of the Academic Registrars’ Council Assessment Practitioner Group for their insight in developing this article. To inquire about joining the network contact group chair Rebecca Di Pancrazio, academic registrar at the University of Portsmouth.