The AI arms race still rages. Students will identify AI writing support tools, educators will rearm themselves with AI-aware plagiarism-detection software, and students will source apps that can bypass the detection software. Institutions are increasingly prioritising the ease with which mass assessments can be marked. Governments are revising legislation that banned ‘essay mills’ used for contract cheating to incorporate restrictions on Generative AI.
Students may find themselves asked to handwrite their submissions to avoid the temptation to use the generative AI tools now fully integrated into their word-processing software. Some shrewd students will (re)discover software that takes typed text, AI-generated or otherwise, and turns it into a version that mimics their handwriting (calligraphr.com).
Many institutions have already been thinking about this for years. In 2021, UNESCO issued a useful report, ‘AI and Education’, which remains a foundational read for any institutional leader who wants to go head-to-head with their head of technology services, and to be informed when members of the Senate repeat dystopian viewpoints gleaned from their social media feeds.
What we need is a revolution in the design of university assessments. This also means some radical redesign of programmes and courses. Institutions should be redefining what assessment looks like. Not just because many of the assessments currently on offer lend themselves too easily to plagiarism, contract cheating, or AI-generated responses, but also because they are bad assessments. Bad assessments, loosely designed to measure badly written learning outcomes.
Many universities face a fundamental problem: their entire assessment philosophy (if they have one) remains rooted in a measuring psychosis, one that justifies itself by measuring what the learner knows now, rather than what they could do before they undertook a specific course or degree and how much they have improved. Each course is assessed against its own learning outcomes (where these exist, and assuming they are actually well formed). The odds are that these outcomes are heavily weighted towards cognitive outcomes and have not moved beyond Bloom’s standard pyramid.
Rarely are these course-level outcomes accurately mapped and weighted against programme outcomes. A student should always be able to match the assessment they are asked to complete against the set of skills expected as the outcome of a specific course, and those skills should be clearly mapped onto programme outcomes. Each assessment task is marked against some formulation of rubrics or marking guides, often with multiple markers making controlled, monitored judgements in an attempt to ensure just (not standardised) marks.
Unfortunately, it remains common to see all of these cohort-marked assessments plotted against a bell curve, with top marks ‘brought back into line’ where convention dictates.
Why? Surely the purpose of undertaking a university degree is self-improvement. There is a minimum threshold that I must meet, a pass mark, that allows me to demonstrate that I am capable of certain things, certain abilities or skills. But beyond that? If I got a second-class honours degree and my friend got a first, does that mean they know more than I do? Currently, given the emphasis on cognitive skills and knowledge, one can fairly say yes. Does it mean they are necessarily more proficient out there in the big, wide world? Probably not. We are simply not assessing the skills and abilities that most graduates need.
I advocate for universities to abandon isolated course-specific assessments in favour of programme-wide portfolio assessments. These are necessarily ipsative, capturing each student’s disparate strengths and weaknesses relative to their own performance over time. There may be pass/fail assessments within any portfolio, but there are also opportunities for annual or thematic synoptic assessments. Students would be encouraged to draw on their contributions to the university drama club, the volleyball team, or their part-time work outside the university.
I undertook a short consultancy last year for a university that had been a bit freaked out by the advent of Generative AI. The head of department had a moment of realisation that the vast majority of the degree’s assessment was based on knowledge recovery and transmission. In reality, of course, their assessment strategy had been flawed long before the advent of ChatGPT. They had struggled with plagiarism detection, imperfect as it is, and with students reproducing answers that differed only at the margins.
The existing assessment certainly made it easier for them to have external markers look for specific words matching a pro forma answer. No educational developer worth their salt would have looked at this particular assessment strategy and thought it was in any way valid. The perceived threat to assessment integrity does offer an opportunity for those still naive enough to think that essay questions demonstrate anything other than the ability to regurgitate existing knowledge and, at best, an ability to write in a compelling way. Unless such writing is a skill required by the programme of study, it is a fairly pointless exercise.
Confidentiality means I don’t wish to identify the organisation, let alone the department, in question. What became abundantly clear was that the assessment strategy had been devised piecemeal as the programme grew. As student numbers increased, significant amounts of the marking had been contracted out, leaving markers increasingly removed from individual students’ actual experiences.
Surely one can see that it will become pointless to ask students to answer knowledge-based questions beyond a diagnostic exercise early in each course or programme.
So what’s the alternative? With very rare exceptions, tertiary students will have lived for at least 18 years. They have life experiences that make their perspectives different from those of their fellow students. If we can design our assessments around individuals’ personal epistemology, culture, and experience, we have a chance to differentiate between them. We can build assessment incrementally within specific courses and programmes, with each course in a programme building on those before it. In the case of this particular client, I suggested that eliminating as many electives as possible and narrowing the options would not deter applicants and would make the design of assessment strategies within the programme more coherent.
Developing a personal portfolio of evidence throughout a programme of study gives students both a sense of ownership over their own learning and, potentially, a resource they will continue to augment once they graduate. The intention is to develop an incremental assessment approach: students in their third year of study would be asked to review coursework from previous years, for example, or to comment on and provide feedback for students in earlier years of the same programme. Blending the ipsative nature of assessments with credit-bearing assessment tasks is the crucial skill now required of learning designers.
Maybe now is a good time for you to review your learning outcomes and ask: are you actually assessing skills and attributes?
Paid subscribers will have access to assessment design tools.



















