Tag: Assessment

  • We cannot address the AI challenge by acting as though assessment is a standalone activity

    How to design reliable, valid and fair assessment in an AI-infused world is one of those challenges that feels intractable.

    The scale and extent of the task, it seems, outstrips the available resource to deal with it. In these circumstances it is always worth stepping back to re-frame, perhaps reconceptualise, what the problem is, exactly. Is our framing too narrow? Have we succeeded (yet) in perceiving the most salient aspects of it?

    As an educational development professional, seeking to support institutional policy and learning and teaching practices, I’ve been part of numerous discussions within and beyond my institution. At first, we framed the problem as a threat to the integrity of universities’ power to reliably and fairly award degrees and to certify levels of competence. How do we safeguard this authority and credibly certify learning when the evidence we collect of learning having taken place can be mimicked so easily, and the mimicry is so hard to detect?

    Seen this way the challenge is insurmountable.

    But this framing positions students as devoid of ethical intent, love of learning for its own sake, or capacity for disciplined “digital professionalism”. It also absolves us of the responsibility of providing an education which results in these outcomes. What if we frame the problem instead as a challenge of AI to higher education practices as a whole and not just to assessment? We know the use of AI in HE ranges widely, but we are only just beginning to comprehend the extent to which it redraws the basis of our educative relationship with students.

    Rooted in subject knowledge

    I’m finding that some very old ideas about what constitutes teaching expertise and how students learn are illuminating: the very questions that expert teachers have always asked themselves are in fact newly pertinent as we (re)design education in an AI world. This challenge of AI is not as novel as it first appeared.

    Fundamentally, we are responsible for curriculum design which builds students’ ethical, intellectual and creative development over the course of a whole programme in ways that are relevant to society and future employment. Academic subject content knowledge is at the core of this endeavour and it is this which is the most unnerving part of the challenge presented by AI. I have lost count of the number of times colleagues have said, “I am an expert in [insert relevant subject area], I did not train for this” – where “this” is AI.

    The most resource-intensive need that we have is for an expansion of subject content knowledge: every academic who teaches now needs a subject content knowledge which encompasses a consideration of the interplay between their field of expertise and AI, and specifically the use of AI in learning and professional practice in their field.

    It is only on the basis of this enhanced subject content knowledge that we can then go on to ask: what preconceptions are my students bringing to this subject matter? What prior experience and views do they have about AI use? What precisely will be my educational purpose? How will students engage with this through a newly adjusted repertoire of curriculum and teaching strategies? The task of HE remains a matter of comprehending a new reality and then designing for the comprehension of others. Perhaps the difference now is that the journey of comprehension is even more collaborative and even less finite than it once would have seemed.

    Beyond futile gestures

    All this is not to say that the specific challenge of ensuring that assessment is valid disappears. A universal need for all learners is to develop a capacity for qualitative judgement and to learn to seek, interpret and critically respond to feedback about their own work. AI may well assist in some of these processes, but developing students’ agency, competence and ethical use of it is arguably a prerequisite. In response to this conundrum, some colleagues suggest a return to the in-person examination – even as a baseline to establish in a valid way levels of students’ understanding.

    Let’s leave aside for a moment the argument about the extent to which in-person exams were ever a valid way of assessing much of what we claimed. Rather than focusing on how we can verify students’ learning, let’s emphasise more strongly the need for students themselves to be in touch with the extent and depth of their own understanding, independently of AI.

    What if we reimagined the in-person high stakes summative examination as a low-stakes diagnostic event in which students test and re-test their understanding, capacity to articulate new concepts or design novel solutions? What if such events became periodic collaborative learning reviews? And yes, also a baseline, which assists us all – including students, who after all also have a vested interest – in ensuring that our assessments are valid.

    Treating the challenge of AI as though assessment stands alone from the rest of higher education is too narrow a frame – one that consigns us to a kind of futile authoritarianism which renders assessment practices performative and irrelevant to our and our students’ reality.

    There is much work to do in expanding subject content knowledge and in reimagining our curricula and reconfiguring assessment design at programme level such that it redraws our educative relationship with students. Assessment more than ever has to become a common endeavour rather than something we “provide” to students. A focus on how we conceptualise the trajectory of students’ intellectual, ethical and creative development is inescapable if we are serious about tackling this challenge in a meaningful way.

  • Mental health screeners help ID hidden needs, research finds

    Key points:

    A new DESSA screener to be released for the Fall ‘25 school year–designed to be paired with a strength-based student self-report assessment–accurately predicted well-being levels in 70 percent of students, a study finds.  

    According to findings from Riverside Insights, creator of research-backed assessments, even students with strong social-emotional skills often struggle with significant mental health concerns, challenging the assumption that resilience alone indicates student well-being. The study, which examined outcomes in 254 middle school students across the United States, suggests that combining risk and resilience screening can enable identification of students who would otherwise be missed by traditional approaches.

    “This research validates what school mental health professionals have been telling us for years–that traditional screening approaches miss too many students,” said Dr. Evelyn Johnson, VP of Research & Development at Riverside Insights. “When educators and counselors can utilize a dual approach to identify risk factors, they can pinpoint concerns and engage earlier, and in a targeted way, before concerns become major crises.”

    The study, which offered evidence of, for example, social skills deficits among students with no identifiable emotional or behavioral concerns, provides the first empirical evidence that consideration of both risk and resilience can enhance the predictive benefits of screening, when compared to strengths-based screening alone.

    In the years following COVID, many educators noted a feeling that something was “off” with students, despite DESSA assessments indicating that things were fine.

    “We heard this feedback from lots of different customers, and it really got our team thinking–we’re clearly missing something, even though the assessment of social-emotional skills is critically important and there’s evidence to show the links to better academic outcomes and better emotional well-being outcomes,” Johnson said. “And yet, we’re not tapping something that needs to be tapped.”

    For a long time, if a person displayed no outward or obvious mental health struggles, they were thought to be mentally healthy. In investigating the various theories and frameworks guiding mental health issues, Riverside Insights’ team dug into Dr. Shannon Suldo’s work, which centers around the dual factor model.

    “What the dual factor approach really suggests is that the absence of problems is not necessarily equivalent to good mental health–there really are these two factors, dual factors, we talk about them in terms of risk and resilience–that really give you a much more complete picture of how a student is doing,” Johnson said.

    “The efficacy associated with this dual-factor approach is encouraging, and has big implications for practitioners struggling to identify risk with limited resources,” said Jim Bowler, general manager of the Classroom Division at Riverside Insights. “Schools told us they needed a way to identify students who might be struggling beneath the surface. The DESSA SEIR ensures no student falls through the cracks by providing the complete picture educators need for truly preventive mental health support.”

    The launch comes as mental health concerns among students reach crisis levels. More than 1 in 5 students considered attempting suicide in 2023, while 60 percent of youth with major depression receive no mental health treatment. With school psychologist-to-student ratios at 1:1065 (recommended 1:500) and counselor ratios at 1:376 (recommended 1:250), schools need preventive solutions that work within existing resources.

    The DESSA SEIR will be available for the 2025-2026 school year.

    This press release originally appeared online.


  • Transforming higher education learning, assessment and engagement in the AI revolution: the how

    • By Derfel Owen, London School of Hygiene & Tropical Medicine and Janice Kay, Higher Futures.

    Generative AI and other new technologies create unprecedented challenges to some of the deepest and longest-held assumptions about how we educate and support students. We start from a position that rejects a defensive stance that attempts to protect current practice from the perceived threat of AI. Bans, restrictions and policies to limit AI use have emerged in an effort to uphold existing norms. Such approaches risk isolating and alienating students, who are using AI anyway, and will fail to address its broader implications. The point is that AI forces us to reconsider current assumptions about how we teach, how we help students to learn, how we assess and how we engage and support. Four areas of how we educate require a greater focus:

    • Critical Thinking and Problem-Solving: Teaching students to evaluate, analyse, and synthesise information while questioning AI-generated outputs.
    • Creativity and Innovation: Focusing on nurturing original ideas, divergent thinking, and the ability to combine concepts in novel ways.
    • Emotional Intelligence: Prioritising skills like empathy, communication, and collaboration, essential for leadership, teamwork, and human connection.
    • Ethical Reasoning: Training students to navigate ethical dilemmas and critically evaluate the ethical implications of AI use in society.

    Here we set out some practical steps that can be taken to shift us in that direction.

    1. Emphasise Lifelong Learning and Entrepreneurialism

    Education should equip students with the ability to adapt throughout their lives to rapidly evolving technologies, professions and industries. Fostering the ability to learn, unlearn, and relearn quickly in response to changing demands is essential. A well-rounded education will combine new and established knowledge across subjects and disciplines, building in an assumption that progress is made through interdisciplinary connections and creating space to explore the unknown, what we might not know yet and how we go about finding it.

    The transformation of traditional work through AI and automation necessitates that students are fully equipped to thrive in flexible and diverse job markets. Entrepreneurial thinking should be nurtured by teaching students to identify problems, design innovative solutions, and create value in ways that AI can support but not replicate. Leadership development should focus on fostering decision-making, adaptability, and team-building skills, emphasising the inherently human aspects of leadership.

    We should be aware that jobs and job skills in an AI world are evolving faster than our curricula. As McKinsey estimates, AI will transform or replace up to 800 million jobs globally, and the stakes are too high for incremental change.

    2. Promote Originality and Rigour through Collaboration

    AI’s strength lies in its processing speed and the sheer breadth of existing data and knowledge it can access: it can deliver at exceptional pace what might once have taken hours, days or weeks to discover. This should be viewed as a way to augment human capabilities, not as a crutch. Incorporating project-based, collaborative learning with AI will empower students to create, solve problems, and innovate together while reinforcing their roles as innovators and decision-makers. Working together fosters communication skills, but it can also be used to encourage, promote and reward creativity and divergent thinking that goes beyond conventional knowledge. Students should be encouraged to pursue discovery through critical thinking and verification, exploring unique, self-designed research questions or projects that demand deep thought and personal engagement. These steps will build digital confidence, ensuring students can use AI assuredly, test and understand its limitations, and leverage it as a tool to accelerate and underpin their innovation. Examples include generating content for campaigns or portfolio outputs, using AI to synthesise original data, engaging in Socratic dialogue with AI and its outputs, and challenging and critiquing prompts.

    3. Redesign Assessments

    Traditional assessments, such as essays and multiple-choice tests, are increasingly vulnerable to AI interference, and the value they add is increasingly questionable. To counter this, education should focus on performance-based assessments, such as presentations, debates, and real-time problem-solving, which showcase students’ ability to think critically and adapt quickly. Educators have moved away from such assessment methods in recent years because evidence suggests that biases creep into oral examinations. This needs re-evaluating to judge the balance of risk in light of AI advancements: stereotyping and halo biases can be mitigated, and oral formats can increase student engagement with the assessment and subject matter. What is the greater risk? Biases in oral assessment? Or generating cohorts of graduates with skills to complete unseen, closed-book exams that are likely to be of limited value in a world in which deep and complex information and instruction can be accessed in a fraction of the time through AI? We must revisit these norms and assumptions.

    Collaborative assessments should also be prioritised, using group projects that emphasise teamwork, negotiation, and interpersonal skills. Furthermore, process-oriented evaluation methods should be implemented to assess the learning process itself, including drafts, reflections, and iterative improvements, rather than solely the final outputs. Authenticity in learning outputs can be assured through reflective practices such as journals, portfolios, and presentations that require self-expression and cannot be easily replicated by AI, especially when accompanied by opportunities for students to explain their journey and how their knowledge and approach to a topic have evolved as they learn.

    Achieving such radical change will require a dramatic scaling back of the arms race in assessment, with sharp reductions in multiple, modularised snapshot assessments. The assessment workload for staff and students must shift toward formative and more authentic assessments with in-built points of reflection. To offset the cost of more labour-intensive assessments, programme-wide assessment should be considered.

    4. Encourage understanding of the impact of AI on society, resilience and adaptability

    AI will accentuate the societal impact of and concerns about issues such as bias, privacy, and accountability. Utilising AI in teaching and assessment must build an expectation that students and graduates have enquiring and sceptical mindsets, ready to seek further validation and assurance about facts as they are presented: how they were reached, and what data was accessed and how. Students need to be prepared and ready to unlearn and rebuild. This will require resilience and the ability to cope with failure, uncertainty, and ambiguity. A growth mindset, valuing continuous learning over static achievement, will help by enhancing their ability to adapt to evolving circumstances. Simulated scenario planning for real-world application of learning will help equip students with the skills to navigate AI-disrupted workplaces and industries successfully.

    The new kid on the block, DeepSeek, is an open-source, low-cost reasoning model (appearing to beat OpenAI’s o1, which is neither open-source nor free) with the benefit that it sets out its ‘thinking’ step by step, helpful for learning and for demonstrating learning. It is not, however, able to access external reports critical of the Chinese state, showing de facto that generative AI models are wholly dependent on the large language data on which they are trained. Students need to understand this fully, and its implications.

    Navigating these wide-ranging challenges demands robust support for those shaping the student experience—educators, mentors, and assessors. They remain the heart of higher learning, guiding students through an era of unprecedented change. Yet, bridging the gap between established and emerging practices requires more than just adaptation; it calls for a transformation in how we approach learning itself. To thrive in an AI-integrated future, educators must not only enhance their own AI literacy but also foster open, critical dialogues about its ethical and practical dimensions. In this evolving landscape, everyone—students and educators alike—must embrace a shared journey of learning. The traditional role of the academic as the sole expert must give way to a more collaborative, inquiry-driven model. Only by reimagining the way we teach and learn can we ensure that AI serves as a tool for empowerment rather than a force for division.

  • Centralising assessment doesn’t mean standardising pedagogy: Opinion – Campus Review

    Adopting this approach has to be flexible and take into account different modalities used to assess students’ work, according to Piero Tintori

    Most universities dream of a future that embraces digital assessment and exams, but the journey to get there is complex and not universally supported.


  • Student-created book reviews inspire a global reading culture

    Key points:

    When students become literacy influencers, reading transforms from a classroom task into a global conversation.

    When teens take the mic

    Recent studies show that reading for pleasure among teens is at an all-time low. According to the National Assessment of Educational Progress (NAEP), only 14 percent of U.S. students read for fun almost every day–down from 31 percent in 1984. In the UK, the National Literacy Trust reports that just 28 percent of children aged 8 to 18 said they enjoyed reading in their free time in 2023.

    With reading engagement in crisis, one group of teens decided to flip the narrative–by turning on their cameras. What began as a simple classroom project to encourage reading evolved into a movement that amplified student voices, built confidence, and connected learners across cultures.

    Rather than writing traditional essays or book reports, my students were invited to create short video book reviews of their favorite titles–books they genuinely loved, connected with, and wanted others to discover. The goal? To promote reading in the classroom and beyond. The result? A library of student-led recommendations that brought books–and readers–to life.

    Project overview: Reading, recording, and reaching the world

    As an ESL teacher, I’ve always looked for ways to make literacy feel meaningful and empowering, especially for students navigating a new language and culture. This video review project began with a simple idea: Let students choose a book they love, and instead of writing about it, speak about it. The assignment? Create a short, personal, and authentic video to recommend the book to classmates–and potentially, to viewers around the world.

    Students were given creative freedom to shape their presentations. Some used editing apps like Filmora9 or Canva, while others recorded in one take on a smartphone. I offered a basic outline–include the book’s title and author, explain why you loved it, and share who you’d recommend it to–but left room for personal flair.

    What surprised me most was how seriously students took the project. They weren’t just completing an assignment–they were crafting their voices, practicing communication skills, and taking pride in their ability to share something they loved in a second language.

    Student spotlights: Book reviews with heart, voice, and vision

    Each student’s video became more than a book recommendation–it was an expression of identity, creativity, and confidence. With a camera as their platform, they explored their favorite books and communicated their insights in authentic, impactful ways.

    Mariam ElZeftawy: The Fault in Our Stars by John Green
    Watch Mariam’s Video Review

    Mariam led the way with a polished and emotionally resonant video review of John Green’s The Fault in Our Stars. Using Filmora9, she edited her video to flow smoothly while keeping the focus on her heartfelt reflections. Mariam spoke with sincerity about the novel’s themes: love, illness, and the fragility of life. She communicated them in a way that was both thoughtful and relatable. Her work demonstrated not only strong literacy skills but also digital fluency and a growing sense of self-expression.

    Dana: Dear Tia by Maria Zaki
    Watch Dana’s Video Review

    In one of the most touching video reviews, Dana, a student who openly admits she’s not an avid reader, chose to spotlight “Dear Tia,” written by Maria Zaki, her best friend’s sister. The personal connection to the author didn’t just make her feel seen; it made the book feel more real, more urgent, and worth talking about. Dana’s honest reflection and warm delivery highlight how personal ties to literature can spark unexpected enthusiasm.

    Farah Badawi: Utopia by Ahmed Khaled Towfik
    Watch Farah’s Video Review

    Farah’s confident presentation introduced her classmates to Utopia, a dystopian novel by Egyptian author Ahmed Khaled Towfik. Through her review, she brought attention to Arabic literature, offering a perspective that is often underrepresented in classrooms. Farah’s choice reflected pride in her cultural identity, and her delivery was clear, persuasive, and engaging. Her video became more than a review–it was a form of cultural storytelling that invited her peers to expand their literary horizons.

    Rita Tamer: Frostblood by Elly Blake
    Watch Rita’s Video Review

    Rita’s review of Frostblood, a fantasy novel by Elly Blake, stood out for its passionate tone and concise storytelling. She broke down the plot with clarity, highlighting the emotional journey of the protagonist while reflecting on themes like power, resilience, and identity. Rita’s straightforward approach and evident enthusiasm created a strong peer-to-peer connection, showing how even a simple, sincere review can spark curiosity and excitement about reading.

    Literacy skills in action

    Behind each of these videos lies a powerful range of literacy development. Students weren’t just reviewing books–they were analyzing themes, synthesizing ideas, making connections, and articulating their thoughts for an audience. By preparing for their recordings, students learned how to organize their ideas, revise their messages for clarity, and reflect on what made a story impactful to them personally.

    Speaking to a camera also encouraged students to practice intonation, pacing, and expression–key skills in both oral language development and public speaking. In multilingual classrooms, these skills are often overlooked in favor of silent writing tasks. But in this project, English Learners were front and center, using their voices–literally and figuratively–to take ownership of language in a way that felt authentic and empowering.

    Moreover, the integration of video tools meant students had to think critically about how they presented information visually. From editing with apps like Filmora9 to choosing appropriate backgrounds, they were not just absorbing content, they were producing and publishing it, embracing their role as creators in a digital world.

    Tips for teachers: Bringing book reviews to life

    This project was simple to implement and required little more than student creativity and access to a recording device. Here are a few tips for educators who want to try something similar:

    • Let students choose their own books: Engagement skyrockets when they care about what they’re reading.
    • Keep the structure flexible: A short outline helps, but students thrive when given room to speak naturally.
    • Offer tech tools as optional, not mandatory: Some students enjoyed using Filmora9 or Canva, while others used the camera app on their phone.
    • Focus on voice and message, not perfection: Encourage students to focus on authenticity over polish.
    • Create a classroom premiere day: Let students watch each other’s videos and celebrate their peers’ voices.

    Literacy is personal, public, and powerful

    This project proved what every educator already knows: When students are given the opportunity to express themselves in meaningful ways, they rise to the occasion. Through book reviews, my students weren’t just practicing reading comprehension, they were becoming speakers, storytellers, editors, and advocates for literacy.

    They reminded me and will continue to remind others that when young people talk about books in their own voices, with their personal stories woven into the narrative, something beautiful happens: Reading becomes contagious.


  • Otus Wins Gold Stevie® Award for Customer Service Department of the Year

    CHICAGO, IL (GLOBE NEWSWIRE) — Otus, a leading provider of K-12 student data and assessment solutions, has been awarded a prestigious Gold Stevie® Award in the category of Customer Service Department of the Year at the 2025 American Business Awards®. This recognition celebrates the company’s unwavering commitment to supporting educators, students, and families through exceptional service and innovation.

    In addition to the Gold award, Otus also earned two Silver Stevie® Awards: one for Company of the Year – Computer Software – Medium Size, and another honoring Co-founder and President Chris Hull as Technology Executive of the Year.

    “It is an incredible honor to be recognized, but the real win is knowing our work is making a difference for educators and students,” said Hull. “As a former teacher, I know how difficult it can be to juggle everything that is asked of you. At Otus, we focus on building tools that save time, surface meaningful insights, and make student data easier to use—so teachers can focus on what matters most: helping kids grow.”

    The American Business Awards®, now in their 23rd year, are the premier business awards program in the United States, honoring outstanding performances in the workplace across a wide range of industries. The competition receives more than 12,000 nominations every year. Judges selected Otus for its outstanding 98.7% customer satisfaction with chat interactions, and exceptional 89% gross retention in 2024. They also praised the company’s unique blend of technology and human touch, noting its strong focus on educator-led support, onboarding, data-driven product evolution, and professional development.

    “We believe great support starts with understanding the realities educators face every day. Our Client Success team is largely made up of former teachers and school leaders, so we speak the same language. Whether it’s during onboarding, training, or day-to-day communication, we’re here to help districts feel confident and supported. This recognition is a reflection of how seriously we take that responsibility and energizes us to keep raising the bar,” said Phil Collins, Ed.D., Chief Customer Officer at Otus.

    Otus continues to make significant strides in simplifying teaching and learning by offering a unified platform that integrates assessment, data, and instruction—all in one place. Otus has supported over 1 million students nationwide by helping educators make data-informed decisions, monitor progress, and personalize learning. These honors reflect the company’s growth, innovation, and steadfast commitment to helping school communities succeed.

    About Otus

    Otus, an award-winning edtech company, empowers educators to maximize student performance with a comprehensive K-12 assessment, data, and insights solution. Committed to student achievement and educational equity, Otus combines student data with powerful tools that provide educators, administrators, and families with the insights they need to make a difference. Built by teachers for teachers, Otus creates efficiencies in data management, assessment, and progress monitoring to help educators focus on what matters most—student success. Today, Otus partners with school districts nationwide to create informed, data-driven learning environments. Learn more at Otus.com.

    Stay connected with Otus on LinkedIn, Facebook, X, and Instagram.

  • New (old) models of teaching and assessment

    On the face of it, saying that if we stopped teaching we would not need examinations sounds crazy.

    But it is not so hard to think of examples of rigorous assessment that do not entail examinations in the sense of written responses to a set of predetermined questions.

    For example, institutions regularly award PhDs to candidates who successfully demonstrate their grasp of a subject and associated skills, without requiring them to sit a written examination paper. The difference of course is that PhD students are not taught a fixed syllabus.

    The point of a PhD thesis is to demonstrate a unique contribution to knowledge of some kind. And as it is unique then it is not possible to set examination questions in advance to test it.

    What are we trying to assess?

    If written examinations are inappropriate for PhDs, then why are they the default mode of assessment for undergraduate and taught postgraduate students? The clue, of course, is in the word “taught”. If the primary intended learning outcomes of a course of study require all students to acquire the same body of knowledge and skills, as taught in the course, to the same level, then written examinations are a logical and efficient institutional response.

    But surely what we want as students, teachers, employers, professional bodies and funding bodies is graduates who are not just able to reproduce old knowledge and select solutions to a problem from a repertoire of previously learned responses? So why does so much undergraduate and postgraduate education emphasise teaching examinable knowledge and skills rather than developing more autonomous learners capable of constructing their own knowledge?

It is not true that learners lack the motivation and ability to be autodidacts – the evidence of my young grandchildren acquiring complex cognitive skills (spoken language) and motor abilities (walking and running) suggests we have all done it in the past. And the comprehensive knowledge of team players and team histories exhibited by football fans, and the ease and confidence with which some teenagers can strip down and reassemble a motorcycle engine, suggest that autodidacticism is not confined to our early years.

    An example from design

Is it feasible, practical or economic to run courses that offer undergraduates an educational framework within which to pursue and develop personal learning goals, akin to a PhD but at a less advanced level? My own experience suggests it is. I studied at undergraduate level for four years, at the end of which I was awarded an honours degree. During the entire four years there were no written examinations.

    I was just one of many art and design students following programmes of study regulated and approved at a national level in the UK by the Council for National Academic Awards (CNAA).

    According to the QAA Art and Design subject benchmark statement:

    Learning in art and design stimulates the development of an enquiring, analytical and creative approach, and develops entrepreneurial capabilities. It also encourages the acquisition of independent judgement and critical self-awareness. Commencing with the acquisition of an understanding of underlying principles and appropriate knowledge and skills, students normally pursue a course of staged development progressing to increasingly independent learning.

Of course, some of the “appropriate knowledge and skills” referred to are subject-specific: for example, sewing techniques, material properties and the history of fashion for fashion design; properties of materials, industrial design history and machining techniques for product design; digital image production and historic stylistic trends in illustration and advertising for graphic design; and so on.

    Each subject has its own set of techniques and knowledge, but a lot of what students learn is determined by lines of enquiry selected by students themselves in response to design briefs set by course tutors. To be successful in their study they must learn to operate with a high degree of independence and self-direction, in many ways similar to PhD students.

    Lessons without teaching

    This high degree of independence and self-direction as learners has traditionally been fostered through an approach that differs crucially from the way most other undergraduate courses are taught.

Art and design courses are organised around a series of questions or provocations called design briefs that must be answered, rather than around a series of answers or topics that must be learned. The learning that takes place is a consequence of activities undertaken by students to answer the design brief. Answers to briefs generated by art and design students still have to be assessed, of course, but because the formal taught components (machining techniques, material properties, design history, etc.) are only incidental to the core intended learning outcomes (creativity, exploration, problem solving), written examinations on these topics would be only marginally relevant.

What is more important on these courses is what students have learned rather than what they have been taught, and a lot of what they have learned has been self-taught, albeit through carefully contrived learning activities and responsive guidance from tutors to scaffold the learning. Art and design students learn how to present their work for assessment through presentation of the designed artefact (i.e. “the answer”), supported by verbal, written and illustrated explanations of the rationale for the final design and the development process that produced it, often shared with their peers in a discussion known as a “crit” (critique). Unlike written examinations, this assessment process is an authentic model of how students’ work will be judged in their future professional practice. It thus helps to develop important workplace skills.

    Could it work for other subjects?

The approach to art and design education described here has been employed globally since the mid-twentieth century. However, aspects of the approach are evident in other subject domains, variously called “problem-based learning”, “project-based learning” and “guided discovery learning”. It has been deployed successfully not only in medical education but also in veterinary sciences, engineering, nursing, mathematics, geography and other disciplines. So why are traditional examinations still the de facto approach to assessment across most higher education disciplines and institutions?

    One significant barrier to adoption is the high cost of studio-based teaching at a time when institutions are under pressure to increase numbers while reducing costs. The diversity of enquiries initiated by art and design students responding to the same design brief requires high levels of personalised learning support, varied resources and diversity of staff expertise.

    Is now the time?

As with other subjects, art and design education has been under attack from the combined forces of politics and market economics. In the face of such trends it might be considered naive to suggest that such an approach should be adopted more widely rather than less. But although these pressures to reduce costs and increase conformity will likely continue and accelerate, there is another significant force at play now in the form of generative AI tools.

These tools have the ability to write essays, but they can also suggest template ideas, solve maths problems and generate original images from a text prompt, all in a matter of seconds. It is now possible to enter an examination question into one of several widely available online generative AIs and receive a rapid response that is detailed, knowledgeable and plausible (if not always entirely accurate). For anyone in doubt about the ability of the current generation of AIs to generate successful answers to examination questions, the record of examinations passed by ChatGPT will make sobering reading.

It is possible that a shift from asking questions (the answers to which the questioner already knows) to presenting learners with authentic problems – assessing their ability to present, explain and justify their responses – is a way through the concerns that AI-generated responses to other forms of assessment present.


  • From Feedback to Feedforward: Using AI-Powered Assessment Flywheel to Drive Student Competency – Faculty Focus


  • However: the curriculum and assessment review


• Professor Sir Chris Husbands was Vice-Chancellor of Sheffield Hallam University between 2016 and 2023 and is now a Director of Higher Futures, working with university leaders on sustainable solutions to institutional challenges.

    Almost everyone has views on the school curriculum. It’s too academic; it’s not academic enough; it’s too crowded; it has major omissions; it’s too subject-dominated; it doesn’t spend enough time on subject depth.  Debates about the curriculum can be wearying: just as everyone has a view on the school curriculum, so almost everyone has views about what should be added to it, though relatively few people have equally forceful ideas about what should be dropped to make room for Latin, or personal finance education, or more civic education and so on.

    One of the achievements of Becky Francis’s interim report on school curriculum and assessment is that it tries to turn most of these essentially philosophical (or at least opinionated) propositions into debates about evidence and effectiveness and to use those conclusions to set out a route to more specific recommendations which will follow later in the year. It’s no small achievement.  As the report says, and as Becky has maintained in interviews, ‘all potential reforms come with trade-offs’ (p 8); the key is to be clear about the nature of those trade-offs so that there can be an open, if essentially political debate about how to weight them.

    The methodology adopted by Becky and her panel points towards an essentially evolutionary approach for both curriculum and assessment reform.  The first half of that quoted sentence on trade-offs is an assertion that ‘our system is not perfect’ (p 8) and of course, no system is. But the report is largely positive about key building blocks of the system, and it proposes that they will remain: the structure of four key stages, which has been in place since the 1980s; the early focus on phonics as the basis of learning to read, which has been a focus of policy since the 2000s; the knowledge-rich, subject-based approach which has been in place for the last decade; and the essentials of the current assessment arrangements with formal testing at the end of Key Stage 2 (age 11), key stage 4 (essentially GCSEs) and post-16 which were established in the 1988 Education Reform Act.

    More directly relevant to higher education, the report’s view is that ‘the A level route is seen as strong, well-respected and widely recognised, and facilitates progression to higher education’ (p 30) and that ‘A-levels provide successful preparation for a three-year degree’ (p 7).  Whilst the review talks about returning to assess ‘whether there are opportunities to reduce the overall volume of assessment at key stage 4’ (p 41), it does not propose doing so for A-level. The underlying message is one of system stability, because ‘many aspects of the current system are working well’ (p 5).

However: one of the most frequently used words in the interim report is, in fact, ‘however’: the word appears 29 times across 37 pages of body text, and that doesn’t include synonyms including ‘but’ (32 appearances), ‘while’ (19 appearances) and a single ‘on the other hand’. Frequently, ‘however’ is used to undercut an initial judgement. The national curriculum has been a success (p 17); ‘[h]owever, excellence is not yet provided for all: persistent gaps remain’. The panel ‘share the widely held ambition to promote high standards. However, in practice, “high standards” currently too often means “high standards for some”’ (p 5).

These ‘however’ formulations have three effects: first, and not unreasonably in an interim report, they defer difficult questions for the final report. The final report promises deep dives ‘to diagnose each subject’s specific issues and explore and test a range of solutions’, and ‘about the specificity, relevance, volume and diversity of content’ (p 42). It is this that will prove very tough for the panel, because it is always the detail that challenges in curriculum change. If the curriculum as a whole is always a focus for energetic debate, individual subjects and their structure invariably arouse very strong passions. The report sets up a future debate here about teacher autonomy, arguing, perhaps controversially in an implied ‘however’, that ‘lack of specificity can, counter-intuitively, contribute to greater curriculum volume, as teachers try to cover all eventualities’ (p 28).

Secondly, and in almost every case, the ‘however’ undercuts the positive systems judgement: ‘the system is broadly working well, and we intend to retain the mainstay of existing arrangements. However, there are opportunities for improvement’ (p 8). It’s a repeated rhetorical device which plays both to broad stability and the need for extensive change, and it suggests that some of the technical challenges are going to rest on value – and so political – judgements about how to balance the competing needs of different groups. Sometimes the complexity of those interests overwhelms the systems judgements. The review’s intention to return to 16-19 questions, ‘with the aim of building on the successes of existing academic and technical pathways, particularly considering [possibly another implied “however”] how best to support learners who do not study A levels or T Levels’ (p 9), is right to focus on the currently excluded, but the problem is often mapping a route through overly rigid structures.

The qualifications system has been better geared to higher attainers, perhaps exemplified by the EBacc [English Baccalaureate] of conventional academic subjects. Although the Panel cites evidence that a portfolio of academic subjects aids access to higher education, ‘there is little evidence to suggest that the EBacc combination [of subjects] per se has driven better attendance to Russell Group universities’ (p 24) – the latter despite the rapid growth of high-tariff universities’ market share over recent years. This issue is linked to one of the most curious aspects of the report from an evidential point of view. It is overwhelmingly positive about T-levels, ‘a new, high-quality technical route for young people who are clear about their intended career destination’ which ‘show great promise’ (p 7). But (‘however’) take-up (2% of learners) has been very poor, and not just because not all 16-year-olds are ‘clear about their intended career pathway’. The next phase of the Review promises to ‘look at how we can achieve the aim of a simpler, clearer offer which provides strong academic and technical/vocational pathways for all’ (p 31). But that ‘simpler, clearer offer’ has defied both technical design and political will for a very long time. If it is to succeed, the review will need to consider approaches which allow combinations of vocational and academic qualifications at 16-19, partly because much higher education is both vocational and academic, and still more because at age 16 most learners do not have an ‘intended career pathway’.

    And thirdly, related to that, the ‘howevers’ unveil a theme which looms over the report, the big challenge for national reform which seeks to deliver excellence for all. Pulling evidence together from across the report tells us that 80% of pupils met the expected standard in the phonics screening check and at age 11, 61% of pupils achieved the expected standards in reading, writing and maths (p 17). Some 40% of young people did not achieve level 2 (a grade 4 or above at GCSE) in English and maths by age 16 (p 30). To simplify: attainment gaps open early; they are not closed by the curriculum and assessment system, and one of the few graphs in the report (p 18) suggests that they are widening, leaving behind a large minority of learners who struggle to access a qualifications system which is not working for them.  As the report says, the requirement to repeat GCSE English and Maths has been especially problematic.  

    The report is thorough, technical and thoughtful; it is evolutionary not revolutionary, and none the worse for that. Curriculum and assessment policy is full of interconnection and unintended consequences.  There are tough challenges in system design to secure excellence and equity, inclusion and attainment, and to address those ‘howevers’. The difficult decisions have been left for the final report. 


  • Cheating matters but redrawing assessment “matters most”


Conversations about students using artificial intelligence to cheat on their exams are masking wider discussions about how to improve assessment, a leading professor has argued.

    Phillip Dawson, co-director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia, argued that “validity matters more than cheating,” adding that “cheating and AI have really taken over the assessment debate.”

    Speaking at the conference of the U.K.’s Quality Assurance Agency, he said, “Cheating and all that matters. But assessing what we mean to assess is the thing that matters the most. That’s really what validity is … We need to address it, but cheating is not necessarily the most useful frame.”

    Dawson was speaking shortly after the publication of a survey conducted by the Higher Education Policy Institute, which found that 88 percent of U.K. undergraduates said they had used AI tools in some form when completing assessments.

But the HEPI report argued that universities should “adopt a nuanced policy which reflects the fact that student use of AI is inevitable,” recognizing that chatbots and other tools “can genuinely aid learning and productivity.”

Dawson agreed. “Assessment needs to change … in a world where AI can do the things that we used to assess,” he said.

    Referencing—citing sources—may be a good example of something that can be offloaded to AI, he said. “I don’t know how to do referencing by hand, and I don’t care … We need to take that same sort of lens to what we do now and really be honest with ourselves: What’s busywork? Can we allow students to use AI for their busywork to do the cognitive offloading? Let’s not allow them to do it for what’s intrinsic, though.”

    It was a “fantasy land” to introduce what he called “discursive” measures to limit AI use, where lecturers give instructions on how AI use may or may not be permitted. Instead, he argued that “structural changes” were needed for assessments.

“Discursive changes are not the way to go. You can’t address this problem of AI purely through talk. You need action. You need structural changes to assessment [and not just a] traffic light system that tells students, ‘This is an orange task, so you can use AI to edit but not to write.’”

    “We have no way of stopping people from using AI if we aren’t in some way supervising them; we need to accept that. We can’t pretend some sort of guidance to students is going to be effective at securing assessments. Because if you aren’t supervising, you can’t be sure how AI was or wasn’t used.”

He said there are three potential outcomes for the impact on grades as AI develops. The first is grade inflation, where people are going to be able to do “so much more against our current standards, so things are just going to grow and grow”; the second is norm referencing, where students are graded on how they perform compared to other students.

    The final option, which he said was preferable, was “standards inflation,” “where we just have to keep raising the standards over time, because what AI plus a student can do gets better and better.”

Overall, the impact of AI on assessments is fundamental, he said, adding, “The times of assessing what people know are gone.”
