Category: Assessment

  • Mental health screeners help ID hidden needs, research finds

    Key points:

    A new DESSA screener to be released for the Fall ’25 school year – designed to be paired with a strength-based student self-report assessment – accurately predicted well-being levels in 70 percent of students, a study finds.

    Researchers at Riverside Insights, creator of research-backed assessments, found that even students with strong social-emotional skills often struggle with significant mental health concerns, challenging the assumption that resilience alone indicates student well-being. The study, which examined outcomes for 254 middle school students across the United States, suggests that combining risk and resilience screening can identify students who would otherwise be missed by traditional approaches.

    “This research validates what school mental health professionals have been telling us for years – that traditional screening approaches miss too many students,” said Dr. Evelyn Johnson, VP of Research & Development at Riverside Insights. “When educators and counselors can utilize a dual approach to identify risk factors, they can pinpoint concerns and engage earlier, and in a targeted way, before concerns become major crises.”

    The study, which found, for example, social skills deficits among students with no identifiable emotional or behavioral concerns, provides the first empirical evidence that considering both risk and resilience can enhance the predictive benefits of screening compared to strengths-based screening alone.

    In the years following COVID, many educators noted a feeling that something was “off” with students, despite DESSA assessments indicating that things were fine.

    “We heard this feedback from lots of different customers, and it really got our team thinking–we’re clearly missing something, even though the assessment of social-emotional skills is critically important and there’s evidence to show the links to better academic outcomes and better emotional well-being outcomes,” Johnson said. “And yet, we’re not tapping something that needs to be tapped.”

    For a long time, if a person displayed no outward or obvious mental health struggles, they were thought to be mentally healthy. In investigating the various theories and frameworks guiding mental health issues, Riverside Insights’ team dug into Dr. Shannon Suldo’s work, which centers on the dual-factor model.

    “What the dual factor approach really suggests is that the absence of problems is not necessarily equivalent to good mental health–there really are these two factors, dual factors, we talk about them in terms of risk and resilience–that really give you a much more complete picture of how a student is doing,” Johnson said.
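
    To make the dual-factor idea concrete, here is a minimal sketch of how pairing a risk score with a resilience score can surface students a strengths-only screener would miss. The scales, cutoffs, and group labels below are illustrative assumptions, not the DESSA SEIR’s actual scoring model.

      # Minimal dual-factor sketch. Scales, cutoffs, and group labels are
      # illustrative assumptions, not the DESSA SEIR's scoring model.
      def dual_factor_group(risk: float, resilience: float,
                            risk_cutoff: float = 60.0,
                            resilience_cutoff: float = 40.0) -> str:
          """Place a student in one of four dual-factor groups.

          Both inputs are assumed to be T-scores (mean 50, SD 10):
          `risk` from a risk screener, `resilience` from a
          strengths-based screener.
          """
          at_risk = risk >= risk_cutoff
          strong_skills = resilience >= resilience_cutoff
          if strong_skills and not at_risk:
              return "complete mental health"
          if strong_skills and at_risk:
              return "at risk despite strong skills"  # invisible to strengths-only screening
          if not at_risk:
              return "vulnerable"
          return "troubled"

      # A strengths-only screener rates these two students identically;
      # the dual-factor view separates them.
      print(dual_factor_group(risk=45, resilience=55))  # complete mental health
      print(dual_factor_group(risk=68, resilience=55))  # at risk despite strong skills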

    “The efficacy associated with this dual-factor approach is encouraging, and has big implications for practitioners struggling to identify risk with limited resources,” said Jim Bowler, general manager of the Classroom Division at Riverside Insights. “Schools told us they needed a way to identify students who might be struggling beneath the surface. The DESSA SEIR ensures no student falls through the cracks by providing the complete picture educators need for truly preventive mental health support.”

    The launch comes as mental health concerns among students reach crisis levels. More than 1 in 5 students considered attempting suicide in 2023, while 60 percent of youth with major depression receive no mental health treatment. With school psychologist-to-student ratios at 1:1065 (recommended 1:500) and counselor ratios at 1:376 (recommended 1:250), schools need preventive solutions that work within existing resources.

    The DESSA SEIR will be available for the 2025-2026 school year.

    This press release originally appeared online.

    eSchool News Staff

    Source link

  • Peer review is broken, and pedagogical research has a fix

    An email pings into my inbox: peer reviewer comments on your submission #1234. I take a breath and click.

    Three reviewers have left feedback on my beloved paper. The first is gentle and constructive, pointing out areas where the work could be tightened up. The second simply provides a list of typos and notes where the grammar is not technically correct. The third is vicious. I stop reading.

    Later that afternoon, I sit in the annual student assessment board for my department. Over a painstaking two hours, we discuss, interrogate, and wrestle with how we, as educators, can improve our feedback practices when we mark student work. We examine the distribution of students’ marks closely, looking out for outliers, errors, or evidence of an ill-pitched assessment. We reflect upon how we can make our written feedback more useful. We suggest thoughtful and innovative ways to make our practice more consistent and clearer.

    It then strikes me how these conversations happen in parallel – peer review sits in one corner of academia, and educational assessment and feedback sits in another. What would happen, I wonder, if we started approaching peer review as a pedagogical problem?

    Peer review as pedagogy

    Peer review is a high-stakes context. We know that we need proper, expert scrutiny of the methodological, theoretical, and analytical claims of research to ensure the quality, credibility, and advancement of what we do and how we do it. However, we also know that there are problems with the current peer review system. As my experience attests, issues including reviewer biases and conflicts, lack of transparency in editorial decision-making, and inconsistencies in the length and depth of reviewer feedback all plague the process. Peer reviewers can be sharp, hostile, and unconstructive. They can focus on the wrong things, be unhelpful in their vagueness, or miss the point entirely. These problems threaten the foundations of research.

    The good news is that we do not have to reinvent the wheel. For decades, people in educational research, or the scholarship of teaching and learning (SoTL), have been grappling both theoretically and empirically with the issue of giving and receiving feedback. Educational research has considered best practices in feedback presentation and content, learner and marker feedback literacies, management of socioemotional responses to feedback, and transparency of feedback expectations. The educational feedback literature is vast and innovative.

    However – curiously – efforts to improve the integrity of peer review don’t typically frame it as a pedagogical problem, one that can borrow insights from the educational literature. This is, I think, a woefully missed opportunity. There are at least four clear initiatives from the educational scholarship that could be a useful starting point in tightening up the rigour of peer review.

    What is feedback for?

    We would rarely mark student work without a clear assessment rubric and standardised assessment criteria. In other words, as educators we wouldn’t sit down to assess students’ work without at least first considering what we have asked them to do. What are the goalposts? What are the outcomes? What are we giving feedback for?

    Rubrics and assessment criteria provide transparent guidelines on what is expected of learners, in an effort to demystify the hidden curriculum of assessment and reduce subjectivity in assessment practice. In contrast, peer reviewers are typically provided with scant information about what to assess manuscripts for, which can lead to inconsistencies between journal aims and scope, reviewer comments, and author expectations.

    Imagine if we had structured journal-specific rubrics, based on specific, predefined criteria that aligned tightly with the journal’s mission and requirements. Imagine if these rubrics guided decision-making and clarified the function of feedback, rather than letting reviewers go rogue with their own understanding of what the feedback is for.
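
    As a sketch of what that could look like in practice, a journal-specific rubric might be nothing more than a small, versioned data structure that reviewers score against and editors aggregate. The criteria, weights, and scale below are invented for illustration; a real journal would derive them from its own aims and scope.

      # Hypothetical journal-specific rubric; criteria, weights, and scale
      # are invented for illustration, not any journal's actual policy.
      RUBRIC = {
          "scale": (1, 5),  # 1 = major concerns, 5 = exemplary
          "criteria": {
              "fit_with_aims_and_scope": 0.15,
              "methodological_rigour": 0.35,
              "theoretical_grounding": 0.20,
              "contribution_to_knowledge": 0.15,
              "clarity_of_reporting": 0.15,
          },
      }

      def weighted_score(ratings: dict[str, int]) -> float:
          """Combine one reviewer's per-criterion ratings into a weighted score."""
          lo, hi = RUBRIC["scale"]
          missing = RUBRIC["criteria"].keys() - ratings.keys()
          if missing:
              raise ValueError(f"unrated criteria: {sorted(missing)}")
          for criterion, rating in ratings.items():
              if criterion not in RUBRIC["criteria"]:
                  raise ValueError(f"unknown criterion: {criterion}")
              if not lo <= rating <= hi:
                  raise ValueError(f"{criterion}: rating {rating} outside {lo}-{hi}")
          return sum(RUBRIC["criteria"][c] * r for c, r in ratings.items())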

    Transparent rubrics and criteria could also bolster the feedback literacy of reviewers and authors. Feedback literacy is an established educational concept, which refers to a student’s capacity to appreciate, make sense of, and act upon their written feedback. Imagine if we approached peer review as an opportunity to develop feedback literacy, and we borrowed from this literature.

    Do we all agree?

    Educational research clearly highlights the importance of moderation and calibration for educators to ensure consistent assessment practices. We would never allow grades to be returned to students without some kind of external scrutiny first.

    Consensus calibration refers to the practice of multiple evaluators working together to ensure consistency in their feedback and to agree upon a shared understanding of relevant standards. There is a clear and robust steer from educational theory that this is a useful exercise to minimise bias and ensure consistency in feedback. This practice is not typically used in peer review.

    Calibration exercises, where reviewers assess the same manuscript and have the opportunity to openly discuss their evaluations, might be a valuable and evidence-based addition to the peer review process. This could be achieved in practice by more open peer review processes, where reviewers can see the comments of others and calibrate accordingly, or through a tighter steer from editors when recruiting new reviewers.

    That is not to say, of course, that reviewers should all agree on the quality of a manuscript. But any effort to consolidate, triangulate, and calibrate feedback can only be useful to authors as they attempt to make sense of it.
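
    Extending the rubric sketch above, calibration support could start from something as simple as flagging the criteria on which reviewers of the same manuscript diverge most, so that discussion (or editorial steer) focuses there. The divergence threshold below is an arbitrary assumption.

      # Toy calibration aid: flag rubric criteria where reviewers of the
      # same manuscript diverge (the spread threshold is an assumption).
      def criteria_to_discuss(reviews: list[dict[str, int]],
                              max_spread: int = 1) -> dict[str, list[int]]:
          flagged = {}
          for criterion in reviews[0]:
              ratings = [review[criterion] for review in reviews]
              if max(ratings) - min(ratings) > max_spread:
                  flagged[criterion] = ratings
          return flagged

      reviews = [
          {"methodological_rigour": 4, "clarity_of_reporting": 2},
          {"methodological_rigour": 2, "clarity_of_reporting": 3},
          {"methodological_rigour": 4, "clarity_of_reporting": 3},
      ]
      print(criteria_to_discuss(reviews))
      # {'methodological_rigour': [4, 2, 4]} -> calibrate here first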

    Is this feedback timely?

    Best practice in educational contexts also supports the adoption of opportunities to provide formative feedback. Formative feedback is feedback that helps learners improve as they are learning, as opposed to summative feedback whereby the merit of a final piece of work is evaluated. In educational contexts, this might look like anything from feedback on drafts through to informal check-in conversations with markers.

    Applying the formative/summative distinction to peer review could help authors improve their work in dialogue with reviewers and editors, rather than receiving purely summative review, which merely judges whether the manuscript is fit for publication. In practice, this can be achieved through the formative feedback offered by registered reports, whereby authors receive peer review and editorial direction before data is collected or accessed, at a time when they can actually make use of it.

    Formative feedback through the adoption of registered reports can provide opportunity for specific and timely suggestions for improving the methodology or research design. By fostering a more developmental and formative approach to peer review, the process can become a tool for advancing knowledge, rather than simply a gatekeeping mechanism.

    Is this feedback useful?

    Finally, the educational concept of feedforward, which focuses on providing guidance for future actions rather than only critiquing past performance, needs to be applied to peer review too. By applying feedforward principles, reviewers can shift their feedback to be more forward-looking, offering tangible, discrete, and actionable suggestions that help the author improve their work in subsequent revisions.

    In peer review, approaching comments with a feedforward framing may transform feedback into a constructive dialogue that motivates people to make their work better by taking actionable steps, rather than a hostile exchange built upon unclear standards and (often) mismatched expectations.

    So the answers to improving some parts of the peer review process are there. We can, if we’re clever, really improve the fairness, consistency, and developmental value of reviewer comments. Structured assessment criteria, calibration, formative feedback mechanisms, and feedforward approaches are just a few strategies that can enhance the integrity of peer review. The answers are intuitive – but they are not yet standard practice in peer review because we typically don’t approach peer review as pedagogy.

    There are some problems that this won’t fix. Peer review relies on the unpaid labour of time-poor academics in an increasingly precarious academia, which complicates efforts to improve the integrity of the process.

    However, there are steps we can take – we now need to think about how they can be achieved in practice. Clarifying the purpose of peer review, tightening up the rigour of feedback, and applying educational interventions to the process would be an important step in fixing peer review for the future of research.

    Source link

  • Student-created book reviews inspire a global reading culture

    Key points:

    When students become literacy influencers, reading transforms from a classroom task into a global conversation.

    When teens take the mic

    Recent studies show that reading for pleasure among teens is at an all-time low. According to the National Assessment of Educational Progress (NAEP), only 14 percent of U.S. students read for fun almost every day–down from 31 percent in 1984. In the UK, the National Literacy Trust reports that just 28 percent of children aged 8 to 18 said they enjoyed reading in their free time in 2023.

    With reading engagement in crisis, one group of teens decided to flip the narrative–by turning on their cameras. What began as a simple classroom project to encourage reading evolved into a movement that amplified student voices, built confidence, and connected learners across cultures.

    Rather than writing traditional essays or book reports, my students were invited to create short video book reviews of their favorite titles–books they genuinely loved, connected with, and wanted others to discover. The goal? To promote reading in the classroom and beyond. The result? A library of student-led recommendations that brought books–and readers–to life.

    Project overview: Reading, recording, and reaching the world

    As an ESL teacher, I’ve always looked for ways to make literacy feel meaningful and empowering, especially for students navigating a new language and culture. This video review project began with a simple idea: Let students choose a book they love, and instead of writing about it, speak about it. The assignment? Create a short, personal, and authentic video to recommend the book to classmates–and potentially, to viewers around the world.

    Students were given creative freedom to shape their presentations. Some used editing apps like Filmora9 or Canva, while others recorded in one take on a smartphone. I offered a basic outline–include the book’s title and author, explain why you loved it, and share who you’d recommend it to–but left room for personal flair.

    What surprised me most was how seriously students took the project. They weren’t just completing an assignment–they were crafting their voices, practicing communication skills, and taking pride in their ability to share something they loved in a second language.

    Student spotlights: Book reviews with heart, voice, and vision

    Each student’s video became more than a book recommendation–it was an expression of identity, creativity, and confidence. With a camera as their platform, they explored their favorite books and communicated their insights in authentic, impactful ways.

    Mariam ElZeftawy: The Fault in Our Stars by John Green
    Watch Mariam’s Video Review

    Mariam led the way with a polished and emotionally resonant video review of John Green’s The Fault in Our Stars. Using Filmora9, she edited her video to flow smoothly while keeping the focus on her heartfelt reflections. Mariam spoke with sincerity about the novel’s themes: love, illness, and the fragility of life. She communicated them in a way that was both thoughtful and relatable. Her work demonstrated not only strong literacy skills but also digital fluency and a growing sense of self-expression.

    Dana: Dear Tia by Maria Zaki
    Watch Dana’s Video Review

    In one of the most touching video reviews, Dana, a student who openly admits she’s not an avid reader, chose to spotlight “Dear Tia,” written by Maria Zaki, her best friend’s sister. The personal connection to the author didn’t just make her feel seen; it made the book feel more real, more urgent, and worth talking about. Dana’s honest reflection and warm delivery highlight how personal ties to literature can spark unexpected enthusiasm.

    Farah Badawi: Utopia by Ahmed Khaled Towfik
    Watch Farah’s Video Review

    Farah’s confident presentation introduced her classmates to Utopia, a dystopian novel by Egyptian author Ahmed Khaled Towfik. Through her review, she brought attention to Arabic literature, offering a perspective that is often underrepresented in classrooms. Farah’s choice reflected pride in her cultural identity, and her delivery was clear, persuasive, and engaging. Her video became more than a review–it was a form of cultural storytelling that invited her peers to expand their literary horizons.

    Rita Tamer: Frostblood
    Watch Rita’s Video Review

    Rita’s review of Frostblood, a fantasy novel by Elly Blake, stood out for its passionate tone and concise storytelling. She broke down the plot with clarity, highlighting the emotional journey of the protagonist while reflecting on themes like power, resilience, and identity. Rita’s straightforward approach and evident enthusiasm created a strong peer-to-peer connection, showing how even a simple, sincere review can spark curiosity and excitement about reading.

    Literacy skills in action

    Behind each of these videos lies a powerful range of literacy development. Students weren’t just reviewing books–they were analyzing themes, synthesizing ideas, making connections, and articulating their thoughts for an audience. By preparing for their recordings, students learned how to organize their ideas, revise their messages for clarity, and reflect on what made a story impactful to them personally.

    Speaking to a camera also encouraged students to practice intonation, pacing, and expression–key skills in both oral language development and public speaking. In multilingual classrooms, these skills are often overlooked in favor of silent writing tasks. But in this project, English Learners were front and center, using their voices–literally and figuratively–to take ownership of language in a way that felt authentic and empowering.

    Moreover, the integration of video tools meant students had to think critically about how they presented information visually. From editing with apps like Filmora9 to choosing appropriate backgrounds, they were not just absorbing content, they were producing and publishing it, embracing their role as creators in a digital world.

    Tips for teachers: Bringing book reviews to life

    This project was simple to implement and required little more than student creativity and access to a recording device. Here are a few tips for educators who want to try something similar:

    • Let students choose their own books: Engagement skyrockets when they care about what they’re reading.
    • Keep the structure flexible: A short outline helps, but students thrive when given room to speak naturally.
    • Offer tech tools as optional, not mandatory: Some students enjoyed using Filmora9 or Canva, while others used the camera app on their phone.
    • Focus on voice and message, not perfection: Encourage students to focus on authenticity over polish.
    • Create a classroom premiere day: Let students watch each other’s videos and celebrate their peers’ voices.

    Literacy is personal, public, and powerful

    This project proved what every educator already knows: When students are given the opportunity to express themselves in meaningful ways, they rise to the occasion. Through book reviews, my students weren’t just practicing reading comprehension, they were becoming speakers, storytellers, editors, and advocates for literacy.

    They reminded me – and will continue to remind others – that when young people talk about books in their own voices, with their personal stories woven into the narrative, something beautiful happens: Reading becomes contagious.


    Source link

  • Otus Wins Gold Stevie® Award for Customer Service Department of the Year

    CHICAGO, IL (GLOBE NEWSWIRE) — Otus, a leading provider of K-12 student data and assessment solutions, has been awarded a prestigious Gold Stevie® Award in the category of Customer Service Department of the Year at the 2025 American Business Awards®. This recognition celebrates the company’s unwavering commitment to supporting educators, students, and families through exceptional service and innovation.

    In addition to the Gold award, Otus also earned two Silver Stevie® Awards: one for Company of the Year – Computer Software – Medium Size, and another honoring Co-founder and President Chris Hull as Technology Executive of the Year.

    “It is an incredible honor to be recognized, but the real win is knowing our work is making a difference for educators and students,” said Hull. “As a former teacher, I know how difficult it can be to juggle everything that is asked of you. At Otus, we focus on building tools that save time, surface meaningful insights, and make student data easier to use—so teachers can focus on what matters most: helping kids grow.”

    The American Business Awards®, now in their 23rd year, are the premier business awards program in the United States, honoring outstanding performances in the workplace across a wide range of industries. The competition receives more than 12,000 nominations every year. Judges selected Otus for its outstanding 98.7% customer satisfaction rate on chat interactions and its exceptional 89% gross retention in 2024. They also praised the company’s unique blend of technology and human touch, noting its strong focus on educator-led support, onboarding, data-driven product evolution, and professional development.

    “We believe great support starts with understanding the realities educators face every day. Our Client Success team is largely made up of former teachers and school leaders, so we speak the same language. Whether it’s during onboarding, training, or day-to-day communication, we’re here to help districts feel confident and supported. This recognition is a reflection of how seriously we take that responsibility and energizes us to keep raising the bar,” said Phil Collins, Ed.D., Chief Customer Officer at Otus.

    Otus continues to make significant strides in simplifying teaching and learning by offering a unified platform that integrates assessment, data, and instruction—all in one place. Otus has supported over 1 million students nationwide by helping educators make data-informed decisions, monitor progress, and personalize learning. These honors reflect the company’s growth, innovation, and steadfast commitment to helping school communities succeed.

    About Otus

    Otus, an award-winning edtech company, empowers educators to maximize student performance with a comprehensive K-12 assessment, data, and insights solution. Committed to student achievement and educational equity, Otus combines student data with powerful tools that provide educators, administrators, and families with the insights they need to make a difference. Built by teachers for teachers, Otus creates efficiencies in data management, assessment, and progress monitoring to help educators focus on what matters most—student success. Today, Otus partners with school districts nationwide to create informed, data-driven learning environments. Learn more at Otus.com.

    Stay connected with Otus on LinkedIn, Facebook, X, and Instagram.

    eSchool News Staff

    Source link

  • What does it mean if students think that AI is more intelligent than they are?

    The past couple of years in higher education have been dominated by discussions of generative AI – how to detect it, how to prevent cheating, how to adapt assessment. But we are missing something more fundamental.

    AI isn’t just changing how students approach their work – it’s changing how they see themselves. If universities fail to address this, they risk producing graduates who lack both the knowledge and the confidence to succeed in employment and society. Consequently, the value of a higher education degree will diminish.

    In November, a first-year student asked me if ChatGPT could write their assignment. When I said no, they replied: “But AI is more intelligent than me.” That comment has stayed with me ever since.

    If students no longer trust their own ability to contribute to discussions or produce work of value, the implications stretch far beyond academic misconduct. Waning confidence affects motivation, resilience and self-belief, which in turn affect sense of community, assessment grades, and graduate skills.

    I have noticed that few discussions focus on the deeper psychological shift – students’ changing perceptions of their own intelligence and capability. This change is a key antecedent for the erosion of a sense of community, AI use in learning and assessment, and the underdevelopment of graduate skills.

    The erosion of a sense of community

    In 2015, when I began teaching, I would walk into a seminar room and find students talking to one another about how worried they were about the deadline, how boring the lecture was, or how many drinks they had on Wednesday night. Yes, they would sit at the back, not always do the pre-reading, and go quiet for the first few weeks when I asked a question – but they were always happy to talk to one another.

    Fast forward to 2025: campus feels empty, and students come into class and sit alone. Even final-year students who have been together for three years may sit with a “friend” but not really say anything as they stare at their phones. I have a final-year student who is achieving first-class grades, but he admitted he has not been in the library once this academic year and barely knows anyone to talk to. This may not seem like a big thing, but it illustrates the lack of community and relationships formed at university. It is well known that peer-to-peer relationships are one of the biggest influences on attendance and engagement. So when students fail to form networks, it is unsurprising that motivation declines.

    While professional services, students’ union, and support staff continuously offer ways to improve the community, at a time when students are working longer hours through a cost-of-living crisis, we cannot expect them to attend extracurricular academic or non-academic activities. Therefore, timetabled lectures and seminars need to be at the heart of building relationships.

    AI in learning and assessment

    While marking first-year marketing assignments – a subject I’ve taught across multiple universities for a decade – I noticed a clear shift. Typically, I expect a broad range of marks, but this year, students clustered at two extremes: either very high or alarmingly low. The feedback was strikingly similar: “too vague,” “too descriptive,” “missing taught content.”

    I knew some of these students were engaged and capable in class, yet their assignments told a different story. I kept returning to that student’s remark and realised: the students who normally land in the middle – your solid 2:2 and 2:1 cohort – had turned to AI. Not necessarily to cheat, but because they lacked confidence in their own ability. They believed AI could articulate their ideas better than they could.

    The rapid integration of AI into education isn’t just changing what students do – it’s changing what they believe they can do. If students don’t think they can write as well as a machine, how can we expect them to take intellectual risks, engage critically, or develop the resilience needed for the workplace?

    Right now, universities are at a crossroads. We can either design assessments as if nothing has changed, pivot back to closed-book exams to preserve “authentic” academic work, or restructure assessment to empower students, build confidence, and provide something of real value to both learners and employers. Only the third option moves higher education forward.

    Deakin University’s Phillip Dawson has recently argued that we must ensure assessment measures what we actually intend to assess. His point resonated with me.

    AI is here to stay, and it can enhance learning and productivity. Instead of treating it primarily as a threat or retreating to closed-book exams, we need to ask: what do we really need to assess? For years, we have moved away from exams because they don’t reflect real-world skills or accurately measure understanding. That reasoning still holds, but the assessment landscape is shifting again. Instead of focusing on how students write about knowledge, we should be assessing how they apply it.

    Underdevelopment of graduate skills

    If we don’t rethink pedagogy and assessment, we risk producing graduates who are highly skilled at facilitating AI rather than using it as a tool for deeper analysis, problem-solving, and creativity. Employers are already telling us they need graduates who can analyse and interpret data, think critically to solve problems, communicate effectively, show resilience and adaptability, demonstrate emotional intelligence, and work collaboratively.

    But students can’t develop these skills if they don’t believe in their own ability.

    Right now, students are using AI tools for most activities, including online searching, proofreading, answering questions, generating examples, and even writing reflective pieces. I am confident that if I asked first years to write a two-minute speech about why they came to university, the majority would use AI in some way. There is no space – or incentive – for them to illustrate their skill development.

    This semester, I trialled a small intervention after getting fed up with looking at heads down in laptops. I asked my final year students to put laptops and phones on the floor for the first two hours of a four-hour workshop.

    At first, they were visibly uncomfortable – some looked panicked, others bored. But after ten minutes, something changed. They wrote more, spoke more confidently, and showed greater creativity. As soon as they returned to technology, their expressions became blank again. This isn’t about banning AI, but about ensuring students have fun learning and have space to be thinkers, rather than facilitators.

    Confidence-building

    If students’ lack of confidence is driving them to rely on AI to “play it safe”, we need to acknowledge the systemic problem. Confidence is an academic issue. Confidence underpins everything in the student’s experience: classroom engagement, sense of belonging, motivation, resilience, critical thinking, and, of course, assessment quality. Universities know this, investing in mentorship schemes, support services, and initiatives to foster belonging. But confidence-building cannot be left to professional services alone – it must be embedded into curriculum design and assessment.

    Don’t get me wrong, I am fully aware of the pressures on academic staff, and telling them to improve sense of community, assessment, and graduate skills feels like another time-consuming task. Universities need to recognise that without improving workload planning models to give academics the freedom to focus on and explore pedagogic approaches, we fall into the trap of devaluing the degree.

    In addition, if universities want to stay relevant, they need agile structures that allow academics to test new approaches and respond quickly, just like the “real world”. Academics should not be creating or modifying assessments today that won’t be implemented for another 18 months. Policies designed to ensure quality must also ensure adaptability. Otherwise, higher education will always be playing catch-up – first with AI, then with whatever comes next.

    Will universities continue producing AI-dependent graduates, or will they equip students with the confidence to lead in an AI-driven world?

    Source link

  • Creating learning environments that work for BTEC entrants to higher education

    We know that past learning experiences directly correlate with progress and preparedness for higher education study. But are we to accept that the adverse relationship between students’ entry routes and their outcomes is simply driven by academic performance at university?

    There is evidence that students who enter with vocational qualifications are more likely to drop out or receive a lower degree classification because of poorer academic performance. This lack of progression is alarming, and initiatives designed to increase progression opportunities and support better overall performance remain both a challenge and a strategic priority for the university sector. HESA statistics for the 2021–22 academic year show the “dropout rate” for first-year students with vocational qualifications continues to increase by one percentage point across the sector year on year.

    Furthermore, there remains a consistent four percentage point awarding gap between those with vocational and those with traditional qualifications. Despite their higher dropout and non-progression rates, students progressing from vocational qualifications represent a significant and growing pathway into HE, and many who progress go on to graduate with at least a 2:1.

    A 2022 Nuffield report on the relationship between 16-19 subject choices, higher education choices and graduate outcomes found “…a weakening of the relationship between entry qualifications and outcomes once comparing individuals with similar module scores.” This implies that educators have a significant part to play in ensuring approaches to setting, measuring and enhancing performance are fair and equitable. Specifically, inclusive assessment design should be central to the educational experience, ensuring all students can fulfil their potential irrespective of their route to HE.

    A very particular set of skills

    Ongoing work on student engagement, such as this 2023 framework for inclusive and effective student engagement from QAA, has demonstrated clear benefits from creating communities that build identity and belonging through adopting inclusive approaches, enhancing student engagement, motivation and progression. Applying these principles means recognising that students entering HE from vocational routes like BTECs possess unique skills.

    Through their studies they have experienced hands-on learning and real-world application, giving them practical skills directly relevant to their chosen field. Additionally, they engage in self-directed projects and coursework, fostering independence and time management skills essential for managing university workloads. Many vocational courses offer work placements, providing valuable career insights that foster a professional mind-set from day one. Unlike traditional A levels, BTECs are assessed through coursework and practical assessments, helping students develop strong research, critical writing, and project management skills.

    All of the above combines with a wealth of lived experience – BTEC students often come from diverse educational backgrounds – which enhances these students’ adaptability and resilience. Furthermore, the emphasis on practical achievements and continuous assessment fosters a positive mindset and a sense of belonging and community. These skills provide vocational students with a solid foundation for success in HE. So what are we not getting right?

    Like many other universities, we recognise each cohort is unique and a one-size-fits-all approach may not have sustained impact. Learning, teaching, and assessment design should provide an equitable experience for all students regardless of prior learning experiences and route into HE. We have streamlined our approaches, drawing on evidence of what is “working” to enable us to embed efficient and effective approaches to being intentionally inclusive within assessment design.

    Five ways to inclusion

    It’s early days, but we are already seeing improvements in the number of students passing all modules first time, from a variety of entry routes, through approaches that celebrate and embrace the unique skillsets of all students. Through five interconnected themes we are making steady and sustained progress, exploring inclusive assessment practices and reviewing the narrative of learning.

    Supporting student confidence is foundational to academic success. We have found that developing shared assessment literacies can help students recognise their capabilities and potential. This can directly speak to the unique skillset that students bring from a range of diverse routes: for example, creating Hidden Curriculum Guides that unpack unfamiliar language and concepts, drawing from past experiences to socialise the unknown so that students can feel confident in their understanding and learning journey.

    Embedding effective pedagogical approaches employs a blend of student-centred and humanistic methods to create dynamic and responsive learning environments, tailored to meet the specific needs of students. Evidence-based approaches include empowering students to bridge the gap between theoretical knowledge and practical application, for life-wide learning and preparedness for the journey ahead. These examples not only integrate effective pedagogical approaches but support a range of skillsets, positioning the educational experience around empathy and compassion, developing supportive transition and orientation interventions, and deepening the shared understanding of lived experiences.

    Assessment diversity and timely feedback are crucial. Our commitment to inclusive assessment practices creates space where all students can demonstrate their knowledge and skills effectively. Through a co-created, integrated approach to inclusive assessment, we have produced a set of inclusive assessment and feedback principles: clear, understood, authentic, robust and personalised.

    Creating a sense of belonging is vital for student engagement and retention. Inclusive classroom environments that celebrate diversity and foster community connections help students feel valued and supported. Harnessing the practice elements brings a newfound confidence to the forefront of the learning experience. Flipping the classroom so students have a more meaningful experience creates a sticky campus and a strong sense of togetherness, which particularly suits students who have entered HE via a vocational route. Initiatives such as peer mentoring and collaborative projects have been successful in creating a welcoming and inclusive atmosphere.

    Recognising and valuing the diverse entry backgrounds of students not only enhances learning but also promotes equity and inclusion, drawing on the value of individual learning experiences to enrich the learning journey. We identified the need for targeted support mechanisms that bolster student confidence during the transition to and through HE. Our emphasis on the importance of diverse pedagogical approaches, inclusive assessment practices, and feedback mechanisms provided solid foundations.

    Learning from programme teams about what works to maximise real-world learning from current practice is essential to building trust. Our five-phase approach provides a scaffolding based on our unique learning journey. The challenge remains for us as a sector to address and share knowledge holistically, drawing from evidence-based practice with the aim of enhancing student outcomes. With the growing number of students joining universities from vocational routes, this is an urgent and important issue to address in collegiate partnership with the student body. There is a government push to increase capacity for vocational routes in HE, so if universities are to stay relevant in this space, there is an urgency to find solutions, learning from programme leaders who are passionate and best placed to know their students. Together and collaboratively, we can drive forward real interventions with sustained impact – it matters for student success.

    For more about the authors’ work to create inclusive learning environments see the special editions of Innovative practice in higher education and Pedagogy collating evidence shared at our learning and teaching festivals in 2023 and 2024.

    Source link

  • New (old) models of teaching and assessment

    On the face of it, saying that if we stopped teaching we would not need examinations sounds crazy.

    But it is not so hard to think of examples of rigorous assessment that do not entail examinations in the sense of written responses to a set of predetermined questions.

    For example, institutions regularly award PhDs to candidates who successfully demonstrate their grasp of a subject and associated skills, without requiring them to sit a written examination paper. The difference of course is that PhD students are not taught a fixed syllabus.

    The point of a PhD thesis is to demonstrate a unique contribution to knowledge of some kind. And as the contribution is unique, it is not possible to set examination questions in advance to test it.

    What are we trying to assess?

    If written examinations are inappropriate for PhDs, then why are they the default mode of assessment for undergraduate and taught postgraduate students? The clue, of course, is in the word “taught”. If the primary intended learning outcomes of a course of study require all students to acquire the same body of knowledge and skills, as taught in the course, to the same level, then written examinations are a logical and efficient institutional response.

    But surely what we want as students, teachers, employers, professional bodies and funding bodies is graduates who are not just able to reproduce old knowledge and select solutions to a problem from a repertoire of previously learned responses? So why does so much undergraduate and postgraduate education emphasise teaching examinable knowledge and skills rather than developing more autonomous learners capable of constructing their own knowledge?

    It is not true that learners lack the motivation and ability to be autodidacts – the evidence of my young grandchildren acquiring complex cognitive skills (spoken language) and motor abilities (walking and running) suggests we have all done it in the past. And the comprehensive knowledge of team players and team histories exhibited by football fans, and the ease and confidence with which some teenagers can strip down and reassemble a motorcycle engine, suggest that autodidacticism is not confined to our early years.

    An example from design

    Is it feasible, practical or economic to run courses that offer undergraduates an educational framework within which to pursue and develop personal learning goals, akin to a PhD, but at a less advanced level? My own experience suggests it is. I studied at undergraduate level for four years, at the end of which I was awarded an honours degree. During the entire four years there were no written examinations.

    I was just one of many art and design students following programmes of study regulated and approved at a national level in the UK by the Council for National Academic Awards (CNAA).

    According to the QAA Art and Design subject benchmark statement:

    Learning in art and design stimulates the development of an enquiring, analytical and creative approach, and develops entrepreneurial capabilities. It also encourages the acquisition of independent judgement and critical self-awareness. Commencing with the acquisition of an understanding of underlying principles and appropriate knowledge and skills, students normally pursue a course of staged development progressing to increasingly independent learning.

    Of course some of the “appropriate knowledge and skills” referred to are subject specific, for example sewing techniques, material properties and history of fashion for creating fashion designs; properties of materials, industrial design history and machining techniques for product design; digital image production and historic stylistic trends in illustration and advertising for graphic design, and so on.

    Each subject has its own set of techniques and knowledge, but a lot of what students learn is determined by lines of enquiry selected by students themselves in response to design briefs set by course tutors. To be successful in their study they must learn to operate with a high degree of independence and self-direction, in many ways similar to PhD students.

    Lessons without teaching

    This high degree of independence and self-direction as learners has traditionally been fostered through an approach that differs crucially from the way most other undergraduate courses are taught.

    Art and design courses are organised around a series of questions or provocations called design briefs that must be answered, rather than around a series of answers or topics that must be learned. The learning that takes place is a consequence of activities undertaken by students to answer the design brief. Answers to briefs generated by art and design students still have to be assessed, of course, but because the formal taught components (machining techniques, material properties, design history, etc.) are only incidental to the core intended learning outcomes (creativity, exploration, problem solving), written examinations on these topics would be only marginally relevant.

    What is more important on these courses is what students have learned rather than what they have been taught, and a lot of what they have learned has been self-taught, albeit through carefully contrived learning activities and responsive guidance from tutors to scaffold the learning. Art and design students learn how to present their work for assessment through presentation of the designed artefact (ie. “the answer”), supported by verbal, written and illustrated explanations of the rationale for the final design and the development process that produced it, often shared with their peers in a discussion known as a “crit” (critique). Unlike written examinations, this assessment process is an authentic model of how students’ work will be judged in their future professional practice. It thus helps to develop important workplace skills.

    Could it work for other subjects?

    The approach to art and design education described here has been employed globally since the mid-twentieth century. However, aspects of the approach are evident in other subject domains, variously called “problem based learning”, “project based learning” and “guided discovery learning”. It has been successfully deployed in medical education but also in veterinary sciences, engineering, nursing, mathematics, geography and others. So why are traditional examinations still the de facto approach to assessment across most higher education disciplines and institutions?

    One significant barrier to adoption is the high cost of studio-based teaching at a time when institutions are under pressure to increase numbers while reducing costs. The diversity of enquiries initiated by art and design students responding to the same design brief requires high levels of personalised learning support, varied resources and diversity of staff expertise.

    Is now the time?

    As with other subjects, art and design education has been under attack from the combined forces of politics and market economics. In the face of such trends it might be considered naive to suggest that such an approach should be adopted more widely rather than less. But although these pressures to reduce costs and increase conformity will likely continue and accelerate in the future, there is another significant force at play now in the form of generative AI tools.

    These have the ability to write essays, but they can also suggest template ideas, solve maths problems, and generate original images from a text prompt, all in a matter of seconds. It is possible now to enter an examination question into one of several widely available online generative AIs and to receive a rapid response that is detailed, knowledgeable and plausible (if not always entirely accurate). If anyone is in any doubt about the ability of the current generation of AIs to generate successful examination question answers then the record of examinations passed by ChatGPT will be sobering reading.

    It is possible that a shift from asking questions to which the questioner already knows the answers, to presenting learners with authentic problems that assess their ability to present, explain, and justify their responses, is a way through the concerns that AI-generated responses to other assessment forms present.

    Source link

  • The importance of consequential feedback

    Imagine this: a business student managing a virtual company makes a poor decision, leading to a simulated bankruptcy. Across campus, a medical student adjusts a treatment in a patient simulation and observes improvements in the virtual patient’s condition.

    When students practice in a simulated real-world environment they have access to a rich set of feedback information, including consequential feedback. Consequential feedback provides vital information about the consequences of students’ actions and decisions. Typically, though, in the perennial NSS-driven hand-wringing about improving feedback in higher education, we are thinking only about evaluative feedback information – when educators or peers critique students’ work and suggest improvements.

    There’s no doubt evaluative feedback, especially corrective feedback, is important. But if we’re only talking about evaluative feedback, we are missing whole swathes of invaluable feedback information crucial to preparing graduates for professional roles.

    In a recently published, open access paper in Assessment and Evaluation in Higher Education, we make the case for educators to design for and support students in noticing, interpreting and learning from consequential feedback information.

    What’s consequential feedback?

    Consequential feedback involves noticing the connection between actions and their outcomes (consequences). For example, if we touch a hot stove, we get burned. In this example, noticing the burn is both immediate and obvious. Connecting it to the action of touching the stove is also easy – little interpretation needs to be made. However, there are many cause-effect (action-consequence) sequences embedded in professional practice that are not so easy to connect. Students may need help in noticing the linkages, interpreting them and making corrections to their actions to lead to better consequences in the future.

    For instance, the business student above might decide on a pricing strategy and observe its effect on market share. The simulation speeds up time so students can observe the effects of price change on sales and market share. In real life, observing the consequences of a pricing change might take weeks or months. Through the simulation, learners can experiment with different pricing strategies, making different assumptions about the market, and observing the effects, to build their understanding of how these two variables are linked under different conditions. Critically, they learn the importance of this linkage so they can monitor in the messier, delayed real life situations they might face as a marketing professional.
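
    To see what that feedback loop might look like mechanically, here is a toy sketch of a pricing round: the student acts (sets a price), the simulation returns the consequences (sales, revenue, market share), and the learner’s job is to connect the two. The linear demand model and every constant below are invented for illustration, not taken from any real simulation product.

      # Toy pricing round: action in, consequences out. The linear demand
      # model and all constants are invented for illustration only.
      def simulate_quarter(price: float, market_size: int = 10_000,
                           reference_price: float = 20.0,
                           base_share: float = 0.5,
                           elasticity: float = 1.5) -> dict:
          """Return the consequences of one pricing decision."""
          # Market share falls as price rises above the reference price,
          # and grows as it drops below it (clamped to [0, 1]).
          share = base_share * (1 - elasticity * (price - reference_price) / reference_price)
          share = min(1.0, max(0.0, share))
          units_sold = int(market_size * share)
          return {
              "price": price,
              "units_sold": units_sold,
              "revenue": round(price * units_sold, 2),
              "market_share_pct": round(100 * share, 1),
          }

      # The consequential feedback: two strategies, instantly comparable,
      # where real life would make the student wait weeks or months.
      print(simulate_quarter(price=18.0))  # share grows, revenue 103500.0
      print(simulate_quarter(price=26.0))  # share shrinks, revenue 71500.0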

    Consequential feedback isn’t just theoretical. It is already making an impact in diverse educational fields such as healthcare, business, mathematics and the arts. But the disparate literature we reviewed almost never names this information as consequential feedback. To improve feedback in higher education, we need to be able to talk to educators and students explicitly about this rich font of feedback information. We need a language for it so we can explore how it is distinct from and complementary to evaluative feedback. Naming it allows us to deliberately practice different ways of enhancing it and build evidence about how to teach students to use it well.

    Why does it matter?

    Attending to consequential feedback shifts the focus from external judgments of quality to an internalised understanding of cause and effect. It enables students to experience the results of their decisions and use these insights to refine their practice. Thus, it forms the grist for reflective thinking and a host of twenty-first century skills needed to solve the world’s most pressing problems.

    In “real-life” after university, graduates are unlikely to have a mentor or teacher standing over them offering the kind of evaluative feedback that dominates discussion of feedback in higher education. Instead, they need to be able to learn independently from the consequential feedback readily available in the workplace and beyond. Drawing on consequential feedback information, professionals can continuously learn and adapt their practice to changing contexts. Thus, educators need to design opportunities that simulate professional practices, paying explicit attention to helping students learn from the consequential feedback afforded by these instructional designs.

    How can educators harness it?

    While consequential feedback is powerful, capitalising on it during higher education requires careful design. Here are some strategies for educators to try in their practice:

    Use simulations, role-plays, and projects: Simulations provide a controlled environment where students can explore the outcomes of their actions. For example, in a healthcare setting, students might use patient mannequins or virtual reality tools to practice diagnostic and treatment skills. In a human resources course, students might engage in mediation role plays. In an engineering course, students could design and test products like model bridges or rockets.

    Design for realism: Whenever possible, feedback opportunities should replicate real-world conditions. For instance, a law student participating in a moot court can see how their arguments hold up under cross-examination or a comedy student can see how a real audience responds to their show.

    Encourage reflection: Consequential feedback is most effective when paired with reflection. Educators can prompt students to consider questions such as: What did you do? Why? What happened when you did x? Was y what you expected or wanted? How do these results compare to professional standards? Why did you get that result? What could you change to get the results you want?

    Pair with evaluative feedback: Students may see that they didn’t get the result they wanted but not know how to correct their actions. Consequential feedback doesn’t replace evaluative feedback; it complements it. For example, after a business simulation, an instructor might provide additional guidance on interpreting KPIs or suggest strategies for improvement. This pairing helps students connect outcomes with actionable next steps.

    Shifting the frame

    Focusing on consequential feedback represents a shift in how we think about assessment, feedback, and learning itself. By designing learning experiences that allow students to act and observe the natural outcomes of their actions, we create opportunities for deeper, more meaningful engagement in the learning process. As students study the impact of their actions, they learn to take responsibility for their choices. This approach fosters the problem-solving, adaptability, independence, and professional and social responsibility they’ll need throughout their lives.

    A key question educators should be asking is: how can I help students recognise and learn from the outcomes of their actions? The answer lies in designing for and highlighting consequential feedback.

    Source link

  • Becoming a professional services researcher in HE – making the train tracks converge

    Becoming a professional services researcher in HE – making the train tracks converge

    by Charlotte Verney

    This blog builds on my presentation at the BERA ECR Conference 2024: at crossroads of becoming. It represents my personal reflections on working in UK higher education (HE) professional services roles while simultaneously gaining research experience through a Masters and Professional Doctorate in Education (EdD).

    Professional services roles within UK HE include recognised professionals from other industries (eg human resources, finance, IT) and HE-specific roles such as academic quality, research support and student administration. Unlike academic staff, professional services staff are not typically required, or expected, to undertake research, yet many do. My own experience spans roles at six universities over 18 years, delivering administration and policy that support learning, teaching and students.

    Traversing two tracks

    In 2016, at an SRHE Newer Researchers event, I was asked to identify a metaphor to reflect my experience as a practitioner researcher. I chose the image of two train tracks, as I have often felt that I have been on two development tracks simultaneously – one building professional experience and expertise, the other developing research skills and experience. These tracks ran in parallel, but never at the same pace, occasionally meeting on a shared project or assignment, and then continuing on their separate routes. I use this metaphor to share my experiences, across three phases, of becoming a professional services researcher.

    Becoming research-informed: accelerating and expanding my professional track

    The first phase was filled with opportunities; on my professional track I gained a breadth of experience, a toolkit of management and leadership skills and a portfolio of successful projects, and built a strong network through professional associations (eg AHEP). After three years, I started my research track with a master’s in international higher education. Studying felt separate from my day job in academic quality and policy, but the assignments gave me opportunities to bring the tracks together, using research and theory to inform my practice – for example, exploring the theoretical literature underpinning approaches to assessment whilst my institution was revising its own approach to assessing resits. I felt like a research-informed professional, and this positively impacted my professional work, accelerating and expanding my experience.

    Becoming a doctoral researcher: long distance, slow speed

    The second phase was more challenging. My doctoral journey was long, taking nine years with two breaks. Like many part-time doctoral students, I struggled with balance, support, and unexpected personal and professional pressures, and I found it unsettling to be simultaneously an expert in my professional context yet a novice in research. I feared failure, and feared damaging my professional credibility as I found my voice in a research space.

    What kept me going, balancing the two tracks, was building my own research support network and my researcher identity. Some of the ways I did this were through Zoom calls with EdD peers for moral support, joining the Society for Research into Higher Education to find my place in the research field, and joining the editorial team of a practitioner journal to build my confidence in academic writing.

    Becoming a professional services researcher: making the tracks converge

    Having completed my doctorate in 2022, I’m now actively trying to bring my professional and research tracks together. Without a roadmap, I’ve started in my comfort zone, sharing my doctoral research in ‘safe’ policy and practitioner spaces, where I thought my findings could have the biggest impact. I collaborated with EdD peers to tackle the daunting task of publishing my first article. I’ve drawn on my existing professional networks (ARC, JISC, QAA) to establish new research initiatives related to my current practice in managing assessment. I’ve made connections with fellow professional services researchers along my journey, and have established an online network to bring us together.

    Key takeaways for professional services researchers

    Bringing my professional experience and research tracks together has not been without challenges, but I am really positive about my journey so far, and for the potential impact professional services researchers could have on policy and practice in higher education. If you are on your own journey of becoming a professional services researcher, my advice is:

    • Make time for activities that build your research identity
    • Find collaborators and a community
    • Use your professional experience and networks
    • It’s challenging, but rewarding, so keep going!

    Charlotte Verney is Head of Assessment at the University of Bristol. Charlotte is an early career researcher in higher education research and a leader within higher education professional services. Her primary research interests are in the changing nature of administrative work within universities, using research approaches to solve professional problems in higher education management, and using creative and collaborative approaches to research. Charlotte advocates for making the academic research space more inclusive for early career and professional services researchers. She is co-convenor of the SRHE Newer Researchers Network and has established an online network for higher education professional services staff engaged with research.

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education

    Source link

  • Alternatives to the essay can be inclusive and authentic

    Alternatives to the essay can be inclusive and authentic

    I lead our largest optional final-year module – Crime, Justice and the Sex Industry – with 218 registered students for the 24–25 academic year.

    That is a lot of students to assess.

    For that module, in that context, I was looking for an assessment that is inclusive, authentic, and hopefully enjoyable to write.

    I also wanted to help students become confident writers, who make writing a regular practice with ongoing revision.

    Inspired by the wonderful Katie Tonkiss at Aston University, I devised a letter assessment for our students.

    This was based on many different pedagogical considerations, and the acute need to teach students how to hold competing and conflicting harms and needs in tension. I consider the sex industry within the broader context of violence against women and girls.

    The sex industry, sexual exploitation, and violence against women and girls are brutal and traumatic topics that can incite divisive responses.

    Now more than ever, we need to be able to deal well with differences, to negotiate, to encourage, to reflect, and to try and move discussions forward, as opposed to shutting them down.

    Their direction and pace

    During the pandemic, I designed my module around a non-linear pedagogy – giving students the power to navigate teaching resources in a direction and at a pace of their choosing.

    This approach embeds strong EDI principles, and was strongly informed by my own dyslexia. I recognised that students often have constraints on their time and energy levels that mean they need to engage with learning in different patterns during different weeks – disclosing that they “binge watch” lecture videos or podcasts, or focus heavily on texts, during certain weeks to block out their time.

    The approach also honours the principles of trauma-informed teaching, empowering students to navigate sensitive topics of gender-based and sexual violence.

    As I argue here with Lisa Anderson, students are now learning in a post-pandemic context with differing expectations, accessibility needs, and barriers including paid work responsibilities.

    The “full-time student” is now something of an anachronism, and education must meet this new reality – according to the 2023 HEPI/Advance HE Student Academic Experience Survey, there are now more students in paid employment than not.

    We have to meet students where they are, and, presently, that is in a difficult place as many students struggle with the cost of living and the battle to “have it all”.

    Students may not be asked to write exam answers or essays in their post-university life, but they will certainly be expected to write persuasively and convincingly, engaging with multiple viewpoints and sitting with their own discomfort.

    This may take the form of webpage outputs, summaries, policy briefings, strategy documents, emails to stakeholders, or campaign materials. As such, students are strongly encouraged to think about the letter from day one of the semester, and to consider who their recipient will be.

    They are told that it is easier to write such a letter to somebody with an opposing viewpoint – laying out their case in a respectful, warm and supportive way to try and progress the discussion. Students are also encouraged to acknowledge their own positionality, and to share this if desirable, including whether they can identify a thinker, document or moment that changed their position.

    Working towards change

    One example is a student who holds a position influenced by their faith writing their letter to a faith leader or family member, acknowledging that they respect their beliefs, but strongly endorsing an approach that places harm reduction and safety first. The letter finds a place of agreement and builds from there, accepting that working towards change can be a long process.

    Another example is a student who holds sex industry abolitionist views, writing to a sex worker, expressing concern and solidarity with the multiple forms of harm, stigma and violence they have experienced, including institutional violence.

    They consider how the law itself facilitates the context that makes violence more likely to occur. This is particularly pertinent at the moment as we experience a fresh wave of digital “me too” and high-profile cases of sexual violence and victim-blaming.

    In this way, students are taught to examine different kinds of documents and evidence: legal and policy texts, charity briefings and statements, journal articles, books, reports, documentaries, global sex worker grassroots initiatives, news reports, social media campaigns and footage, art, literature, and more.

    By engaging with different types of sources, we challenge the idea that academic material sits at the top of the knowledge hierarchy, and we platform voices that often go unheard, including those of sex workers globally.

    The students cross-reference resources, and identify forms of harm, violence and discrimination that may not make it into official narratives. This also encourages students to be active members of our community, contributing to each workshop either verbally or digitally, in real time or asynchronously, via our class-wide Google Doc.

    Students are also taught that it is OK not to have the definitive answer, and to instead ask the recipient to help them further their knowledge. They are also taught that it is OK to change their position and recommendations depending on what evidence they encounter.

    Above all, they are taught that two things can be true at the same time: something might be harmful, and the response to it awful too.

    Students responded overwhelmingly in favour of this approach, and many expressed a new-found love of writing and reading. Engaging with many different mediums, including podcasts, tweets, reels, history talks and art exhibitions, gave them confidence in their reading and study skills.

    Putting choice and enjoyment into the curriculum is not about “losing academic rigour”; it is about firing students up for their topics of study, and ensuring they can communicate powerfully to different audiences using different tools.

    Dear me, I wish we had tried this assessment sooner. xoxo

    Source link