Tag: Assessment

  • Student-created book reviews inspire a global reading culture

    Key points:

    When students become literacy influencers, reading transforms from a classroom task into a global conversation.

    When teens take the mic

    Recent studies show that reading for pleasure among teens is at an all-time low. According to the National Assessment of Educational Progress (NAEP), only 14 percent of U.S. students read for fun almost every day–down from 31 percent in 1984. In the UK, the National Literacy Trust reports that just 28 percent of children aged 8 to 18 said they enjoyed reading in their free time in 2023.

    With reading engagement in crisis, one group of teens decided to flip the narrative–by turning on their cameras. What began as a simple classroom project to encourage reading evolved into a movement that amplified student voices, built confidence, and connected learners across cultures.

    Rather than writing traditional essays or book reports, my students were invited to create short video book reviews of their favorite titles–books they genuinely loved, connected with, and wanted others to discover. The goal? To promote reading in the classroom and beyond. The result? A library of student-led recommendations that brought books–and readers–to life.

    Project overview: Reading, recording, and reaching the world

    As an ESL teacher, I’ve always looked for ways to make literacy feel meaningful and empowering, especially for students navigating a new language and culture. This video review project began with a simple idea: Let students choose a book they love, and instead of writing about it, speak about it. The assignment? Create a short, personal, and authentic video to recommend the book to classmates–and potentially, to viewers around the world.

    Students were given creative freedom to shape their presentations. Some used editing apps like Filmora9 or Canva, while others recorded in one take on a smartphone. I offered a basic outline–include the book’s title and author, explain why you loved it, and share who you’d recommend it to–but left room for personal flair.

    What surprised me most was how seriously students took the project. They weren’t just completing an assignment–they were crafting their voices, practicing communication skills, and taking pride in their ability to share something they loved in a second language.

    Student spotlights: Book reviews with heart, voice, and vision

    Each student’s video became more than a book recommendation–it was an expression of identity, creativity, and confidence. With a camera as their platform, they explored their favorite books and communicated their insights in authentic, impactful ways.

    Mariam ElZeftawy: The Fault in Our Stars by John Green
Watch Mariam’s Video Review

Mariam led the way with a polished and emotionally resonant video review of John Green’s The Fault in Our Stars. Using Filmora9, she edited her video to flow smoothly while keeping the focus on her heartfelt reflections. Mariam spoke with sincerity about the novel’s themes of love, illness, and the fragility of life, communicating them in a way that was both thoughtful and relatable. Her work demonstrated not only strong literacy skills but also digital fluency and a growing sense of self-expression.

    Dana: Dear Tia by Maria Zaki
    Watch Dana’s Video Review

    In one of the most touching video reviews, Dana, a student who openly admits she’s not an avid reader, chose to spotlight “Dear Tia,” written by Maria Zaki, her best friend’s sister. The personal connection to the author didn’t just make her feel seen; it made the book feel more real, more urgent, and worth talking about. Dana’s honest reflection and warm delivery highlight how personal ties to literature can spark unexpected enthusiasm.

    Farah Badawi: Utopia by Ahmed Khaled Towfik
    Watch Farah’s Video Review

    Farah’s confident presentation introduced her classmates to Utopia, a dystopian novel by Egyptian author Ahmed Khaled Towfik. Through her review, she brought attention to Arabic literature, offering a perspective that is often underrepresented in classrooms. Farah’s choice reflected pride in her cultural identity, and her delivery was clear, persuasive, and engaging. Her video became more than a review–it was a form of cultural storytelling that invited her peers to expand their literary horizons.

    Rita Tamer: Frostblood
    Watch Rita’s Video Review

    Rita’s review of Frostblood, a fantasy novel by Elly Blake, stood out for its passionate tone and concise storytelling. She broke down the plot with clarity, highlighting the emotional journey of the protagonist while reflecting on themes like power, resilience, and identity. Rita’s straightforward approach and evident enthusiasm created a strong peer-to-peer connection, showing how even a simple, sincere review can spark curiosity and excitement about reading.

    Literacy skills in action

    Behind each of these videos lies a powerful range of literacy development. Students weren’t just reviewing books–they were analyzing themes, synthesizing ideas, making connections, and articulating their thoughts for an audience. By preparing for their recordings, students learned how to organize their ideas, revise their messages for clarity, and reflect on what made a story impactful to them personally.

    Speaking to a camera also encouraged students to practice intonation, pacing, and expression–key skills in both oral language development and public speaking. In multilingual classrooms, these skills are often overlooked in favor of silent writing tasks. But in this project, English Learners were front and center, using their voices–literally and figuratively–to take ownership of language in a way that felt authentic and empowering.

Moreover, the integration of video tools meant students had to think critically about how they presented information visually. From editing with apps like Filmora9 to choosing appropriate backgrounds, they were not just absorbing content; they were producing and publishing it, embracing their role as creators in a digital world.

    Tips for teachers: Bringing book reviews to life

    This project was simple to implement and required little more than student creativity and access to a recording device. Here are a few tips for educators who want to try something similar:

    • Let students choose their own books: Engagement skyrockets when they care about what they’re reading.
    • Keep the structure flexible: A short outline helps, but students thrive when given room to speak naturally.
    • Offer tech tools as optional, not mandatory: Some students enjoyed using Filmora9 or Canva, while others used the camera app on their phone.
    • Focus on voice and message, not perfection: Encourage students to focus on authenticity over polish.
    • Create a classroom premiere day: Let students watch each other’s videos and celebrate their peers’ voices.

    Literacy is personal, public, and powerful

This project proved what every educator already knows: When students are given the opportunity to express themselves in meaningful ways, they rise to the occasion. Through book reviews, my students weren’t just practicing reading comprehension; they were becoming speakers, storytellers, editors, and advocates for literacy.

They reminded me, and will continue to remind others, that when young people talk about books in their own voices, with their personal stories woven into the narrative, something beautiful happens: Reading becomes contagious.


  • Otus Wins Gold Stevie® Award for Customer Service Department of the Year

    CHICAGO, IL (GLOBE NEWSWIRE) — Otus, a leading provider of K-12 student data and assessment solutions, has been awarded a prestigious Gold Stevie® Award in the category of Customer Service Department of the Year at the 2025 American Business Awards®. This recognition celebrates the company’s unwavering commitment to supporting educators, students, and families through exceptional service and innovation.

    In addition to the Gold award, Otus also earned two Silver Stevie® Awards: one for Company of the Year – Computer Software – Medium Size, and another honoring Co-founder and President Chris Hull as Technology Executive of the Year.

    “It is an incredible honor to be recognized, but the real win is knowing our work is making a difference for educators and students,” said Hull. “As a former teacher, I know how difficult it can be to juggle everything that is asked of you. At Otus, we focus on building tools that save time, surface meaningful insights, and make student data easier to use—so teachers can focus on what matters most: helping kids grow.”

    The American Business Awards®, now in their 23rd year, are the premier business awards program in the United States, honoring outstanding performances in the workplace across a wide range of industries. The competition receives more than 12,000 nominations every year. Judges selected Otus for its outstanding 98.7% customer satisfaction with chat interactions, and exceptional 89% gross retention in 2024. They also praised the company’s unique blend of technology and human touch, noting its strong focus on educator-led support, onboarding, data-driven product evolution, and professional development.

    “We believe great support starts with understanding the realities educators face every day. Our Client Success team is largely made up of former teachers and school leaders, so we speak the same language. Whether it’s during onboarding, training, or day-to-day communication, we’re here to help districts feel confident and supported. This recognition is a reflection of how seriously we take that responsibility and energizes us to keep raising the bar,” said Phil Collins, Ed.D., Chief Customer Officer at Otus.

    Otus continues to make significant strides in simplifying teaching and learning by offering a unified platform that integrates assessment, data, and instruction—all in one place. Otus has supported over 1 million students nationwide by helping educators make data-informed decisions, monitor progress, and personalize learning. These honors reflect the company’s growth, innovation, and steadfast commitment to helping school communities succeed.

    About Otus

    Otus, an award-winning edtech company, empowers educators to maximize student performance with a comprehensive K-12 assessment, data, and insights solution. Committed to student achievement and educational equity, Otus combines student data with powerful tools that provide educators, administrators, and families with the insights they need to make a difference. Built by teachers for teachers, Otus creates efficiencies in data management, assessment, and progress monitoring to help educators focus on what matters most—student success. Today, Otus partners with school districts nationwide to create informed, data-driven learning environments. Learn more at Otus.com.

    Stay connected with Otus on LinkedIn, Facebook, X, and Instagram.


  • New (old) models of teaching and assessment

    On the face of it, saying that if we stopped teaching we would not need examinations sounds crazy.

    But it is not so hard to think of examples of rigorous assessment that do not entail examinations in the sense of written responses to a set of predetermined questions.

    For example, institutions regularly award PhDs to candidates who successfully demonstrate their grasp of a subject and associated skills, without requiring them to sit a written examination paper. The difference of course is that PhD students are not taught a fixed syllabus.

The point of a PhD thesis is to demonstrate a unique contribution to knowledge of some kind. And because it is unique, it is not possible to set examination questions in advance to test it.

    What are we trying to assess?

    If written examinations are inappropriate for PhDs, then why are they the default mode of assessment for undergraduate and taught postgraduate students? The clue, of course, is in the word “taught”. If the primary intended learning outcomes of a course of study require all students to acquire the same body of knowledge and skills, as taught in the course, to the same level, then written examinations are a logical and efficient institutional response.

    But surely what we want as students, teachers, employers, professional bodies and funding bodies is graduates who are not just able to reproduce old knowledge and select solutions to a problem from a repertoire of previously learned responses? So why does so much undergraduate and postgraduate education emphasise teaching examinable knowledge and skills rather than developing more autonomous learners capable of constructing their own knowledge?

It is not true that learners lack the motivation and ability to be autodidacts – the evidence of my young grandchildren acquiring complex cognitive skills (spoken language) and motor abilities (walking and running) suggests we have all done it in the past. And the comprehensive knowledge of team players and team histories exhibited by football fans, and the ease and confidence with which some teenagers can strip down and reassemble a motorcycle engine, suggest that autodidacticism is not confined to our early years.

    An example from design

Is it feasible, practical or economic to run courses that offer undergraduates an educational framework within which to pursue and develop personal learning goals, akin to a PhD, but at a less advanced level? My own experience suggests it is. I studied at undergraduate level for four years, at the end of which I was awarded an honours degree. During the entire four years there were no written examinations.

    I was just one of many art and design students following programmes of study regulated and approved at a national level in the UK by the Council for National Academic Awards (CNAA).

    According to the QAA Art and Design subject benchmark statement:

    Learning in art and design stimulates the development of an enquiring, analytical and creative approach, and develops entrepreneurial capabilities. It also encourages the acquisition of independent judgement and critical self-awareness. Commencing with the acquisition of an understanding of underlying principles and appropriate knowledge and skills, students normally pursue a course of staged development progressing to increasingly independent learning.

    Of course some of the “appropriate knowledge and skills” referred to are subject specific, for example sewing techniques, material properties and history of fashion for creating fashion designs; properties of materials, industrial design history and machining techniques for product design; digital image production and historic stylistic trends in illustration and advertising for graphic design, and so on.

    Each subject has its own set of techniques and knowledge, but a lot of what students learn is determined by lines of enquiry selected by students themselves in response to design briefs set by course tutors. To be successful in their study they must learn to operate with a high degree of independence and self-direction, in many ways similar to PhD students.

    Lessons without teaching

    This high degree of independence and self-direction as learners has traditionally been fostered through an approach that differs crucially from the way most other undergraduate courses are taught.

    Art and design courses are organised around a series of questions or provocations called design briefs that must be answered, rather than around a series of answers or topics that must be learned. The learning that takes place is a consequence of activities undertaken by students to answer the design brief. Answers to briefs generated by art and design students still have to be assessed of course, but because the formal taught components (machining techniques, material properties, design history, etc.) are only incidental to the core intended learning outcomes (creativity, exploration, problem solving) then written examinations on these topics would be only marginally relevant.

What is more important on these courses is what students have learned rather than what they have been taught, and a lot of what they have learned has been self-taught, albeit through carefully contrived learning activities and responsive guidance from tutors to scaffold the learning. Art and design students learn how to present their work for assessment through presentation of the designed artefact (i.e. “the answer”), supported by verbal, written and illustrated explanations of the rationale for the final design and the development process that produced it, often shared with their peers in a discussion known as a “crit” (critique). Unlike written examinations, this assessment process is an authentic model of how students’ work will be judged in their future professional practice. It thus helps to develop important workplace skills.

    Could it work for other subjects?

The approach to art and design education described here has been employed globally since the mid-twentieth century. However, aspects of the approach are evident in other subject domains, variously called “problem based learning”, “project based learning” and “guided discovery learning”. It has been successfully deployed not only in medical education but also in veterinary sciences, engineering, nursing, mathematics, geography and others. So why are traditional examinations still the de facto approach to assessment across most higher education disciplines and institutions?

    One significant barrier to adoption is the high cost of studio-based teaching at a time when institutions are under pressure to increase numbers while reducing costs. The diversity of enquiries initiated by art and design students responding to the same design brief requires high levels of personalised learning support, varied resources and diversity of staff expertise.

    Is now the time?

As with other subjects, art and design education has been under attack from the combined forces of politics and market economics. In the face of such trends it might be considered naive to suggest that such an approach should be adopted more widely rather than less. But although these pressures to reduce costs and increase conformity will likely continue and accelerate in the future, there is another significant force at play now in the form of generative AI tools.

    These have the ability to write essays, but they can also suggest template ideas, solve maths problems, and generate original images from a text prompt, all in a matter of seconds. It is possible now to enter an examination question into one of several widely available online generative AIs and to receive a rapid response that is detailed, knowledgeable and plausible (if not always entirely accurate). If anyone is in any doubt about the ability of the current generation of AIs to generate successful examination question answers then the record of examinations passed by ChatGPT will be sobering reading.

It is possible that a shift from asking questions (the answers to which the questioner already knows) to presenting learners with authentic problems that assess their ability to present, explain, and justify their responses is a way through the concerns that AI-generated responses to other assessment forms present.


  • From Feedback to Feedforward: Using AI-Powered Assessment Flywheel to Drive Student Competency – Faculty Focus


  • However: the curriculum and assessment review

• Professor Sir Chris Husbands was Vice-Chancellor of Sheffield Hallam University between 2016 and 2023 and is now a Director of Higher Futures, working with university leaders to deliver sustainable solutions to institutional challenges.

    Almost everyone has views on the school curriculum. It’s too academic; it’s not academic enough; it’s too crowded; it has major omissions; it’s too subject-dominated; it doesn’t spend enough time on subject depth.  Debates about the curriculum can be wearying: just as everyone has a view on the school curriculum, so almost everyone has views about what should be added to it, though relatively few people have equally forceful ideas about what should be dropped to make room for Latin, or personal finance education, or more civic education and so on.

One of the achievements of Becky Francis’s interim report on school curriculum and assessment is that it tries to turn most of these essentially philosophical (or at least opinionated) propositions into debates about evidence and effectiveness and to use those conclusions to set out a route to more specific recommendations which will follow later in the year. It’s no small achievement. As the report says, and as Becky has maintained in interviews, ‘all potential reforms come with trade-offs’ (p 8); the key is to be clear about the nature of those trade-offs so that there can be an open, if essentially political, debate about how to weight them.

The methodology adopted by Becky and her panel points towards an essentially evolutionary approach for both curriculum and assessment reform. The first half of that quoted sentence on trade-offs is an assertion that ‘our system is not perfect’ (p 8), and of course no system is. But the report is largely positive about key building blocks of the system, and it proposes that they will remain: the structure of four key stages, which has been in place since the 1980s; the early focus on phonics as the basis of learning to read, which has been a focus of policy since the 2000s; the knowledge-rich, subject-based approach which has been in place for the last decade; and the essentials of the current assessment arrangements, with formal testing at the end of Key Stage 2 (age 11), Key Stage 4 (essentially GCSEs) and post-16, which were established in the 1988 Education Reform Act.

    More directly relevant to higher education, the report’s view is that ‘the A level route is seen as strong, well-respected and widely recognised, and facilitates progression to higher education’ (p 30) and that ‘A-levels provide successful preparation for a three-year degree’ (p 7).  Whilst the review talks about returning to assess ‘whether there are opportunities to reduce the overall volume of assessment at key stage 4’ (p 41), it does not propose doing so for A-level. The underlying message is one of system stability, because ‘many aspects of the current system are working well’ (p 5).

However: one of the most frequently used words in the interim report is, in fact, ‘however’: the word appears 29 times on 37 pages of body text, and that doesn’t include synonyms including ‘but’ (32 appearances), ‘while’ (19 appearances) and a single ‘on the other hand’. Frequently, ‘however’ is used to undercut an initial judgement. The national curriculum has been a success (p 17); ‘[h]owever, excellence is not yet provided for all: persistent gaps remain’. The panel ‘share the widely held ambition to promote high standards. However, in practice, “high standards” currently too often means “high standards for some”’ (p 5).

These ‘however’ formulations have three effects: first, and not unreasonably in an interim report, they defer difficult questions for the final report. The final report promises deep dives ‘to diagnose each subject’s specific issues and explore and test a range of solutions’, and ‘about the specificity, relevance, volume and diversity of content’ (p 42). It’s this which will prove very tough for the panel, because it is always the detail that proves challenging in curriculum change. If the curriculum as a whole is always a focus for energetic debate, individual subjects and their structure invariably arouse very strong passions. The report sets up a future debate here about teacher autonomy, arguing, perhaps controversially in an implied ‘however’, that ‘lack of specificity can, counter-intuitively, contribute to greater curriculum volume, as teachers try to cover all eventualities’ (p 28).

Secondly, and in almost every case, the ‘however’ undercuts the positive systems judgement: ‘the system is broadly working well, and we intend to retain the mainstay of existing arrangements. However, there are opportunities for improvement’ (p 8). It’s a repeated rhetorical device which plays both to broad stability and the need for extensive change, and it suggests that some of the technical challenges are going to rest on value – and so political – judgements about how to balance the competing needs of different groups. Sometimes the complexity of those interests overwhelms the systems judgements. The review’s intention to return to 16-19 questions, ‘with the aim of building on the successes of existing academic and technical pathways, particularly considering [possibly another implied “however”] how best to support learners who do not study A levels or T Levels’ (p 9), is right to focus on the currently excluded, but the problem is often mapping a route through overly rigid structures.

The qualifications system has been better geared for higher attainers, perhaps exemplified by the EBacc [English Baccalaureate] of conventional academic subjects. Although the Panel cites evidence that a portfolio of academic subjects aids access to higher education, ‘there is little evidence to suggest that the EBacc combination [of subjects] per se has driven better attendance to Russell Group universities’ (p 24) – the latter despite the rapid growth of high tariff universities’ market share over recent years. This issue is linked to one of the most curious aspects of the report from an evidential point of view. It is overwhelmingly positive about T Levels, ‘a new, high-quality technical route for young people who are clear about their intended career destination’ which ‘show great promise’ (p 7). But (‘however’) take-up (2% of learners) has been very poor, and not just because not all 16-year-olds are ‘clear about their intended career pathway’. The next phase of the Review promises to ‘look at how we can achieve the aim of a simpler, clearer offer which provides strong academic and technical/vocational pathways for all’ (p 31). But that ‘simpler, clearer offer’ has defied either technical design or political will for a very long time. If it is to succeed, the review will need to consider approaches which allow combinations of vocational and academic qualifications at 16-19, partly because much higher education is both vocational and academic and more because, at age 16, most learners do not have an ‘intended career pathway’.

    And thirdly, related to that, the ‘howevers’ unveil a theme which looms over the report, the big challenge for national reform which seeks to deliver excellence for all. Pulling evidence together from across the report tells us that 80% of pupils met the expected standard in the phonics screening check and at age 11, 61% of pupils achieved the expected standards in reading, writing and maths (p 17). Some 40% of young people did not achieve level 2 (a grade 4 or above at GCSE) in English and maths by age 16 (p 30). To simplify: attainment gaps open early; they are not closed by the curriculum and assessment system, and one of the few graphs in the report (p 18) suggests that they are widening, leaving behind a large minority of learners who struggle to access a qualifications system which is not working for them.  As the report says, the requirement to repeat GCSE English and Maths has been especially problematic.  

    The report is thorough, technical and thoughtful; it is evolutionary not revolutionary, and none the worse for that. Curriculum and assessment policy is full of interconnection and unintended consequences.  There are tough challenges in system design to secure excellence and equity, inclusion and attainment, and to address those ‘howevers’. The difficult decisions have been left for the final report. 


  • Cheating matters but redrawing assessment “matters most”

    Conversations over students using artificial intelligence to cheat on their exams are masking wider discussions about how to improve assessment, a leading professor has argued.

    Phillip Dawson, co-director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia, argued that “validity matters more than cheating,” adding that “cheating and AI have really taken over the assessment debate.”

    Speaking at the conference of the U.K.’s Quality Assurance Agency, he said, “Cheating and all that matters. But assessing what we mean to assess is the thing that matters the most. That’s really what validity is … We need to address it, but cheating is not necessarily the most useful frame.”

    Dawson was speaking shortly after the publication of a survey conducted by the Higher Education Policy Institute, which found that 88 percent of U.K. undergraduates said they had used AI tools in some form when completing assessments.

But the HEPI report argued that universities should “adopt a nuanced policy which reflects the fact that student use of AI is inevitable,” recognizing that chatbots and other tools “can genuinely aid learning and productivity.”

Dawson agreed, arguing that “assessment needs to change … in a world where AI can do the things that we used to assess.”

    Referencing—citing sources—may be a good example of something that can be offloaded to AI, he said. “I don’t know how to do referencing by hand, and I don’t care … We need to take that same sort of lens to what we do now and really be honest with ourselves: What’s busywork? Can we allow students to use AI for their busywork to do the cognitive offloading? Let’s not allow them to do it for what’s intrinsic, though.”

    It was a “fantasy land” to introduce what he called “discursive” measures to limit AI use, where lecturers give instructions on how AI use may or may not be permitted. Instead, he argued that “structural changes” were needed for assessments.

“Discursive changes are not the way to go. You can’t address this problem of AI purely through talk. You need action. You need structural changes to assessment [and not just a] traffic light system that tells students, ‘This is an orange task, so you can use AI to edit but not to write.’”

    “We have no way of stopping people from using AI if we aren’t in some way supervising them; we need to accept that. We can’t pretend some sort of guidance to students is going to be effective at securing assessments. Because if you aren’t supervising, you can’t be sure how AI was or wasn’t used.”

He said there are three potential outcomes for the impact on grades as AI develops. The first is grade inflation, where people are going to be able to do “so much more against our current standards, so things are just going to grow and grow.” The second is norm referencing, where students are graded on how they perform compared to other students.

    The final option, which he said was preferable, was “standards inflation,” “where we just have to keep raising the standards over time, because what AI plus a student can do gets better and better.”

    Over all, the impact of AI on assessments is fundamental, he said, adding, “The times of assessing what people know are gone.”


  • Achieving a 100% Completion Rate for Student Assessment at the University of Charleston

    Seated in beautiful Charleston, West Virginia, the University of Charleston (UC) boasts “a unique opportunity for those who want an exceptional education in a smaller, private setting.” UC provides a unique student experience focused on retention and student success even before students arrive on campus.

Students are offered an opportunity to complete the College Student Inventory (CSI) online through a pre-orientation module. This initiative is reinforced through the student’s Success and Motivation first-year course. University instructors serve as mentors, using the CSI results to act on insights about each individual student’s strengths and opportunities for success through individual review meetings and the strategic support and skill building structured within the course.

    After achieving a 7% increase in retention, Director of Student Success and First-Year Programs Debbie Bannister says administering the CSI each year is non-negotiable. Additionally, the campus has refocused on retention, emphasizing, “Everyone has to realize that they are part of retention, and they’re part of keeping every single student on our campus.”

    UC has reinstated a Retention Committee that utilizes summary information from the CSI to understand the needs of its students. Of particular concern, UC notes that the transfer portal has created additional challenges with upperclassmen, so including a representative from the athletic department on the retention committee has been crucial.

Through this focus on retention and a strong implementation strategy, UC achieves a 100% completion rate for the CSI with its first-year student cohort. Building on the scaffolding provided by early support meetings tied to the CSI insights, first-year instructors are able to refer back to and reinforce the support strategies and goals articulated there throughout the first-year experience. The structure and progression of the course reiterate college preparation skills and resources, building motivation and a growth mindset to persist through college.

    Increase student success through early intervention

    Join institutions such as the University of Charleston by using the College Student Inventory with your incoming students. More than 1,400 institutions have used the CSI, and it’s been taken by more than 2.6 million students nationwide. Learn more about how you can use it to intervene earlier with students and increase student yield.


  • Another way of thinking about the national assessment of people, culture, and environment

    There is a multi-directional relationship between research culture and research assessment.

    Poor research assessment can lead to poor research cultures. The Wellcome Trust survey in 2020 made this very clear.

    Assessing the wrong things (such as a narrow focus on publication indicators), or the right things in the wrong way (such as societal impact rankings based on bibliometrics) is having a catalogue of negative effects on the scholarly enterprise.

    Assessing the assessment

In a similar way, too much research assessment can also lead to poor research cultures. Researchers are among the most heavily assessed professionals in the world. They are assessed for promotion, recruitment, probation, appraisal, tenure, grant proposals, fellowships, and output peer review. Their lives and work are constantly under scrutiny, creating competitive and high-stress environments.

But there is also a logic (Campbell’s Law) that tells us that if we assess research culture it can lead to greater investment in improving it. And it is this logic that the UK Joint HE funding bodies have drawn on in their drive to increase the weighting given to the assessment of People, Culture & Environment in REF 2029. This makes perfect sense: given the evidence that positive and healthy research cultures are a thriving element of Research Excellence, it would be remiss of any Research Excellence Framework not to attempt to assess, and therefore incentivise, them.

    The challenge we have comes back to my first two points. Even assessing the right things, but in the wrong way, can be counterproductive, as may increasing the volume of assessment. Given research culture is such a multi-faceted concept, the worry is that the assessment job will become so huge that it quickly becomes burdensome, thus having a negative impact on those research cultures we want to improve.

    It ain’t what you do, it’s the way that you do it

    Just as research culture is not so much about the research that you do but the way that you do it, so research culture assessment should concern itself not so much with the outcomes of that assessment but with the way the assessment takes place.

    This is really important to get right.

    I’ve argued before that research culture is a hygiene factor. Most dimensions of culture relate to standards that it’s critically important we all get right: enabling open research, dealing with misconduct, building community, supporting collaboration, and giving researchers the time to actually do research. These aren’t things for which we should offer gold stars but basic thresholds we all should meet. And to my mind they should be assessed as such.

    Indeed this is exactly how the REF assessed open research in 2021 (and will do so again in 2029). They set an expectation that 95 per cent of qualifying outputs should be open access, and if you failed to hit the threshold, excess closed outputs were simply unclassified. End of. There were no GPAs for open access.

    In the tender for the PCE indicator project, the nature of research culture as a hygiene factor was recognised by proposing “barrier to entry” measures. The expectation seemed to be that for some research culture elements institutions would be expected to meet a certain threshold, and if they failed they would be ineligible to even submit to REF.

    Better use of codes of practice

    This proposal did not make it into the current PCE assessment pilot. However, the REF already has a “barrier to entry” mechanism, of course, which is the completion of an acceptable REF Code of Practice (CoP).

    An institution’s REF CoP is about how they propose to deliver their REF, not how they deliver their research (although there are obvious crossovers). And REF have distinguished between the two in their latest CoP Policy module governing the writing of these codes.

    But given that REF Codes of Practice are now supposed to be ongoing, living documents, I don’t see why they shouldn’t take the form of more research-focussed (rather than REF-focussed) codes. It certainly wouldn’t harm research culture if all research performing organisations had a thorough research code of practice (most do of course) and one that covers a uniform range of topics that we all agree are critical to good research culture. This could be a step beyond the current Terms & Conditions associated with QR funding in England. And it would be a means of incentivising positive research cultures without ‘grading’ them. With your REF CoP, it’s pass or fail. And if you don’t pass first time, you get another attempt.

    Enhanced use of culture and environment data

The other way of assessing culture to incentivise behaviours without it leading to any particular rating or ranking is to simply start collecting & surfacing data on things we care about. For example, the requirement to share gender pay gap data and to report misconduct cases has focussed institutional minds on those things without there being any associated assessment mechanism. If you check out the Higher Education Statistics Agency (HESA) data on the proportion of male:female professors, in most UK institutions you can see the ratio heading in the right direction year on year. This is the power of sharing data, even when there’s no gold or glory on offer for doing so.

    And of course, the REF already has a mechanism to share data to inform, but not directly make an assessment, in the form of ’Environment Data’. In REF 2021, Section 4 of an institution’s submission was essentially completed for them by the REF team by extracting from the HESA data, the number of doctoral degrees awarded (4a) and the volume of research income (4b); and from the Research Councils, the volume of research income in kind (4c).

    This data was provided to add context to environment assessments, but not to replace them. And it would seem entirely sensible to me that we identify a range of additional data – such as the gender & ethnicity of research-performing staff groups at various grades – to better contextualise the assessment of PCE, and to get matters other than the volume of research funding up the agendas of senior university committees.

    Context-sensitive research culture assessment

    That is not to say that Codes of Practice and data sharing should be the only means of incentivising research culture of course. Culture was a significant element of REF Environment statements in 2021, and we shouldn’t row back on it now. Indeed, given that healthy research cultures are an integral part of research excellence, it would be remiss not to allocate some credit to those who do this well.

    Of course there are significant challenges to making such assessments robust and fair in the current climate. The first of these is the complex nature of research culture – and the fact that no framework is going to cover every aspect that might matter to individual institutions. Placing boundaries around what counts as research culture could mean institutions cease working on agendas that are important to them, because they ostensibly don’t matter to REF.

The second challenge is the severe and uncertain financial constraints currently faced by the majority of UK HEIs. Making the case for a happy and collaborative workforce when half are facing redundancy is a tough ask. A related issue here is the hugely varying levels of research (culture) capital across the sector, as I’ve argued before. Those in receipt of a £1 million ‘Enhancing Research Culture’ fund from Research England are likely to make a much better showing than those doing research culture on a shoestring.

    The third is that we are already half-way through this assessment period and we’re only expected to get the final guidance in 2026 – two years prior to submission. And given the financial challenges outlined above, this is going to make this new element of our submission especially difficult. It was partly for this reason that some early work to consider the assessment of research culture was clear that this should celebrate the ‘journey travelled’, rather than a ‘destination achieved’.

For this reason, to my mind, the only things we can reasonably expect all HEIs to do right now with regard to research culture are to:

    • Identify the strengths and challenges inherent within your existing research culture;
    • Develop a strategy and action plan(s) by which to celebrate those strengths and address those challenges;
    • Agree a set of measures by which to monitor your progress against your research culture ambitions. These could be inspired by some of the suggestions resulting from the Vitae & Technopolis PCE workshops & Pilot exercise;
    • Describe your progress against those ambitions and measures. This could be demonstrated both qualitatively and quantitatively, through data and narratives.

    Once again, there is an existing REF assessment mechanism open to us here, and that is the use of the case study. We assess research impact by effectively asking HEIs to tell us their best stories – I don’t see why we shouldn’t make the same ask of PCE, at least for this REF.

    Stepping stone REF

The UK joint funding bodies have made a bold and sector-leading move to focus research performing organisations’ attention on the people and cultures that make for world-leading research endeavours through the mechanism of assessment. Given the challenges we face as a society, ensuring we attract, train, and retain high quality research talent is critical to our success. However, the assessment of research culture has the power to make things either better or worse: to incentivise positive research cultures or to increase burdensome and competitive cultures that don’t tackle all the issues that really matter to institutions.

    To my mind, given the broad range of topics that are being worked on by institutions in the name of improving research culture, and where we are in the REF cycle, and the financial constraints facing the sector, we might benefit from a shift in the mechanisms proposed to assess research culture in 2029 and to see this as a stepping stone REF.

Making better use of existing mechanisms such as Codes of Practice and Environment and Culture data would assess the “hygiene factor” elements of culture without unhelpfully associating any star ratings with them. Ratings would be better applied to the efforts taken by institutions to understand, plan, monitor, and demonstrate progress against their own, mission-driven research culture ambitions. This is where the real work is and where real differentiations between institutions can be made, when contextually assessed. Then, in 2036, when we can hope that the sector will be in a financially more stable place, and with ten years of research culture improvement time behind us, we can assess institutions against their own ambitions, as to whether they are starting to move the dial on this important work.


  • Direct and Indirect Assessment Measures of Student Learning in Higher Education – Faculty Focus


  • How to Implement Diagnostic Assessment Examples

    Diagnostic assessment examples can help ground the concepts of diagnostic assessment in education. Plus, these examples can be an effective tool in gauging student progress and comprehension of course concepts. 

    What is Diagnostic Assessment in Education?

An essential part of course planning is deciding how to gauge student understanding of course concepts. At the beginning of each academic term, it’s important to review the upcoming curriculum and how best to assess students.

    Diagnostic assessment typically takes place at the start of a semester to evaluate a student’s current level of knowledge and skills and their strengths and weaknesses on a particular topic.

    Similar to ipsative assessments, where professors examine students’ prior work in order to assess their current knowledge and abilities, diagnostic assessments are a type of “assessment as learning.” This is distinct from “assessments of learning” or “assessments for learning.”

    Distinction Between Different Types of Assessments

Assessments for learning, also known as formative assessments, make use of information about student progress to improve and support student learning and guide instructional strategies. They are generally instructor-driven but are for student and instructor use. Assessments for learning can occur throughout the teaching and learning process, using a variety of platforms and tools. They engage instructors in providing differentiated instruction and provide feedback to students to enhance their learning.

Assessment as learning (formative assessment) involves active student reflection on learning and monitoring of their own progress, supporting students to critically analyze and evaluate their own learning. In contrast to assessments for learning, these assessments are student-driven and occur throughout the learning process.

Assessment of learning (summative assessment) uses evidence of student learning to make judgments about student progress. It provides instructors with the opportunity to report evidence of meeting course objectives and typically occurs at the end of a learning cycle, using a variety of tools. The evaluation compares assessment information against criteria based on curriculum outcomes for the purpose of communicating student progress to students and making informed decisions about the teaching and learning process.

    What are Diagnostic Assessments Used For?

Students may complete diagnostic assessments to help professors gain insight into their existing knowledge and capabilities both before and after instruction. As such, a diagnostic evaluation can be either:

    • A pre-course diagnostic assessment
    • A post-course diagnostic assessment

    Upon completion of a post-course diagnostic assessment, a professor can compare it against the student’s pre-course diagnostic assessment for that same course and semester in order to identify possible improvements in various specific areas. Professors can then use this information to adjust and adapt their curricula to better meet the needs of future students.

    Professors can utilize diagnostic assessment in education to plan individualized learning experiences for each student that provide both efficient and meaningful instruction.

    Examples of Diagnostic Assessment Tools and Technologies

There are many different educational tools and technologies that enable professors and students to get instant results from learning, including Top Hat, Socrative, Kahoot, Quizizz, Mentimeter and Quizlet. Within each of these tools and technologies are several different examples of diagnostic assessments you can apply to various disciplines.

    Diagnostic Assessment Examples

    Diagnostic assessments can be conducted in many different ways, including as sets of written questions, such as in short answer or multiple choice form, as well as reflection exercises, long answer questions and creative projects.

In courses containing group work, useful types of diagnostic assessments may include self-assessments in which group members each rate themselves based on various guidelines. The group then collects specific samples of each member’s prior work to understand the mindset that led that member to give themselves that rating.

    Different types of diagnostic assessments include:

    • Anticipation guides
    • Conference/interview
    • Formal assessment
    • Gap-closing
    • Graffiti walls
    • Journals
    • KWL
    • Mind maps
    • Parallel activity
    • Performance tasks
    • Posters
    • Quiz/test
    • Student surveys
    • Word splash

    Below, we share examples of how diagnostic assessments can be implemented in different disciplines, as well as easy-to-use tools that streamline the assessment design process for instructors.

    Diagnostic Assessment Examples for Physics

    In physics courses, instructors issue a set of conceptual questions to students at the start of the semester in order to assess the students’ current understanding of the fundamentals of physics.

In certain educational disciplines, standardized diagnostic assessment examples have been developed that instructors can use for any course within that discipline. In physics, one of the most commonly used examples of diagnostic assessment is the Force Concept Inventory, which contains question sets about concepts like gravity, velocity, mass and force that are typically taught in a basic first-semester Newtonian physics course.

    Tools for Diagnostic Assessments in Physics

Physics instructors can use Top Hat’s Polls and Quizzes feature to design diagnostic evaluations that engage students effectively. Use polls to gauge student understanding and see which course concepts may need further review. Frequent quizzes can be used to help students challenge themselves.

    Top Hat’s surveys and polls tools include checkpoints to help break lectures up into more manageable chunks, prompt discussions and motivate students to apply what they learn. Top Hat’s in-class polls and quizzes are multimedia-rich, helping professors engage students fully in the learning and assessment process. Examples of diagnostic assessment in education using these tools include click-on-target, word answer and word matching.

    Diagnostic Assessment Examples for Psychology

In psychology courses, the professor may conduct a survey in order to evaluate assumptions students currently hold about concepts like the nature of the mind versus human behavior.

    In psychology or sociology courses dealing with controversial or sensitive topics, instructors may conduct student surveys to allow learners to pose questions or potentially controversial viewpoints anonymously, allowing for more open classroom discussions and more thorough understandings of preconceived notions students might hold. 

    Examples of Diagnostic Assessment in Psychology Tools

    Socrative is a quiz and assessment website that lets instructors design interactive quizzes particularly suitable for complex topics in psychology, like bio-psychology, criminological psychology, statistics and research methods.

    D2L lets instructors create several types of diagnostic assessments for psychology, including quizzes, surveys and self-assessments.

    Diagnostic Assessment Examples for Creative and Fine Arts

Instructors can use pre-assessment and self-assessment tests to see what students already comprehend about the complexities of the creative process, helping them better direct their efforts to inspire students to engage with class material. They can also collect initial portfolios to judge fine-arts students’ artistic abilities while simultaneously conveying the course objectives.

    Examples of Diagnostic Assessment in Education in Creative Arts and Fine Arts Tools

Besides allowing professors to create customized short-form quizzes, Canvas Quizzes also contains a special “Assignments” feature that lets students upload a file for assessment. This can include a piece of creative writing, illustration or even audio/visual material. That flexibility of media allows professors to examine a broader range of skills and competencies than can be assessed through simple question and answer assessments alone.

    Diagnostic Assessment Examples for STEM courses

More than most subjects, math can create a particularly large amount of anxiety in students who struggle with it, yet it can also be significantly more difficult for instructors to target math interventions for those students. If math anxiety and issues with math aren’t properly identified and addressed soon enough, they can escalate into much more deeply rooted learning problems that are even more challenging for students to overcome.

    Diagnostic assessments help professors gauge students’ current level of competency in complex problem-solving in a number of prerequisite areas before beginning to teach them concepts intended to build upon that knowledge. This may include basic algebraic manipulations, cell cycles, solving equations and chemical equations. By implementing data-driven approaches, professors can specifically examine how students think about math and what strategies and skills they bring with them to approach a math problem.

    An effective diagnostic assessment for math typically examines only one skill or set of skills at a time. That way, professors can more easily identify areas and concepts where students may be in need of further review.

    Tools for Diagnostic Assessments in STEM courses

Top Hat offers a suite of secure tests and exams features that allow instructors to create diagnostic assessments for both in-person and online learning settings with equal ease and efficiency. Whether you’re running a remote, proctored diagnostic assessment for math or an on-premise, open-book one, Top Hat’s secure tests and exams feature lets you choose from 14 different question types or access the Top Hat Catalog and select from a variety of pre-made test banks for mathematics diagnostic assessment.

For online testing, you can verify identities and devices, monitor activity and receive reports flagging irregular behavior. You can create, deploy and receive exams all in one place and have them auto-graded. To make mathematics diagnostic assessment easier, you can also customize question types and assignment settings, and you can let students upload diagnostic assessment projects as PDF files, spreadsheets and slide presentations.

    Key Examples of Diagnostic Assessments

    Unit Pretests

Unit pretests are a type of diagnostic evaluation tool that does not involve students receiving any grades. Instead, unit pretests determine a student’s awareness of a certain unit or module of learning within a larger course before they proceed to learn it. They may include multiple-choice or fill-in-the-blank questions, as opposed to those of a more open-ended nature. Unit pretests are most effective when concentrated on the core skill or concept for students to understand rather than the finer minutiae of the subject matter.

    Exit Tickets

Exit tickets are a straightforward way to gauge student understanding after teaching a lesson, when you’re looking to see how effectively your students have met the objectives for that lesson or unit.

    Instructors ask students a simple question relating to a key concept taught in the lesson they’ve just concluded. Students jot down their answers on a “ticket” they deliver to the instructor upon their “exit” from the classroom. This allows instructors to adapt and adjust their curriculum for the following lesson or semester to align actual exit ticket results more closely with desired outcomes.

    Conclusion

Diagnostic assessment examples like these provide instructors with insights that help them create curricula customized to their students’ current level of knowledge, skills, strengths and weaknesses and, thereby, better aid their students in achieving the objectives of the course. Likewise, professors can apply diagnostic assessments after teaching a lesson or course in order to determine how well the objectives for that lesson or course were met and, based on that information, better strategize and adapt the curriculum for the next lesson or course.

As these diagnostic assessment examples show, diagnostic evaluations are generally informal and simple to use. They typically require no high-level training to create and don’t require following any standardized protocol. Instructors can alter or fine-tune their assessment methods any time they wish, and they can quickly and easily share what they discover through the various types of diagnostic assessments they use with their peers. Diagnostic assessments like these work for any discipline and, most importantly, once applied with the right tools and technologies, they show fast and efficient results.
