Tag: Feedback

  • More Than a Name: How Assignment Labels Influence Student Learning and Performance – Faculty Focus


    Source link

  • Math is Out, Cake is In: Introducing Students to Rubrics – Faculty Focus


    Source link

  • How can students’ module feedback help prepare for success in NSS?


    Since the dawn of student feedback there’s been a debate about the link between module feedback and the National Student Survey (NSS).

    Some institutions have historically doubled down on the idea that there is a read-across from the module learning experience to the student experience as captured by NSS and treated one as a kind of “dress rehearsal” for the other by asking the NSS questions in module feedback surveys.

    This approach arguably has some merits in that it sears the NSS questions into students’ minds to the point that when they show up in the actual NSS it doesn’t make their brains explode. It also has the benefit of simplicity – there’s no institutional debate about what module feedback should include or who should have control of it. If there isn’t a deep bench of skills in survey design in an institution there could be a case for adopting NSS questions on the grounds they have been carefully developed and exhaustively tested with students. Some NSS questions have sufficient relevance in the module context to do the job, even if there isn’t much nuance there – a generic question about teaching quality or assessment might resonate at both levels, but it can’t tell you much about specific pedagogic innovations or challenges in a particular module.

    However, there are good reasons not to take this “dress rehearsal” approach. NSS endeavours to capture the breadth of the student experience at a very high level, not the specific module experience. It’s debatable whether module feedback should even be trying to measure “experience” – there are other possible approaches, such as focusing on learning gains or skills development, especially if the goal is to generate actionable feedback data about specific module elements. For both students and academics, seeing the same set of questions repeated ad nauseam is really rather boring, and is as likely to create disengagement and alienation from the “experience” construct NSS proposes as a comforting sense of familiarity and predictability.

    But separating out the two feedback mechanisms entirely doesn’t make total sense either. Though the totemic status of NSS has been tempered in recent years it remains strategically important as an annual temperature check, as a nationally comparable dataset, as an indicator of quality for the Teaching Excellence Framework and, unfortunately, as a driver of league table position. Securing consistently good NSS scores, alongside student continuation and employability, will feature in most institutions’ key performance indicators and, while vice chancellors and boards will frequently exercise their critical judgement about what the data is actually telling them, when it comes to the crunch no head of institution or board wants to see their institution slip.

    Module feedback, therefore, offers an important “lead indicator” that can help institutions maximise the likelihood that students have the kind of experience that will prompt them to give positive NSS feedback – indeed, the ability to continually respond and adapt in light of feedback can often be a condition of simply sustaining existing performance. But if simply replicating the NSS questions at module level is not the answer, how can these links best be drawn? Wonkhe and evasys recently convened an exploratory Chatham House discussion with senior managers and leaders from across the sector to gather a range of perspectives on this complex issue. While success in NSS remains part of the picture for assigning value and meaning to module feedback in particular institutional contexts there is a lot else going on as well.

    A question of purpose

    Module feedback can serve multiple purposes, and it’s an open question which of those purposes are considered legitimate in different institutions. To give some examples, module feedback can:

    • Offer institutional leaders an institution-wide “snapshot” of comparable data that can indicate where there is a need for external intervention to tackle emerging problems in a course, module or department
    • Test and evaluate the impact of education enhancement initiatives at module, subject or even institution level, or capture progress with implementing systems, policies or strategies
    • Give professional service teams feedback on patterns of student engagement with and opinions on specific provision such as estates, IT, careers or library services
    • Give insight to module leaders about specific pedagogic and curriculum choices and how these were received by students to inform future module design
    • Give students the opportunity to reflect on their own learning journey and engagement
    • Generate evidence of teaching quality that academic staff can use to support promotion or inform fellowship applications
    • Depending on the timing, capture student sentiment and engagement and indicate where students may need additional support or whether something needs to be changed mid-module

    Needless to say, all of these purposes can be legitimate and worthwhile, but not all of them can comfortably coexist. Leaders may prioritise comparability – asking the same questions across all modules to generate comparable data and identify priorities. Similarly, those operating across an institution may be keen to map patterns and capture differences across subjects – one example offered at the round table was whether students had met with their personal tutor. Such questions may be experienced at department or module level as intrusive and irrelevant to more immediately purposeful questions about students’ learning experience on the module. Module leaders, for their part, may want to design their own student evaluation questions, tailored to inform their pedagogic practice and future iterations of the module.

    There are also a lot of pragmatic and cultural considerations to navigate. Everyone is mindful that students get asked to feed back on their experiences A LOT – sometimes even before they have had much of a chance to actually have an experience. As students’ lives become more complicated, institutions are increasingly wary of the potential for cognitive overload that comes with being constantly asked for feedback. Additionally, institutions need to make their processes of gathering and acting on feedback visible to students, so that students can see that sharing their views has an impact – and will confirm this when asked in the NSS. Some institutions are even building into their student surveys questions that test whether students can see the feedback loop being closed.

    Similarly, there is also a strong appreciation of the need to adopt survey approaches that support and enable staff to take action and adapt their practice in response to feedback, affecting the design of the questions, the timing of the survey, how quickly staff can see the results and the degree to which data is presented in a way that is accessible and digestible. For some, trusting staff to evaluate their modules in the way they see fit is a key tenet of recognising their professionalism and competence – but there is a trade-off in terms of visibility of data institution-wide or even at department or subject level.

    Frameworks and ecosystems

    There are some examples in the sector of mature approaches to linking module evaluation data to NSS – it is possible to take a data-led approach that tests the correlation between particular module evaluation question responses and corresponding NSS question outcomes within particular thematic areas or categories, and builds a data model that proposes informed hypotheses about areas of priority for development or approaches that are most likely to drive NSS improvement. This approach does require strong data analysis capability, which not every institution has access to, but it certainly warrants further exploration where the skills are there. The use of a survey platform like evasys allows for the creation of large module evaluation datasets that could be mapped on to NSS results through business intelligence tools to look for trends and correlations that could indicate areas for further investigation.
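
    By way of illustration, here is a minimal sketch of what the most basic version of such a data-led approach might look like. It uses Python and pandas with invented subject names, question columns and scores – it does not reflect the evasys schema or any institution’s actual data model – and simply correlates subject-level module evaluation scores with the corresponding NSS theme outcomes.

    ```python
    # Illustrative sketch only: correlating module evaluation scores with NSS
    # outcomes at subject level. Column names and figures are invented and do
    # not reflect the evasys schema or any institution's real data.
    import pandas as pd

    # Module evaluation responses, already aggregated to subject level
    # (mean agreement score per question, on a 1-5 scale)
    module_by_subject = pd.DataFrame({
        "subject": ["History", "Physics", "Law", "Nursing", "Economics"],
        "mod_teaching": [4.2, 3.8, 4.5, 4.0, 3.6],
        "mod_assessment": [3.9, 3.5, 4.3, 4.1, 3.4],
    })

    # NSS outcomes for the same subjects (% agreement on the matching themes)
    nss_by_subject = pd.DataFrame({
        "subject": ["History", "Physics", "Law", "Nursing", "Economics"],
        "nss_teaching": [86, 79, 90, 84, 75],
        "nss_assessment": [78, 70, 85, 80, 68],
    })

    # Join the two datasets on subject and compute pairwise correlations
    merged = module_by_subject.merge(nss_by_subject, on="subject")

    for module_q, nss_q in [("mod_teaching", "nss_teaching"),
                            ("mod_assessment", "nss_assessment")]:
        r = merged[module_q].corr(merged[nss_q])  # Pearson correlation
        print(f"{module_q} vs {nss_q}: r = {r:.2f}")
    ```

    In practice a real model would work with far more subjects and question pairs, control for cohort size and other confounders, and treat any correlation as a hypothesis to investigate rather than a conclusion.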

    Others take the view that maximising NSS performance is something of a red herring as a goal in and of itself – if the wider student feedback system is working well, then the result should be solid NSS performance, assuming that NSS is basically measuring the right things at a high level. Some go even further and express concern that over-focus on NSS as an indicator of quality can be to the detriment of designing more authentic student voice ecosystems.

    But while thinking in terms of the whole system is clearly going to be more effective than a fragmented approach, given the various considerations and trade-offs discussed it is genuinely challenging for institutions to design such effective ecosystems. There is no “right way” to do it but there is an appetite to move module feedback beyond the simple assessment of what students like or don’t like, or the checking of straightforward hygiene factors, to become a meaningful tool for quality enhancement and pedagogic innovation. There is a sense that rather than drawing direct links between module feedback and NSS outcomes, institutions would value a framework-style approach that is able to accommodate the multiple actors and forms of value that are realised through student voice and feedback systems.

    In the coming academic year Wonkhe and evasys are planning to work with institutional partners on co-developing a framework or toolkit to integrate module feedback systems into wider student success and academic quality strategies – contact us to express interest in being involved.

    This article is published in association with evasys.

    Source link

  • Machine learning technology is transforming how institutions make sense of student feedback


    Institutions spend a lot of time surveying students for their feedback on their learning experience, but once you have crunched the numbers the hard bit is working out the “why.”

    The qualitative information institutions collect is a goldmine of insight about the sentiments and specific experiences that are driving the headline feedback numbers. When students are especially positive, it helps to know why, to spread that good practice and apply it in different learning contexts. When students score some aspect of their experience negatively, it’s critical to know the exact nature of the perceived gap, omission or injustice so that it can be fixed.

    Any conscientious module leader will run their eye down the student comments in a module feedback survey – but once you start looking across modules to programme or cohort level, or to large-scale surveys like NSS, PRES or PTES, the scale of the qualitative data becomes overwhelming for the naked eye. Even the most conscientious reader will find that bias sets in, as comments that are interesting or unexpected tend to be foregrounded as having greater explanatory power over those that seem run of the mill.

    Traditional coding methods for qualitative data require someone – or ideally more than one person – to manually break down comments into clauses or statements that can be coded for theme and sentiment. It’s robust, but incredibly laborious. For student survey work, where the goal might be to respond to feedback and make improvements at pace, institutions are candid that this kind of robust analysis is rarely, if ever, standard practice. Especially as resources become more constrained, devoting hours to this kind of detailed methodological work is rarely a priority.

    Let me blow your mind

    That is where machine learning technology can genuinely change the game. Student Voice AI was founded by Stuart Grey, an academic at the University of Strathclyde (now working at the University of Glasgow), initially to help analyse student comments for large engineering courses. Working with Advance HE, he was able to train the machine learning model on national PTES and PRES datasets. Having further trained the algorithm on NSS data, Student Voice AI now offers same-day analysis of NSS student comments for subscribing institutions.

    Put the words “AI” and “student feedback” in the same sentence and some people’s hackles will immediately rise. So Stuart spends quite a lot of time explaining how the analysis works. The term he uses to describe the version of machine learning Student Voice AI deploys is “supervised learning” – humans manually label categories in datasets and “teach” the machine about sentiment and topic. The larger the available dataset, the more examples the machine is exposed to and the more sophisticated it becomes. Through this process Student Voice AI has landed on a discrete set of comment themes and categories for taught students, and another for postgraduate research students, into which the majority of student comments consistently fall – trained on, and distinctive to, UK higher education student data. Stuart adds that the categories can and do evolve:

    “The categories are based on what students are saying, not what we think they might be talking about – or what we’d like them to be talking about. There could be more categories if we wanted them, but it’s about what’s digestible for a normal person.”

    In practice that means that institutions can see a quantitative representation of their student comments, sorted by category and sentiment. You can look at student views of feedback, for example, and see the overall balance of positive, neutral and negative sentiment, segment it by department, subject area or year of study, then click through to the relevant comments to see what’s driving that feedback. That’s significantly different from, say, dumping your student comments into a third party generative AI platform (sharing confidential data with a third party while you’re at it) and asking it to summarise. There’s value in the time and effort saved, but also in the removal of individual personal bias, and the potential for aggregation and segmentation for different stakeholders in the system. And it also becomes possible to compare student qualitative feedback across institutions.
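
    For readers who want a concrete sense of what “supervised learning” on comment data involves, here is a minimal, generic sketch of the approach: a classifier is trained on human-labelled comments, then used to tag new comments so they can be aggregated by theme and department. Everything in it – the labels, the comments, the column names – is invented for illustration; it does not represent Student Voice AI’s actual model, categories or scale.

    ```python
    # Minimal illustration of supervised comment classification and aggregation.
    # Labels, comments and columns are invented; not Student Voice AI's model.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Human-labelled training data: each comment tagged with a theme by a person
    train = pd.DataFrame({
        "comment": [
            "Feedback on my essay was detailed and quick",
            "The lecture recordings kept cutting out",
            "I never heard back about my coursework marks",
            "Great seminar discussions every week",
        ],
        "theme": ["assessment_feedback", "learning_resources",
                  "assessment_feedback", "teaching"],
    })

    # TF-IDF features plus a linear classifier: a simple supervised baseline
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train["comment"], train["theme"])

    # New, unlabelled comments from a survey, with metadata attached
    new = pd.DataFrame({
        "comment": ["Marking took far too long this term",
                    "Seminars were lively and well run"],
        "department": ["History", "History"],
    })
    new["theme"] = model.predict(new["comment"])

    # Aggregate: how many comments fall into each theme, per department
    summary = new.groupby(["department", "theme"]).size().unstack(fill_value=0)
    print(summary)
    ```

    A production system would train a sentiment dimension in the same way, draw on many thousands of labelled comments, and be validated against held-out human coding before anyone acted on its output.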

    Now, Student Voice AI is partnering with student insight platform evasys to bring machine learning technology to qualitative data collected via the evasys platform. And evasys and Student Voice AI have been commissioned by Advance HE to code and analyse open comments from the 2025 PRES and PTES surveys – creating opportunities to drill down into a national dataset that can be segmented by subject discipline and theme as well as by institution.

    Bruce Johnson, managing director at evasys, is enthused about the potential for the technology to drive culture change in how student feedback is used to inform insight and action across institutions:

    “When you’re thinking about how to create actionable insight from survey data the key question is, to whom? Is it to a module leader? Is it to a programme director of a collection of modules? Is it to a head of department or a pro vice chancellor or the planning or quality teams? All of these are completely different stakeholders who need different ways of looking at the data. And it’s also about how the data is presented – most of my customers want, not only quality of insight, but the ability to harvest that in a visually engaging way.”

    “Coming from higher education it seems obvious to me that different stakeholders have very different uses for student feedback data,” says Stuart Grey. “Those teaching at the coalface are interested in student engagement; at the strategic level the interest is in trends and sentiment analysis; and there are also various stakeholder groups in professional services who never normally get to see this stuff, but we can generate the reports that show them what students are saying about their area. Frequently the data tells them something they knew anyway but it gives them the ammunition to be able to make change.”

    The results are in

    Duncan Berryman, student surveys officer at Queen’s University Belfast, sums up the value of AI analysis for his small team: “It makes our life a lot easier, and the schools get the data and trends quicker.” Previously schools had been supplied with Excel spreadsheets – and his team were spending a lot of time working through with colleagues how to make sense of the data on those spreadsheets. Being able to see a straightforward visualisation of student sentiment on the various themes means that, as Duncan observes rather wryly, “if change isn’t happening it’s not just because people don’t know what student surveys are saying.”

    Parama Chaudhury, professor of economics and pro vice provost education (student academic experience) at University College London, explains where qualitative data analysis sits in the wider ecosystem for quality enhancement of teaching and learning. In her view, for enhancement purposes, comparing your quantitative student feedback scores to those of another department is not particularly useful – essentially it’s comparing apples with oranges. Yet the apparent ease of comparability of quantitative data, compared with the sense of overwhelm at the volume and complexity of student comments, can mean that people spend time trying to explain the numerical differences, rather than mining the qualitative data for more robust and actionable explanations that can give context to those scores.

    It’s not that people weren’t working hard on enhancement, in other words, but they didn’t always have the best possible information to guide that work. “When I came into this role quite a lot of people were saying ‘we don’t understand why the qualitative data is telling us this, we’ve done all these things,’” says Parama. “I’ve been in the sector a long time and have received my share of summaries of module evaluations and have always questioned those summaries because it’s just someone’s ‘read.’ Having that really objective view from a well-trained algorithm makes a difference.”

    UCL has trialled two-page summaries of student comments with specific departments this academic year, and plans to roll out a version for every department this summer. The data is not assessed in a vacuum; it forms part of wider institutional quality assurance and enhancement processes, which draw on a range of different perspectives on areas for development. Encouragingly, so far the data from students is consistent with what has emerged from internal reviews, giving the departments that have had the opportunity to engage with it greater confidence in their processes and action plans.

    None of this stops anyone from going and looking at specific student comments, sense-checking the algorithm’s analysis and/or triangulating against other data. At the University of Edinburgh, head of academic planning Marianne Brown says that the value of the AI analysis is in the speed of turnaround – the institution still carries out a manual review process to be sure that any unexpected comments are picked up. But being able to share the headline insight at pace (in this case via a Power BI interface) means that leaders receive the feedback while the information is still fresh, and there is more lead time to effect change than if it had been lost to manual coding.

    The University of Edinburgh is known for its cutting-edge AI research, and boasts the Edinburgh (access to) Language Models (ELM), a platform that gives staff and students access to generative AI tools without sharing data with third parties, keeping all user data onsite and secure. Marianne is clear that even a closed system like ELM is not appropriate for unfettered student comment analysis. Generative AI platforms offer the illusion of a thematic analysis, but the result is far from robust, because generative AI operates through sophisticated guesswork rather than analysis of what the actual data implies. “Being able to put responses from NSS or our internal student survey into ELM to give summaries was great, until you started to interrogate those summaries. Robust validation of any output is still required,” says Marianne. Similarly, Duncan Berryman observes: “If you asked a gen-AI tool to show you the comments related to the themes it had picked out, it would not refer back to actual comments. Or it would have pulled this supposed common theme from just one comment.”

    The holy grail of student survey practice is creating a virtuous circle: student engagement in feedback creates actionable data, which leads to education enhancement, and students gain confidence that the process is authentic and are further motivated to share their feedback. In that quest, AI, deployed appropriately, can be an institutional ally and resource-multiplier, giving fast and robust access to aggregated student views and opinions. “The end result should be to make teaching and learning better,” says Stuart Grey. “And hopefully what we’re doing is saving time on the manual boring part, and freeing up time to make real change.”

    Source link

  • Improve Student Feedback in 2025


    Higher education is not just changing; it is racing ahead, and as professors we must either keep up or fall behind. AI-powered grading is no far-off fantasy: it is here, transforming how we assess students and deliver insightful feedback. By 2025 it is not merely a nice-to-have; it is a must. Using AI-driven tools, professors can at last escape tiresome grading and concentrate on what really counts: guiding students toward success. Let’s get into the details in this article!

    The Evolution of AI  

    Artificial intelligence (AI) is not merely making inroads into education; it is taking over. AI is transforming classrooms everywhere, from administrative automation to tailored learning paths.

    The figures don’t lie: the $5.88 billion global AI-in-education market is projected to grow at a 31.2% CAGR through 2030. This rapid expansion underlines one thing: higher education is leaning heavily on AI-powered solutions to improve feedback, streamline assessment, and raise learning outcomes.

     

    Key Benefits of AI-Powered Grading in 2025  

    Grading no longer has to be a never-ending cycle of late evenings and red pens. AI-powered grading is rewriting the rules, transforming a once time-consuming task into a fast, intelligent workflow. AI-driven systems automate grading across tests, essays, and even complex responses in a quarter of the typical time, eliminating the need to bury yourself in a pile of marking.

     

     

    And the resulting impact? Professors save up to 70% of their grading time – time better spent on real-world instruction, mentorship, and innovation rather than on an endless assessment cycle. AI is not just saving time; it is freeing professors to concentrate on what really counts: student success.

     

    How AI Enhances Student Learning & Engagement

    Without context, grades are just numbers. Too often, traditional grading leaves students with unclear remarks or, worse, none at all. By giving every student rich, data-driven insights, AI-powered grading changes the game.

    These tools not only point out errors but also break down responses, highlighting areas of strength and weakness with precision. The outcome is tailored, practical feedback that enables students to improve faster than before. Studies have shown that customized feedback can increase student performance by up to 40%, so it is clear that intelligent grading leads to intelligent learning.

    Trust us, this is not just about efficiency; it is about changing how we assess and enhance student learning.

    [Image: How AI elevates student learning and engagement]

    Grading Powered by AI: Adoption Trends and Rates 

    Artificial intelligence is not merely creeping into higher education; it is taking over at full speed. According to a recent EDUCAUSE poll, 52% of institutions use AI to automate administrative tasks, while 54% currently use it to influence curriculum design.

    And it is not only professors: 43% of students actively use AI-powered tools to enhance their own learning.

    These figures show that AI-powered grading and intelligent evaluation tools are no longer a novelty; they are fast becoming the new benchmark. As institutions race to improve efficiency, feedback, and learning outcomes, AI is proving to be the future of assessment.

     

    [Chart: AI Adoption in Higher Education]

     

    Addressing Challenges and Ethical Considerations of AI Adoption

    The rise of AI-powered grading raises serious concerns, including algorithmic bias, data privacy, and the fear of losing human oversight. Can the human touch ever be replaced by AI grading? Should it be? Institutions have to act early to guarantee an ethical implementation:

    • Clear AI policies help academics and students understand how AI evaluations operate
    • Frequent audits of AI models help to reduce bias and guarantee equitable grading
    • Combining AI with human evaluation – automating the grunt work while keeping human judgment in the loop – preserves oversight

    By proactively addressing these concerns, institutions can use AI’s efficiency without sacrificing academic integrity.

     

    Creatrix Campus’s Role in AI-Powered Grading

    Grading should improve learning, not hinder it. Creatrix Campus uses AI-powered grading to deliver faster, smarter, and more informative evaluations. Our solution lets instructors focus on teaching and mentoring by automating tedious chores and providing real-time, individualized feedback.

    Why Educators Trust Creatrix Campus: 

    • Accurate AI-driven grading
    • Real-time, tailored feedback
    • Smart analytics, identifying trends and learning gaps before they become issues
    • Integrates seamlessly with your LMS and other platforms.

    Smarter grading. Better learning. Let’s build the future of assessment together!

     

    Wrapping Up: AI-Powered Grading—The Future Right Now

    As we move through 2025, AI-powered grading is not just an evolution but a revolution in how we evaluate, analyze, and improve student learning. From cutting grading time to providing individualized feedback at scale, AI is reshaping the professor’s role, freeing up more attention for teaching and mentoring and less for administrative overload.

    The next big question for higher ed leaders and assessment committees is not whether AI-powered grading should be embraced, but how quickly it can be done. By adopting intelligent grading systems with a balanced approach – leveraging automation while keeping humans in the loop – institutions can unlock smarter assessments, better learning outcomes, and a more agile academic ecosystem.

    The future of grading is already here. Ready to discover how artificial intelligence might change your university? Get in touch with Team Creatrix to see how we are helping institutions advance with AI-powered solutions!

    Source link

  • From Feedback to Feedforward: Using AI-Powered Assessment Flywheel to Drive Student Competency – Faculty Focus


    Source link

  • Course-Correcting Mid-Semester: A Three Question Feedback Survey – Faculty Focus


    Source link

  • The importance of consequential feedback


    Imagine this: a business student managing a virtual company makes a poor decision, leading to a simulated bankruptcy. Across campus, a medical student adjusts a treatment in a patient simulation and observes improvements in the virtual patient’s condition.

    When students practice in a simulated real-world environment they have access to a rich set of feedback information, including consequential feedback. Consequential feedback provides vital information about the consequences of students’ actions and decisions. Typically, though, in the perennial NSS-driven hand-wringing about improving feedback in higher education, we are thinking only about evaluative feedback information – when educators or peers critique students’ work and suggest improvements.

    There’s no doubt evaluative feedback, especially corrective feedback, is important. But if we’re only talking about evaluative feedback, we are missing whole swathes of invaluable feedback information crucial to preparing graduates for professional roles.

    In a recently published, open access paper in Assessment and Evaluation in Higher Education, we make the case for educators to design for and support students in noticing, interpreting and learning from consequential feedback information.

    What’s consequential feedback?

    Consequential feedback involves noticing the connection between actions and their outcomes (consequences). For example, if we touch a hot stove, we get burned. In this example, noticing the burn is both immediate and obvious. Connecting it to the action of touching the stove is also easy – little interpretation needs to be made. However, there are many cause-effect (action-consequence) sequences embedded in professional practice that are not so easy to connect. Students may need help in noticing the linkages, interpreting them and making corrections to their actions to lead to better consequences in the future.

    For instance, the business student above might decide on a pricing strategy and observe its effect on market share. The simulation speeds up time so students can observe the effects of price change on sales and market share. In real life, observing the consequences of a pricing change might take weeks or months. Through the simulation, learners can experiment with different pricing strategies, making different assumptions about the market, and observing the effects, to build their understanding of how these two variables are linked under different conditions. Critically, they learn the importance of this linkage so they can monitor in the messier, delayed real life situations they might face as a marketing professional.

    Consequential feedback isn’t just theoretical. It is already making an impact in diverse educational fields such as healthcare, business, mathematics and the arts. But the disparate literature we reviewed almost never names this information as consequential feedback. To improve feedback in higher education, we need to be able to talk to educators and students explicitly about this rich font of feedback information. We need a language for it so we can explore how it is distinct from and complementary to evaluative feedback. Naming it allows us to deliberately practice different ways of enhancing it and build evidence about how to teach students to use it well.

    Why does it matter?

    Attending to consequential feedback shifts the focus from external judgments of quality to an internalised understanding of cause and effect. It enables students to experience the results of their decisions and use these insights to refine their practice. Thus, it forms the grist for reflective thinking and a host of twenty-first century skills needed to solve the world’s most pressing problems.

    In “real-life” after university, graduates are unlikely to have a mentor or teacher standing over them offering the kind of evaluative feedback that dominates discussion of feedback in higher education. Instead, they need to be able to learn independently from the consequential feedback readily available in the workplace and beyond. Drawing on consequential feedback information, professionals can continuously learn and adapt their practice to changing contexts. Thus, educators need to design opportunities that simulate professional practices, paying explicit attention to helping students learn from the consequential feedback afforded by these instructional designs.

    How can educators harness it?

    While consequential feedback is powerful, capitalising on it during higher education requires careful design. Here are some strategies for educators to try in their practice:

    Use simulations, role-plays, and projects: Simulations provide a controlled environment where students can explore the outcomes of their actions. For example, in a healthcare setting, students might use patient mannequins or virtual reality tools to practice diagnostic and treatment skills. In a human resources course, students might engage in mediation role plays. In an engineering course, students could design and test products like model bridges or rockets.

    Design for realism: Whenever possible, feedback opportunities should replicate real-world conditions. For instance, a law student participating in a moot court can see how their arguments hold up under cross-examination or a comedy student can see how a real audience responds to their show.

    Encourage reflection: Consequential feedback is most effective when paired with reflection. Educators can prompt students to consider questions such as: What did you do? Why? What happened when you did x? Was y what you expected or wanted? How do these results compare to professional standards? Why did you get that result? What could you change to get the results you want?

    Pair with evaluative feedback: Students may see that they didn’t get the result they wanted but not know how to correct their actions. Consequential feedback doesn’t replace evaluative feedback; it complements it. For example, after a business simulation, an instructor might provide additional guidance on interpreting KPIs or suggest strategies for improvement. This pairing helps students connect outcomes with actionable next steps.

    Shifting the frame

    Focusing on consequential feedback represents a shift in how we think about assessment, feedback, and learning itself. By designing learning experiences that allow students to act and observe the natural outcomes of their actions, we create opportunities for deeper, more meaningful engagement in the learning process. As students study the impact of their actions, they learn to take responsibility for their choices. This approach fosters the problem-solving, adaptability, independence, and professional and social responsibility they’ll need throughout their lives.

    A key question educators should be asking is: how can I help students recognise and learn from the outcomes of their actions? The answer lies in designing for and highlighting consequential feedback.

    Source link

  • Florida Phone Ban in School Gets Mostly Positive Feedback from Administrators – The 74




    School administrators provided mostly positive feedback to lawmakers curious about implementation of a 2023 law prohibiting students from using their phones.

    School officials provided the House Student Academic Success subcommittee feedback last week on HB 379, a 2023 law that prohibits phone use during instructional time, prohibits access to certain websites on school networks, and requires that students receive instruction on responsible social media use.

    “It’s gone very very well in many of our classrooms, especially I would say it goes really well in our classrooms with struggling learners. The teachers have seen the benefit of that increased interaction with each other, the increased focus,” said Toni Zetzsche, principal of River Ridge High School in Pasco County.

    The law, introduced by Rep. Brad Yeager, a Republican representing part of Pasco County,  received unanimous support before serving as a sort of model legislation across the nation.

    “The first step of this process: remove phones from the classroom, focus on learning, take the distraction out. Number two was, social media, without just yanking it from them, try to educate them on the dangers. Try to help to learn and understand how social media works for them and against them,” Yeager said during the subcommittee meeting.

    An EducationWeek analysis shows Florida was the first state to ban or restrict phones when the law passed, with several other states following suit in 2024.

    Florida schools have discretion as to how they enforce the law, with some prohibiting cellphones from the beginning until the end of the day, while others allow students to use their phones during down times like lunch and between classes.

    Some teachers have taken it upon themselves to purchase hanging shoe organizers for students to bank their phones in during class, Yeager said.

    Since the law took effect in the middle of 2023, Zetzsche said, students in higher level college preparatory classes have partially struggled because of the self-regulating nature of the courses and the expectation that teachers give them more freedom.

    But for younger and lower-performing students, the law has been effective, according to Zetzsche and research Yeager used to gain support for the bill.

    “In some of our ninth and tenth grade classrooms, where the kids need a little more support, those teachers are definitely seeing the benefit,” Zetzsche said.

    Orange County Schools Superintendent Maria Vazquez said schools have combatted student complaints about not having their phones by filling down time, like lunch periods, with games or club activities.

    Zetzsche said she has seen herself and others use the phoneless time as an opportunity to get to know more students.

    “I know I’ve spoken with teachers, elementary teachers, middle school teachers, and high school teachers that have said, ‘I’ve had to teach students to reconnect and get involved or talk to people.’ They are doing a better job of focusing on that replacement behavior now, I think. I think we all are,” Zetzsche said.

    “I think, as a high school principal now, when I see a student sitting in the cafeteria and they’re on their cellphone watching a movie, I immediately want to strike up a conversation and say, ‘Hey, are you on the weightlifting team? Do you play a sport?’” Zetzsche said.

    Bell to bell

    Orange County schools decided not to allow phones all day, while Pasco County chose to keep phones away from students during instructional time, the extent the law requires.

    “It was surprisingly, and shockingly, pretty easy to implement,” Marc Wasko, principal at Timber Creek High School in Orange County, told the subcommittee.

    Rep. Fiona McFarland, a Republican representing part of Sarasota County and the chair of the subcommittee, encouraged further planning to better enforce the law.

    “I will tell you, because not everything we do up here is perfect, there are some schools that I’ve heard of where, even if the teacher has a bag, kids are bringing a dummy phone, like mom’s old iPhone, and flipping that into the pouch where they’ve got their device in their pocket or if you’ve got long hair, maybe you can hide earbuds,” McFarland said.

    “I mean, this is the reality of being policymakers, folks,” McFarland continued. “We make a law, we can make the greatest law in the world, which is meaningless if it’s not executed and enforced properly. We could pass a law tomorrow to end world hunger and global peace, but it means nothing if it’s not operationalized well and planned for well.”

    Yeager told the committee he does not plan to seek to ban phones outside of instructional time, although other lawmakers could push for further phone prohibitions.

    Department of Education obligation

    The law requires the Department of Education to make available instructional material on the effects of social media, which students are required to learn under the law.

    “Finding the time to be able to embed that into the curriculum is really difficult. We are struggling with instructional minutes as it is, when we have things like hurricanes impact learnings,” Zetzsche said.

    “We are struggling to get through the content, so it would be nice to have something from the Department of Education that is premade that we can share with students, but maybe through elective courses or some guidance on how they would expect high schools, how they would feed that information to students.”

    Administrators said parental pushback has been limited, and Zetzsche added that parents have sought advice from schools about how to detach their kids from their phones.

    “When we struggle with the student who’s attached to their cellphone, the parents want to put things in place. They just don’t know what to do,” Zetzsche said, calling for the department to provide additional information to parents.

    Florida Phoenix is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Florida Phoenix maintains editorial independence. Contact Editor Michael Moline for questions: [email protected].



    Source link