
  • Adult Student Priorities Survey: Understanding Your Adult Learners 

    The Adult Student Priorities Survey (ASPS) is the instrument in the family of Satisfaction-Priorities Surveys that best captures the experiences of graduate-level students and adult learners in undergraduate programs at four-year institutions. The ASPS provides student perspectives for non-traditional populations, along with external national benchmarks, to inform decision-making at nearly 100 institutions across the country.

    Why the Adult Student Priorities Survey matters

    As a comprehensive survey instrument, the Adult Student Priorities Survey assesses student satisfaction within the context of the level of importance that students place on a variety of experiences, both inside and outside of the classroom. The combination of satisfaction and importance scores enables the identification of institutional strengths (areas of high importance and high satisfaction) and institutional challenges (areas of high importance and low satisfaction). Strengths can be celebrated, and challenges can be addressed by campus leadership to build on what is working and to reinforce other areas where needed.

    With the survey implementation, all currently enrolled students (as defined by the institution) can provide feedback on their experiences with instruction, advising, registration, recruitment/financial aid, and support services, as well as how they feel as students at the institution. The results deliver external benchmarks against other institutions serving adult learners, including data specific to graduate programs, and the ability to monitor internal benchmarks when the survey is administered over multiple years. (The national student satisfaction results are published annually.) The delivered results also provide the option to analyze subset data for all standard and customizable demographic indicators, to understand where targeted initiatives may be required to best serve student populations.

    Connecting ASPS data to student success and retention

    Like the Student Satisfaction Inventory and the Priorities Survey for Online Learners (the other survey instruments in the Satisfaction-Priorities family), the data gathered by the Adult Student Priorities Survey can support multiple initiatives on campus, including informing student success efforts, providing the student voice for strategic planning, documenting priorities for accreditation purposes, and highlighting positive messaging for recruitment activities. Student satisfaction has been positively linked with higher individual student retention and higher institutional graduation rates, getting right to the heart of higher education student success.

    Learn more about best practices for administering the online Adult Student Priorities Survey at your institution, which can be done at any time during the academic year on the institution’s timeline.

    Ask for a complimentary consultation with our student success experts

    What is your best approach to increasing student retention and completion? Our experts can help you identify roadblocks to student persistence and maximize student progression. Reach out to set up a time to talk.


  • Understanding how inflation affects teacher well-being and career decisions

    In recent years, the teaching profession has faced unprecedented challenges, with inflation emerging as a significant factor affecting educators’ professional lives and career choices. This examination delves into the complex interplay between escalating inflation rates and the self-efficacy of educators – their conviction in their capacity to proficiently execute their pedagogical responsibilities and attain the desired instructional outcomes within the classroom environment.

    The impact of inflation on teachers’ financial stability has become increasingly evident, with many educators experiencing a substantial decline in their “real wages.” While nominal salaries remain relatively stagnant, the purchasing power of teachers’ incomes continues to erode as the cost of living rises. This economic pressure has created a concerning dynamic where educators, despite their professional dedication, find themselves struggling to maintain their standard of living and meet basic financial obligations.

    A particularly troubling trend has emerged in which teachers are increasingly forced to seek secondary employment to supplement their primary income. Recent surveys indicate that approximately 20 percent of teachers now hold second jobs during the academic year, with this percentage rising to nearly 30 percent during summer months. This necessity to work multiple jobs can lead to physical and mental exhaustion, potentially compromising teachers’ ability to maintain the high levels of energy and engagement required for effective classroom instruction.

    The phenomenon of “moonlighting” among educators has far-reaching implications for teacher self-efficacy. When teachers must divide their attention and energy between multiple jobs, their capacity to prepare engaging lessons, grade assignments thoroughly, and provide individualized student support may be diminished. This situation often creates a cycle where reduced performance leads to decreased self-confidence, potentially affecting both teaching quality and student outcomes.

    Financial stress has also been linked to increased levels of anxiety and burnout among teachers, directly impacting their perceived self-efficacy. Studies have shown that educators experiencing financial strain are more likely to report lower levels of job satisfaction and decreased confidence in their ability to meet professional expectations. This psychological burden can manifest in reduced classroom effectiveness and diminished student engagement.

    Perhaps most concerning is the growing trend of highly qualified educators leaving the profession entirely for better-paying opportunities in other sectors. This “brain drain” from education represents a significant loss of experienced professionals who have developed valuable teaching expertise. The exodus of talented educators not only affects current students but also reduces the pool of mentor teachers available to guide and support newer colleagues, potentially impacting the professional development of future educators.

    The correlation between inflation and teacher attrition rates has become increasingly apparent, with economic factors cited as a primary reason for leaving the profession. Research indicates that districts in areas with higher costs of living and significant inflation rates experience greater difficulty in both recruiting and retaining qualified teachers. This challenge is particularly acute in urban areas where housing costs and other living expenses have outpaced teacher salary increases.

    Corporate sectors, technology companies, and consulting firms have become attractive alternatives for educators seeking better compensation and work-life balance. These career transitions often offer significantly higher salaries, better benefits packages, and more sustainable working hours. The skills that make effective teachers, such as communication, organization, and problem-solving, are highly valued in these alternative career paths, making the transition both feasible and increasingly common.

    The cumulative effect of these factors presents a serious challenge to the education system’s sustainability. As experienced teachers leave the profession and prospective educators choose alternative career paths, schools face increasing difficulty in maintaining educational quality and consistency. This situation calls for systematic changes in how we value and compensate educators, recognizing that teacher self-efficacy is intrinsically linked to their financial security and professional well-being.


  • Understanding and writing the Literature Review in MBA Projects

    Understanding the Topic: Even before starting to write, a student should have full clarity about the research title, the objectives of the study, and the research problems.

    Searching for Relevant Literature: Students should search academic databases such as Google Scholar, ResearchGate, JSTOR, or Scopus.

    Evaluating Sources: Once relevant sources are collected, students should analyze and evaluate the objectives, findings, and limitations of those studies.

    Grouping of Literature: The collected literature should be grouped under subheadings aligned with the areas of the study.

    Write Critically: The literature review should be written critically and analytically, and remain relevant to the study.

    Identify the Research Gap: Students should analyze the literature to find the research gap and specify where their study will address those gaps.

    Cite & Reference: Students should use formats such as APA (7th edition) or Harvard referencing style for in-text citations and references.


  • OfS’ understanding of the student interest requires improvement

    When the Office for Students’ (OfS) proposals for a new quality assessment system for England appeared in the inbox, I happened to be on a lunchbreak from delivering training at a students’ union.

    My own jaw had hit the floor several times during my initial skim of its 101 pages – and so to test the validity of my initial reactions, I attempted to explain, in good faith, the emerging system to the student leaders who had reappeared for the afternoon.

    Having explained that the regulator was hoping to provide students with a “clear view of the quality of teaching and learning” at the university, their first confusion was tied up in the idea that this was even possible in a university with 25,000 students and hundreds of degree courses.

    They’d assumed that some sort of dashboard might be produced that would help students differentiate between at least departments if not courses. When I explained that the “view” would largely be in the form of a single “medal” of Gold, Silver, Bronze or Requires improvement for the whole university, I was met with confusion.

    We’d spent some time before the break discussing the postgraduate student experience – including poor induction for international students, the lack of a policy on supervision for PGTs, and the isolation that PGRs had fed into the SU’s strategy exercise.

    When I explained that OfS was planning to introduce a PGT NSS in 2028 and then use that data in the TEF from 2030-31 – such that their university might not have the data taken into account until 2032-33 – I was met with derision. When I explained that PGRs may be incorporated from 2030–31 onwards, I was met with scorn.

    Keen to know how students might feed in, one officer asked how their views would be taken into account. I explained that as well as the NSS, the SU would have the option to create a written submission to provide contextual insight into the numbers. When one of them observed that “being honest in that will be a challenge given student numbers are falling and so is the SU’s funding”, the union’s voice coordinator (who’d been involved in the 2023 exercise) in the corner offered a wry smile.

    One of the officers – who’d had a rewarding time at the university pretty much despite their actual course – wanted to know if the system was going to tackle students like them not really feeling like they’d learned anything during their degree. Given the proposals’ intention to drop educational gain altogether, I moved on at this point. Young people have had enough of being let down.

    I’m not at home in my own home

    Back in February, you might recall that OfS published a summary of a programme of polling and focus groups that it had undertaken to understand what students wanted and needed from their higher education – and the extent to which they were getting it.

    At roughly the same time, it published proposals for a new initial Condition C5: Treating students fairly, to apply initially to newly registered providers, which drew on that research.

    As well as issues it had identified with things like contractual provisions, hidden costs and withdrawn offers, it was particularly concerned with the risk that students may take a decision about what and where to study based on false, misleading or exaggerated information.

    OfS’ own research into the Teaching Excellence Framework 2023 points to one of the culprits behind that misleading information. Polling by Savanta in April and May 2024, and follow-up focus groups with prospective undergraduates over the summer, both showed that applicants consistently described TEF outcomes as too broad to be of real use for their specific course decisions.

    They wanted clarity about employability rates, continuation statistics, and job placements – but what they got instead was a single provider-wide badge. Many struggled to see meaningful differences between Gold and Silver, or to reconcile how radically different providers could both hold Gold.

    The evidence also showed that while a Gold award could reassure applicants, more than one in five students aware of their provider’s TEF rating disagreed that it was a fair reflection of their own experience. That credibility gap matters.

    If the TEF continues to offer a single label for an entire university, with data that are both dated and aggregated, there is a clear danger that students will once again be misled – this time not by hidden costs or unfair contracts, but by the regulatory tool that is supposed to help them make informed choices.

    You don’t know what I’m feeling

    Absolutely central to the TEF will remain results of the National Student Survey (NSS).

    OfS says that’s because “the NSS remains the only consistently collected, UK-wide dataset that directly captures students’ views on their teaching, learning, and academic support,” and because “its long-running use provides reliable benchmarked data which allows for meaningful comparison across providers and trends over time.”

    It stresses that the survey provides an important “direct line to student perceptions,” which balances outcomes data and adds depth to panel judgements. In other words, the NSS is positioned as an indispensable barometer of student experience in a system that otherwise leans heavily on outcomes.

    But set aside the fact that it surveys only those who make it to the final year of a full undergraduate degree. The NSS doesn’t ask whether students felt their course content was up to date with current scholarship and professional practice, or whether learning outcomes were coherent and built systematically across modules and years — both central expectations under B1 (Academic experience).

    It doesn’t check whether students received targeted support to close knowledge or skills gaps, or whether they were given clear help to avoid academic misconduct through essay planning, referencing, and understanding rules – requirements spelled out in the guidance to B2 (Resources, support and engagement). It also misses whether students were confident that staff were able to teach effectively online, and whether the learning environment – including hardware, software, internet reliability, and access to study spaces – actually enabled them to learn. Again, explicit in B2, but invisible in the survey.

    On assessment, the NSS asks about clarity, fairness, and usefulness of feedback, but it doesn’t cover whether assessment methods really tested what students had been taught, whether tasks felt valid for measuring the intended outcomes, or whether students believed their assessments prepared them for professional standards. Yet B4 (Assessment and awards) requires assessments to be valid and reliable, moderated, and robust against misconduct – areas NSS perceptions can’t evidence.

    I could go on. The survey provides snapshots of the learning experience but leaves out important perception checks on the coherence, currency, integrity, and fitness-for-purpose of teaching and learning, which the B conditions (and students) expect providers to secure.

    And crucially, OfS has chosen not to use the NSS questions on organisation and management in the future TEF at all. That’s despite its own 2025 press release highlighting it as one of the weakest-performing themes in the sector – just 78.5 per cent of students responded positively – and pointing out that disabled students in particular reported significantly worse experiences than their peers.

    OfS said then that “institutions across the sector could be doing more to ensure disabled students are getting the high quality higher education experience they are entitled to,” and noted that the gap between disabled and non-disabled students was growing in organisation and management. In other words, not only is the NSS not fit for purpose, OfS’ intended use of it isn’t either.

    I followed the voice, you gave to me

    In the 2023 iteration of the TEF, the independent student submission was supposed to be one of the most exciting innovations. It was billed as a crucial opportunity for providers’ students to tell their own story – not mediated through NSS data or provider spin, but directly and independently. In OfS’ words, the student submission provided “additional insights” that would strengthen the panel’s ability to judge whether teaching and learning really were excellent.

    In this consultation, OfS says it wants to “retain the option of student input,” but with tweaks. The headline change is that the student submission would no longer need to cover “student outcomes” – an area that SUs often struggled with given the technicalities of data and the lack of obvious levers for student involvement.

    On the surface, that looks like a kindness – but scratch beneath the surface, and it’s a red flag. Part of the point of Condition B2.2b is that providers must take all reasonable steps to ensure effective engagement with each cohort of students so that “those students succeed in and beyond higher education.”

    If students’ unions feel unable to comment on how the wider student experience enables (or obstructs) student success and progression, that’s not a reason to delete it from the student submission. It’s a sign that something is wrong with the way providers involve students in what’s done to understand and shape outcomes.

    The trouble is that the light touch response ignores the depth of feedback it has already commissioned and received. Both the IFF evaluation of TEF 2023 and OfS’ own survey of student contacts documented the serious problems that student reps and students’ unions faced.

    They said the submission window was far too short – dropping guidance in October, demanding a January deadline, colliding with elections, holidays, and strikes. They said the guidance was late, vague, inaccessible, and offered no examples. They said the template was too broad to be useful. They said the burden on small and under-resourced SUs was overwhelming, and even large ones had to divert staff time away from core activity.

    They described barriers to data access – patchy dashboards, GDPR excuses, lack of analytical support. They noted that almost a third didn’t feel fully free to say what they wanted, with some monitored by staff while writing. And they told OfS that the short, high-stakes process created self-censorship, strained relationships, and duplication without impact.

    The consultation documents brush most of that aside. Little in the proposals tackles the resourcing, timing, independence, or data access problems that students actually raised.

    I’m not at home in my own home

    OfS also proposes to commission “alternative forms of evidence” – like focus groups or online meetings – where students aren’t able to produce a written submission. The regulator’s claim is that this will reduce burden, increase consistency, and make it easier to secure independent student views.

    The focus group idea is especially odd. Student representatives’ main complaint wasn’t that they couldn’t find the words – it was that they lacked the time, resource, support, and independence to tell the truth. Running a one-off OfS focus group with a handful of students doesn’t solve that. It actively sidesteps the standard in B2 and the DAPs rules on embedding students in governance and representation structures.

    If a student body struggles to marshal the evidence and write the submission, the answer should be to ask whether the provider is genuinely complying with the regulatory conditions on student engagement. Farming the job out to OfS-run focus groups allows providers with weak student partnership arrangements to escape scrutiny – precisely the opposite of what the student submission was designed to do.

    The point is that the quality of a student submission is not just a “nice to have” extra insight for the TEF panel. It is, in itself, evidence of whether a provider is complying with Condition B2. It requires providers to take all reasonable steps to ensure effective engagement with each cohort of students, and says students should make an effective contribution to academic governance.

    If students can’t access data, don’t have the collective capacity to contribute, or are cowed into self-censorship, that is not just a TEF design flaw – it is B2 evidence of non-compliance. The fact that OfS has never linked student submission struggles to B2 is bizarre. Instead of drawing on the submissions as intelligence about engagement, the regulator has treated them as optional extras.

    The refusal to make that link is even stranger when compared to what came before. Under the old QAA Institutional Review process, the student written submission was long-established, resourced, and formative. SUs had months to prepare, could share drafts, and had the time and support to work with managers on solutions before a review team arrived. It meant students could be honest without the immediate risk of reputational harm, and providers had a chance to act before being judged.

    TEF 2023 was summative from the start, rushed and high-stakes, with no requirement on providers to demonstrate they had acted on feedback. The QAA model was designed with SUs and built around partnership – the TEF model was imposed by OfS and designed around panel efficiency. OfS has learned little from the feedback from those who submitted.

    But now I’ve gotta find my own

    While I’m on the subject of learning, we should finally consider how far the proposals have drifted from the lessons of Dame Shirley Pearce’s review. Back in 2019, her panel made a point of recording what students had said loud and clear – the lack of learning gain in TEF was a fundamental flaw.

    In fact, educational gain was the single most commonly requested addition to the framework, championed by students and their representatives who argued that without it, TEF risked reducing success to continuation and jobs.

    Students told the review they wanted a system that showed whether higher education was really developing their knowledge, skills, and personal growth. They wanted recognition of the confidence, resilience, and intellectual development that are as much the point of university as a payslip.

    Pearce’s panel agreed, recommending that Educational Gains should become a fourth formal aspect of TEF, encompassing both academic achievement and personal development. Crucially, the absence of a perfect national measure was not seen as a reason to ignore the issue. Providers, the panel said, should articulate their own ambitions and evidence of gain, in line with their mission, because failing to even try left a gaping hole at the heart of quality assessment.

    Fast forward to now, and OfS is proposing to abandon the concept entirely. To students and SUs who have been told for years that their views shape regulation, the move is a slap in the face. A regulator that once promised to capture the full richness of the student experience is now narrowing the lens to what can be benchmarked in spreadsheets. The result is a framework that tells students almost nothing about what they most want to know – whether their education will help them grow.

    You see the same lack of learning in the handling of extracurricular and co-curricular activity. For students, societies, volunteering, placements, and co-curricular opportunities are not optional extras but integral to how they build belonging, develop skills, and prepare for life beyond university. Access to these opportunities features heavily in the Access and Participation Risk Register precisely because they matter to student success and because they are a part of the educational offer in and of themselves.

    But in TEF 2023 OfS tied itself in knots over whether they “count” — at times allowing them in if narrowly framed as “educational”, at other times excluding them altogether. To students who know how much they learn outside of the lecture theatre, the distinction looked absurd. Now the killing off of educational gain excludes them altogether.

    You should have listened

    Taken together, OfS has delivered a masterclass in demonstrating how little it has learned from students. As a result, the body that once promised to put student voice at the centre of regulation is in danger of constructing a TEF that is both incomplete and actively misleading.

    It’s a running theme – more evidence that OfS is not interested enough in genuinely empowering students. If students don’t know what they can, should, or could expect from their education – because the standards are vague, the metrics are aggregated, and the judgements are opaque – then their representatives won’t know either. And if their reps don’t know, their students’ union can’t effectively advocate for change.

    When the only judgements against standards that OfS is interested in come from OfS itself, delivered through a very narrow funnel of risk-based regulation, that funnel inevitably gets choked off through appeals to “reduced burden” and aggregated medals that tell students nothing meaningful about their actual course or experience. The result is a system that talks about student voice while systematically disempowering the very students it claims to serve.

    In the consultation, OfS says that it wants its new quality system to be recognised as compliant with the European Standards and Guidelines (ESG), which would in time allow it to seek membership of the European Quality Assurance Register (EQAR). That’s important for providers with international partnerships and recruitment ambitions, and for students given that ESG recognition underpins trust, mobility, and recognition across the European Higher Education Area.

    But OfS’ conditions don’t require co-design of the quality assurance framework itself, nor proof that student views shape outcomes. Its proposals expand student assessor roles in the TEF, but don’t guarantee systematic involvement in all external reviews or transparency of outcomes – both central to ESG. And as the ongoing QA-FIT project and ESU have argued, the next revision of the ESG is likely to push student engagement further, emphasising co-creation, culture, and demonstrable impact.

    If it does apply for EQAR recognition, our European peers will surely notice what English students already know – the gap between OfS’ rhetoric on student partnership and the reality of its actual understanding and actions is becoming impossible to ignore.

    When I told those student officers back on campus that their university would be spending £25,000 of their student fee income every time it has to take part in the exercise, their anger was palpable. When I added that according to the new OfS chair, Silver and Gold might enable higher fees, while Bronze or “Requires Improvement” might cap or further reduce their student numbers, they didn’t actually believe me.

    The student interest? Hardly.


  • Understanding Value of Learning Fuels ChatGPT’s Study Mode

    Photo illustration by Justin Morrison/Inside Higher Ed | SDI Productions/E+/Getty Images

    When classes resume this fall, college students will have access to yet another generative artificial intelligence tool marketed as a learning enhancement.

    But instead of generating immediate answers, OpenAI’s new Study Mode for ChatGPT acts more like a tutor, firing off questions, hints, self-reflection prompts and quizzes that are tailored to the user and informed by their past chat history. While traditional large language models have raised academic integrity concerns, Study Mode is intended to provide a more active learning experience. It mimics the type of Socratic dialogue students may expect to encounter in a lecture hall and challenges them to draw on information they already know to form their own nuanced analyses of complex questions.

    For example, when Inside Higher Ed asked the traditional version of ChatGPT which factors caused the United States Civil War, it immediately responded that the war had “several major causes, most of which revolved around slavery, states’ rights, and economic differences,” and gave more details about each before producing a five-paragraph essay on the topic. Asking Study Mode the same question, however, prompted it to give a brief overview before asking this question: “Would you say the war was fought because of slavery, or about something else like states’ rights or economics? There’s been debate over this, so I’d love to hear your view first. Then I’ll show you how historians analyze it today.”

    Study Mode is similar to the Learning Mode that Anthropic launched for its chatbot Claude for Education back in April and the Guided Learning version of Gemini that Google unveiled Wednesday. OpenAI officials say they hope Study Mode will “support deeper learning” among college students.

    While teaching and learning experts don’t believe such tools can replace the value faculty relationships and expertise offer students, Study Mode’s release highlights generative AI’s evolving possibilities—and limitations—as a teaching and learning aid. For students who choose to use it instead of asking a traditional LLM for answers, Study Mode offers an on-demand alternative to a human tutor, unbound by scheduling conflicts, payment or feedback delays.

    But in an economy where generative AI’s ability to gather and regurgitate information is threatening the future of entry-level office jobs, students will need to understand what they’re trying to get out of their college coursework if they want Study Mode to deliver on its promise to foster deeper learning.

    “Over all, it does represent a significant departure from how people are normally encouraged to use generative AI, given that the value proposition of this technology is largely couched in the concept of efficiency, speed and offloading things that are not as important,” said Trey Conatser, assistant provost for teaching and learning at the University of Kentucky and director of UK’s Center for the Enhancement of Teaching and Learning. “But Study Mode actually asks us to stick with it for it to work, take some time to sit with the friction of learning and be patient with the practice and feedback.”

    Critical Thinking in Age of AI

    The release of Study Mode comes nearly three years after the debut of ChatGPT—and the many other large language models that have followed—set off ongoing concerns among educators that college students’ increased reliance on traditional LLMs to write papers or solve math problems may erode their critical thinking skills.

    There’s not yet a consensus on whether generative AI helps or hurts critical thinking, but at least two studies published in the last year found that adults, including college students, who used LLMs to research and learn about a subject exhibited weaker reasoning and analytical skills. And those may be exactly the types of skills that the roughly two-thirds of college students already using generative AI tools like ChatGPT will need to make Study Mode reach its full potential.

    While ChatGPT’s Study Mode is programmed to query students rather than generate answers, the users themselves also need to be prepared to push the LLM to ask deeper, more nuanced questions. At least, that was Conatser’s experience when he tested out Study Mode over the weekend and asked it to help him study for a hypothetical quiz on Romantic poets from 1780 to 1820.

    First, the LLM asked him more about his education level and the focus of the quiz. He told it that he’s a college junior whose quiz will focus on historical context and specified that he wanted help studying William Wordsworth’s poem “The Prelude.”

    The following conversation, which Conatser shared with Inside Higher Ed, ensued:

    Study Mode: Wordsworth described “The Prelude” as “a poem on the growth of my own mind.” What exact subtitle did he eventually attach to the poem that captures this idea?

    (Give it a try, and we’ll check your answer before moving on.)

    Conatser: That’s not so much what I will need to answer. It will be open book, but I will be asked to provide some interpretations to historical context and literary traditions.

    After some more dialogue in which Conatser asked Study Mode to focus on historical context, it generated the following question:

    Study Mode: “Bliss was it in that dawn to be alive …” Based on that word, “bliss,” what general emotion do you think he felt when he first encountered the early French Revolution—excitement, fear or apathy?

    Conatser: That seems like an obvious question.

    Study Mode: Understood—let’s turn up the difficulty. In “The Prelude” (Book X), Wordsworth writes of “how in my heart a dreary weight / Fell, and I turned away …”—a moment that marks his break with the revolution he once celebrated.

    Question (one step): Which single, headline-making event in France during 1793 does this passage point to, and why did that event feel like a betrayal of the revolution’s original ideals for many sympathetic British observers?

    Although it took some prodding to get the LLM to ask the type of questions Conatser wanted to see, “I’m still in the driver’s seat with Study Mode,” he said. “That’s where students are going to need to be when they’re using it.”

    Knowing when to push back when Study Mode isn’t asking the kinds of questions that help them master material will be key to students getting the most out of its capacity to foster critical thinking, Conatser said.

    “If someone is going to be totally passive with Study Mode, it may not drive them to where they need to go and they’ll still need to be thinking about how they’re supposed to be studying, what kind of assessment they’re leading up to and what kinds of skills will be more important for that assessment,” he said. “If you don’t know what you don’t know, then you won’t be very effective at using these tools even if they have the types of guardrails that Study Mode has.”

    And if a student isn’t motivated to use Study Mode to its full potential, they can prompt it to generate complete answers, just like a traditional LLM, according to Christian Carmody, a senior researcher at the Research Institute for Learning and Development, who also tested it out this week.

    “For a current events assignment, I prompted Study Mode right away and told it, ‘Before we engage, I do not want your help with this or [to] encourage me to think through this. I do this on my own another time. I really just want the answers,’” Carmody recalled. “It did exactly that.”

    The ease with which students can manipulate Study Mode could add more pressure to colleges and universities already facing growing skepticism from students about the value of degrees in the age of AI.

    “Students should be able to think about why learning is valuable to them and why they should be able to engage with material in a way that’s challenging and force deep thinking,” Carmody said. “Until a student has that mindset, I’m not confident that they are going to use this study and learning tool in the way it’s intended to be used.”


  • Understanding the Impact of Workplace Incivility in Higher Education – Faculty Focus

    Understanding the Impact of Workplace Incivility in Higher Education – Faculty Focus


  • Understanding the commuter student paradox

    Understanding the commuter student paradox

    When we think about commuter students, the first thing that often comes to mind is the difficulty of balancing their studies with the demands of travel.

    We frequently talk about how their lives are more challenging than those of peers who live nearer to campus, given the time constraints and added cost pressures they face.

    However, a closer look reveals a fascinating paradox. Despite the perceived hardships, commuter students who progress with their studies can achieve better outcomes.

    At the University of Lancashire, our ongoing student working lives (SWL) project, which was set up to understand the prevalence and impact of part-time work on the student experience, has started to shed light on the unique experiences of commuter students.

    Our survey considers self-reported responses to questions related to students’ part-time work and university experiences, alongside linked student data to reveal a clearer picture of their non-university lives and their connection with student outcomes.

    Initial data from our latest wave of the SWL project suggests that while commuter students frequently experience tighter schedules due to increased travel commitments and other out-of-class responsibilities, they can often experience better outcomes in their university and non-university lives than their non-commuter peers.

    This data comes from our 2025 student working lives survey which is based on an institutional sample of 484 students, with permission to link data from 136 students.

    Our research extends the recent debate around choice versus necessity in commuting by repositioning commuters not as left behind, but as a group of students prepared to meet the challenges in front of them, in some ways navigating those challenges better and excelling in their studies.

    Choose Life

    The survey’s results reinforce the common belief that commuter students have busy lives.

    Taken together, commuter students are twice as likely to have caring responsibilities, tend to live in more deprived neighbourhoods (based on IMD quintile) and carry a higher work and travel load than their non-commuting counterparts, leaving less time to spend on study.

    However, framing commuting as a question of necessity or choice can imply that university is the most central thing in students’ lives, which calls into question whether our assumptions about commuting students rest on the correct premise.

    Figure: three bar charts showing workload and travel by commuter status.

    Our latest research tells us that commuters are more likely to spend longer working than non-commuter students. While an increased workload highlights the disadvantage some commuters experience, our findings reveal a more complex picture that requires a deeper dive into the lives of this student demographic.

    Yet when linked to university records, the commuter students we surveyed achieved higher attainment on average (+2 percentage points), despite a lower self-reported sense of belonging compared to their peers.

    Put bluntly, while commuting students feel slightly less attachment to the university and commit less time to study, they go on to receive better marks.

    While this identifies a positive outcome for those students in our study, we should be mindful of wider research suggesting that commuter students are at greater risk of withdrawing, given the acute nature of the challenges they experience. As the study progresses, we’ll continue to track further longitudinal outcomes such as continuation, completion and progression over the coming months and years.

    Choose work

    When we examined experiences of work in our study, commuter students reported that they found their work more meaningful, more productive and more fairly paid than their non-commuter peers did.

    They also felt better supported at work by their colleagues and managers and felt their current job requirements and responsibilities would enhance future employment prospects. What can we take from this?

    Student Working Lives – % Agree, by student population:

    Question | Commuter | Non-Commuter
    Is your work meaningful? | 43.5% | 40.3%
    Is your work productive? | 53.2% | 39.8%
    Do you feel fairly paid or rewarded? | 47.2% | 44.5%
    Do you feel supported by colleagues? | 42.7% | 38.6%
    Do you feel supported by managers? | 37.5% | 31.4%
    Do you feel your job enhances your future employment prospects? | 41.1% | 30.9%
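    Read as percentage-point gaps, the survey figures above consistently favour commuter students. A minimal Python sketch of that comparison, using the published figures (the variable names are illustrative only):

    ```python
    # "% Agree" rates from the Student Working Lives table, by question.
    commuter = {
        "Is your work meaningful?": 43.5,
        "Is your work productive?": 53.2,
        "Do you feel fairly paid or rewarded?": 47.2,
        "Do you feel supported by colleagues?": 42.7,
        "Do you feel supported by managers?": 37.5,
        "Do you feel your job enhances your future employment prospects?": 41.1,
    }
    non_commuter = {
        "Is your work meaningful?": 40.3,
        "Is your work productive?": 39.8,
        "Do you feel fairly paid or rewarded?": 44.5,
        "Do you feel supported by colleagues?": 38.6,
        "Do you feel supported by managers?": 31.4,
        "Do you feel your job enhances your future employment prospects?": 30.9,
    }

    # Positive gap = commuter students agree more often (percentage points).
    gaps = {q: round(commuter[q] - non_commuter[q], 1) for q in commuter}

    # Rank questions by the size of the commuter advantage.
    for q, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
        print(f"{gap:+5.1f} pp  {q}")
    ```

    Ranked this way, the largest commuter advantages sit on productivity and future employment prospects, which is the pattern the paragraphs below discuss.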

     

    It’s important to state that the quality of work outcomes, despite being slightly better for commuter students, reinforces the findings from our 2024 SWL report and last year’s HEPI Student Academic Experience Survey: students are having to work more to cope with the increased cost of living and, on the whole, are not experiencing what can be considered “good” work.

    However, commuter students appear to be negotiating their challenges exceptionally well and are more likely to have a job that supports their future career aspirations.

    While commuter students face unique challenges, are they effectively leveraging their time and resources to excel in their studies, leading to positive outcomes in various aspects of their lives?

    If so, could this add further weight to reframing the argument away from a one-dimensional deficit approach when talking about commuting students?

    We already know that commuter students often have busy lives. This fuller life, however, with its many facets, could give them the direction and motivation to succeed in their studies and at work.

    They are not just students; they are employees, caregivers, and active members of their communities. Rather than being a deficit, these experiences can add to their educational success if students are supported in leveraging them.

    Choose commuting

    It’s important for universities to recognise this clear paradox around commuter students. Time restrictions and commitments make it harder for commuter students to dedicate time to their studies, particularly independent study, which encroaches on the family home.

    At the same time, the benefits of spending time in the workplace, having a family and travelling can enrich their student experience and outcomes.

    By understanding and appreciating these unique experiences, universities can better support commuter and non-commuter students alike.

    At the University of Lancashire, we are feeding these insights into our institutional University of the Future programme. This focuses on curriculum transformation to enhance the student learning experience, the transition to block delivery so that the pace of learning aligns with student lives, and the introduction of a short-course lifelong learning model that looks to meet the changing needs of students.

    Commuter students teach us that life’s challenges can also be its greatest strengths. Their ability to balance multiple responsibilities and still be able to achieve positive outcomes is a testament to their ability and determination, attributes the sector is committed to harnessing and employers are keen on developing in the workplace.

    As we continue to explore and understand their experiences in developing our project over the coming months, we can start to challenge assertions and learn valuable lessons that can benefit all students and allow more to “choose life.”

     

    This blog is part of our series on commuter students. Click here to see the other articles in the series.


  • Understanding why students cheat and use AI: Insights for meaningful assessments

    Understanding why students cheat and use AI: Insights for meaningful assessments

    Key points:

    • Educators should build a classroom culture that values learning over compliance
    • 5 practical ways to integrate AI into high school science
    • A new era for teachers as AI disrupts instruction
    • For more news on AI and assessments, visit eSN’s Digital Learning hub

    In recent years, the rise of AI technologies and the increasing pressures placed on students have made academic dishonesty a growing concern. Students, especially in the middle and high school years, have more opportunities than ever to cheat using AI tools, such as writing assistants or even text generators. While AI itself isn’t inherently problematic, its use in cheating can hinder students’ learning and development.

