Tag: activity

  • We cannot address the AI challenge by acting as though assessment is a standalone activity

    How to design reliable, valid and fair assessment in an AI-infused world is one of those challenges that feels intractable.

    The scale and extent of the task, it seems, outstrips the available resource to deal with it. In these circumstances it is always worth stepping back to re-frame, perhaps reconceptualise, what the problem is, exactly. Is our framing too narrow? Have we succeeded (yet) in perceiving the most salient aspects of it?

    As an educational development professional, seeking to support institutional policy and learning and teaching practices, I’ve been part of numerous discussions within and beyond my institution. At first, we framed the problem as a threat to the integrity of universities’ power to reliably and fairly award degrees and to certify levels of competence. How do we safeguard this authority and credibly certify learning when the evidence we collect of the learning having taken place can be mimicked so easily – and when the mimicry is so hard to detect, to boot?

    Seen this way the challenge is insurmountable.

    But this framing positions students as devoid of ethical intent, love of learning for its own sake, or capacity for disciplined “digital professionalism”. It also absolves us of the responsibility of providing an education which results in these outcomes. What if we frame the problem instead as a challenge of AI to higher education practices as a whole and not just to assessment? We know the use of AI in HE ranges widely, but we are only just beginning to comprehend the extent to which it redraws the basis of our educative relationship with students.

    Rooted in subject knowledge

    I’m finding that some very old ideas about what constitutes teaching expertise and how students learn are illuminating: the very questions that expert teachers have always asked themselves are in fact newly pertinent as we (re)design education in an AI world. This challenge of AI is not as novel as it first appeared.

    Fundamentally, we are responsible for curriculum design which builds students’ ethical, intellectual and creative development over the course of a whole programme in ways that are relevant to society and future employment. Academic subject content knowledge is at the core of this endeavour and it is this which is the most unnerving part of the challenge presented by AI. I have lost count of the number of times colleagues have said, “I am an expert in [insert relevant subject area], I did not train for this” – where “this” is AI.

    The most resource-intensive need is an expansion of subject content knowledge: every academic who teaches now needs subject content knowledge which encompasses the interplay between their field of expertise and AI, and specifically the use of AI in learning and professional practice in their field.

    It is only on the basis of this enhanced subject content knowledge that we can then go on to ask: what preconceptions are my students bringing to this subject matter? What prior experience and views do they have about AI use? What precisely will be my educational purpose? How will students engage with this through a newly adjusted repertoire of curriculum and teaching strategies? The task of HE remains a matter of comprehending a new reality and then designing for the comprehension of others. Perhaps the difference now is that the journey of comprehension is even more collaborative and even less finite than it once would have seemed.

    Beyond futile gestures

    All this is not to say that the specific challenge of ensuring that assessment is valid disappears. A universal need for all learners is to develop a capacity for qualitative judgement and to learn to seek, interpret and critically respond to feedback about their own work. AI may well assist in some of these processes, but developing students’ agency, competence and ethical use of it is arguably a prerequisite. In response to this conundrum, some colleagues suggest a return to the in-person examination – even as a baseline to establish in a valid way levels of students’ understanding.

    Let’s leave aside for a moment the argument about the extent to which in-person exams were ever a valid way of assessing much of what we claimed. Rather than focusing on how we can verify students’ learning, let’s emphasise more strongly the need for students themselves to be in touch with the extent and depth of their own understanding, independently of AI.

    What if we reimagined the in-person high stakes summative examination as a low-stakes diagnostic event in which students test and re-test their understanding, capacity to articulate new concepts or design novel solutions? What if such events became periodic collaborative learning reviews? And yes, also a baseline, which assists us all – including students, who after all also have a vested interest – in ensuring that our assessments are valid.

    Treating the challenge of AI as though assessment stands alone from the rest of higher education is too narrow a frame – one that consigns us to a kind of futile authoritarianism which renders assessment practices performative and irrelevant to our and our students’ reality.

    There is much work to do in expanding subject content knowledge and in reimagining our curricula and reconfiguring assessment design at programme level such that it redraws our educative relationship with students. Assessment more than ever has to become a common endeavour rather than something we “provide” to students. A focus on how we conceptualise the trajectory of students’ intellectual, ethical and creative development is inescapable if we are serious about tackling this challenge in a meaningful way.

  • Subject-level insights on graduate activity

    We know a lot about what graduates earn.

    Earnings data—especially at subject level—has become key to debates about the value of higher education.

    But we know far less about how graduates themselves experience their early careers. Until now, subject-level data on graduate job quality—how meaningful their work is, how well it aligns with their goals, and whether it uses their university-acquired skills—has been missing from the policy debate.

    My new study (co-authored with Fiona Christie and Tracy Scurry and published in Studies in Higher Education) aims to fill this gap. Drawing on responses from the 2018-19 graduation cohort in the national Graduate Outcomes survey, we provide the first nationally representative, subject-level analysis of these subjective graduate outcomes.

    What we find has important implications for how we define successful outcomes from higher education—and how we support students in making informed choices about what subject to study.

    What graduates tell us

    The Graduate Outcomes survey includes a set of questions—introduced by HESA in 2017—designed to capture core dimensions of graduate job quality. Respondents are asked (around 15 months after graduation) whether they:

    • find their work meaningful
    • feel it aligns with their future plans
    • believe they are using the skills acquired at university

    These indicators were developed in part to address the over-reliance on income as a measure of graduate success. They reflect a growing international awareness that economic outcomes alone offer a limited picture of the value of education—in line with the OECD’s Beyond GDP agenda, the ILO’s emphasis on decent work, and the UK’s Taylor Review focus on job quality.

    Subject-level insights

    Our analysis shows that most UK graduates report positive early-career experiences, regardless of subject. Across the sample, 86 per cent said their work felt meaningful, 78 per cent felt on track with their careers, and 66 per cent reported using their degree-level skills.
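
    As a rough sketch of how such headline shares are produced from survey microdata – the file and column names below are hypothetical stand-ins, not the published Graduate Outcomes field names, and each indicator is assumed to be coded as a boolean (agree or strongly agree = True):

    ```python
    # Minimal sketch: headline shares of positive responses, overall and by
    # field of study. File name and column names are hypothetical.
    import pandas as pd

    responses = pd.read_csv("graduate_outcomes_2018_19.csv")

    indicators = ["meaningful", "on_track", "uses_skills"]
    for indicator in indicators:
        # Mean of a boolean column is the proportion answering positively.
        print(f"{indicator}: {responses[indicator].mean():.0%} positive")

    # The same shares broken down by field of study.
    by_subject = responses.groupby("subject")[indicators].mean()
    print(by_subject.sort_values("uses_skills", ascending=False))
    ```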

    These patterns generally hold across disciplines, though clear differences emerge. The chart below shows the raw, unadjusted proportion of graduates who report positive outcomes. Graduates from vocational fields—such as medicine, subjects allied to medicine, veterinary science, and education—tend to report particularly strong outcomes. For instance, medicine and dentistry graduates were 12 percentage points more likely than average to say their work was meaningful, and over 30 points more likely to report using the skills they acquired at university.

    However, the results also challenge the narrative that generalist or academic degrees are inherently low value. As you can see, most subject areas—including history, languages, and the creative arts, often targeted in these debates—show strong subjective outcomes across the three dimensions. Only one field, history and philosophy, fell slightly below the 50 per cent threshold on the skills utilisation measure. But even here, graduates still reported relatively high levels of meaningful work and career alignment.

    Once we adjusted for background characteristics—such as social class, gender, prior attainment, and institutional differences—many of the remaining gaps between vocational and generalist subjects narrowed and were no longer statistically significant.
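
    The adjustment step can be pictured as regressing each indicator on field of study plus the background controls. Below is a minimal sketch using statsmodels’ formula interface – the column names are the hypothetical ones above, and the specification is illustrative rather than the study’s actual model:

    ```python
    # Minimal sketch of covariate adjustment via logistic regression.
    # Column names are hypothetical; the paper's actual specification,
    # controls, and estimation choices may differ.
    import pandas as pd
    import statsmodels.formula.api as smf

    responses = pd.read_csv("graduate_outcomes_2018_19.csv")
    responses["meaningful"] = responses["meaningful"].astype(int)  # 0/1 outcome

    model = smf.logit(
        "meaningful ~ C(subject) + C(social_class) + C(gender)"
        " + prior_attainment + C(provider)",
        data=responses,
    ).fit()

    # Subject coefficients that shrink towards zero (and lose significance)
    # once controls are included correspond to the narrowing gaps described
    # in the text.
    print(model.summary())
    ```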

    [Chart: raw proportion of 2018-19 graduates who agree or strongly agree that their current work is meaningful, on track, and uses their skills, by field of study (N = 67,722)]

    Employment in a highly skilled occupation—used by the Office for Students (OfS) as a key regulatory benchmark—was not a reliable predictor of positive outcomes. This finding aligns with previous HESA research and raises important questions about the appropriateness of using occupational classification as a proxy for graduate success at the subject level.

    Rethinking what we measure and value

    These insights arrive at a time when the OfS is placing greater emphasis on regulating equality of opportunity and ensuring the provision of “full, frank, and fair information” to students. If students are to make informed choices, they need access to subject-level data that reflects more than salary, occupational status, or postgraduate progression. Our findings suggest that subjective outcomes—how graduates feel about their work—should be part of that conversation.

    For policymakers, our findings highlight the risks of relying on blunt outcome metrics—particularly earnings and occupational classifications—as indicators of course value. Our data show that graduates from a wide range of subjects—including those often labelled as “low value”—frequently go on to report meaningful work shortly after graduation that aligns with their future plans and makes use of the skills they developed at university.

    And while job quality matters, universities should not be held solely accountable for outcomes shaped by employers and labour market structures. Metrics and league tables that tie institutional performance too closely to job quality risk misrepresenting what higher education can influence. A more productive step would be to expand the Graduate Outcomes survey to include a wider range of job quality indicators—such as autonomy, flexibility, and progression—offering a fuller picture of early career graduate success.

    A richer understanding

    Our work offers the first nationally representative, subject-level insight into how UK graduates evaluate job quality in the early stages of their careers. In doing so, it adds a missing piece to the value debate—one grounded not just in earnings or employment status, but in graduates’ own sense of meaning, purpose, and skill use.

    If we are serious about understanding what graduates take from their university experience, it’s time to move beyond salary alone—and to listen more carefully to what graduates themselves are telling us.

    DK notes: Though the analysis that Brophy et al have done (employing listwise deletion, examining UK domiciled first degree graduates only) enhances our understanding of undergraduate progression and goes beyond what is publicly available, I couldn’t resist plotting the HESA public data in a similar way, as it may be of interest to readers:

    [Interactive chart: the HESA public data plotted in a similar way]

  • Lawsuit challenges Trump ICE raid policy, citing LAUSD activity

    The Trump administration’s Immigration and Customs Enforcement policy allowing ICE raids on school grounds and other sensitive locations was challenged in a lawsuit filed this week on behalf of an Oregon-based Latinx organization and faith groups from other states. 

    The lawsuit cites ICE activity at two Los Angeles elementary schools last month, as well as parents’ fears of sending their children to school in other locations across the country. 

    “Teachers cited attendance rates have dropped in half and school administrators saw an influx of parents picking their children up from school in the middle of the day after hearing reports that immigration officials were in the area,” said the lawsuit filed April 28 by the Justice Action Center and the Innovation Law Lab. It was filed in the U.S. District Court for the District of Oregon, Eugene Division.

    The two organizations filed on behalf of Oregon’s farmworker union Pineros y Campesinos Unidos del Noroeste, whose members say they are afraid to send their children to school, per the draft complaint. The farmworker union’s members, especially those who are mothers, say their livelihood depends on sending their children to school during the day while they work.

    “They now must choose between facing the risk of immigration detention or staying at home with their children and forfeiting their income,” the lawsuit said. One of the members of the union said her children were “afraid of ICE showing up and separating their family.” 

    The lawsuit challenges a Department of Homeland Security directive, issued one day after President Donald Trump’s inauguration, that undid three decades of DHS policy that prevented ICE from raiding sensitive locations like schools, hospitals and churches. 

    “Criminals will no longer be able to hide in America’s schools and churches to avoid arrest,” a DHS spokesperson said in a January statement on the order. “The Trump Administration will not tie the hands of our brave law enforcement, and instead trusts them to use common sense.”

    When asked for comment on the lawsuit, an ICE spokesperson said the agency does not comment on pending or ongoing litigation. 

    Monday’s lawsuit and others filed against the directive say the change in policy is impacting students’ learning and districts’ ability to carry out their work.

    A lawsuit filed in February by Denver Public Schools said the DHS order “gives federal agents virtually unchecked authority to enforce immigration laws in formerly protected areas, including schools.” It sought a temporary restraining order prohibiting ICE and Customs and Border Protection from enforcing the policy. 

    According to the American Immigration Council, over 4 million U.S. citizen children under 18 years of age lived with at least one undocumented parent as of 2018. A 2010 study cited by the council found that immigration-related parental arrests led to children experiencing at least four adverse behavioral changes in the six months following the incidents.

    Another study cited by the organization, conducted in 2020, found that school districts in communities with a large number of deportations saw worsened educational outcomes for Latino students.

  • Exploring the explosion in franchise and partnership activity

    There’s a clear need for more regulatory oversight of franchise and partnership teaching arrangements, but – as regulators are finding – there’s no easy way to track which students are being registered, taught, or physically located at which provider.

    Knowing where students study feels like a straightforward matter – indeed “where do HE students study” is one of the top-level questions posed in HESA’s Student open data collection. If you click on that, it takes you to an up-to-date (2022-23 academic year) summary of student numbers by registering provider.

    But as we’ve learned from concerns raised by the Office for Students, the Student Loans Company, the National Audit Office, and (frankly) Wonkhe there is a bit more to it than that. And it is not currently possible to unpick this to show the number of students at each provider – for any given value of “at” other than registering – using public data. But we can do it for the number of courses.

    The forgotten open data set

    Yes – I’ve started the year abusing the Unistats open data release (the clue: it’s the only open data release that lets you find details of courses). And you sort of, kind of, can unpick some of these relationships using it. After a fashion.

    It is worth unpacking our terms a bit:

    • A student’s registration provider is the provider that returns that student as part of its official data returns. If the registration provider has degree awarding powers, this is generally the provider that awards the qualification the student is working towards.
    • A student’s teaching provider is the provider where a student is actually taught – where this is different from the registering provider, it usually happens via a partnership arrangement of some sort.
    • A student’s location provider is the actual place a student is taught – usually this is the same as the teaching provider, but not always. For example, a “university centre” based at an FE college counts as the university in question doing the teaching, but the location would be the FE provider that hosts the centre.
    • We’ve also got to deal with the idea of an awarding provider: the place that actually awards the degree the student is working towards. In the main this is the same as the registering provider, but where the registering provider can’t award the degree in question it will be someone else by arrangement.

    How does this appear in Unistats? You are probably familiar with the notion of the UK Provider Reference Number (UKPRN): a unique identifier for educational settings. In the unistats data we get something called PUBUKPRN, which identifies where a course is primarily taught. We get something called UKPRN, which identifies where students on a given course are registered. And we get LOCUKPRN, which identifies the location a student is taught at – where this is a location with its own UKPRN that is not the same as the PUBUKPRN.

    Limits to sector growth visualisation

    What’s missing is a UKPRN for the awarding body – we don’t get one in the public data. It is collected (as OTHERINST) and used on the Discover Uni website, but it is not published for me to mess with. Not yet, anyway.

    So what I can show you is the number of courses at each combination of registering, teaching, and location provider. This shows instances where students may be registered at one provider but taught at another (your classic partnership arrangement), and the evolving practice of declaring unilateral branch campuses (where students are registered and taught at one provider, but located at a different one).
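
    A minimal sketch of that aggregation, assuming the Unistats course records have been flattened into a single table containing the three UKPRN columns described above (the file name and flattened layout are my assumptions, not the published schema):

    ```python
    # Minimal sketch: count courses per (registering, teaching, location)
    # provider combination. File name and flattened layout are assumptions.
    import pandas as pd

    courses = pd.read_csv("unistats_courses.csv")

    # Where LOCUKPRN is blank, the location is the teaching provider itself.
    courses["LOCUKPRN"] = courses["LOCUKPRN"].fillna(courses["PUBUKPRN"])

    combos = (
        courses.groupby(["UKPRN", "PUBUKPRN", "LOCUKPRN"])
        .size()
        .reset_index(name="courses")
        .sort_values("courses", ascending=False)
    )

    # Keep combinations where registration, teaching, and location are not
    # all the same provider – i.e. some kind of partnership or branch-campus
    # arrangement.
    partnerships = combos[
        (combos["UKPRN"] != combos["PUBUKPRN"])
        | (combos["PUBUKPRN"] != combos["LOCUKPRN"])
    ]
    print(partnerships.head(20))
    ```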

    That latter one explains the explosion of London “campuses” that are really independent providers (which may cater to multiple institutions in a similar way). The whole thing gives some indication of where a given provider is involved in franchise/partnership activity – but only where this is shown on unistats.

    What I found

    First up – sorted by registering provider. You can filter by registering provider if you don’t want the whole list (whyever not?), and I’ve included a wildcard filter for course titles. This is instead of a subject filter – each course in unistats is meant to have a subject associated with it, but this tends not to happen for the kinds of courses we are interested in here:

    [Interactive table: courses by registering/teaching/location provider combination, sorted by registering provider]

    And the same thing, sorted (and searchable) by teaching provider:

    [Interactive table: the same data, sorted and searchable by teaching provider]

    Many limitations, but a space to keep asking

    The limitations here are huge. What we really want is the number of students registered on each course – we can get this at various levels of aggregation, but not reliably at a single-course, single-cohort level. HESA’s policy of rounding and aggregating to avoid identifying individual students (and, of course, the historic nature of much of the data presented on unistats) means that most of the information in the data set (entry qualifications, NSS results, graduate destinations) is combined across multiple years and numerous related subjects.

    In the usual unistats fashion, “courses” are a unique combination of qualification aim, title, and mode for each location. So the handful of remaining providers that offer defined “joint” degrees (two or more subjects) look spectacularly busy. And, as always, the overall quality of the data isn’t brilliant, so there will be stuff that doesn’t look right (pro tip: yes, tell me, but also tell HESA).

    Here, then, is a partial explanation as to why regulators and others have been slow to respond to the growth in franchise, partnership, and other joint provision: almost by definition, the novel things providers are up to don’t show up in data collection. That’s kind of the point.

    But is a simple list of available courses, where they are taught, what (and whose) awards they lead to, what subjects they cover, and how many students are on each too much to ask? It appears so.
