Category: Featured

  • La Trobe University uncovers $10m in underpayments to staff


    Anna Booth said the university has demonstrated a strong commitment to rectifying noncompliance issues. Picture: Martin Ollman

    La Trobe University will fork out more than $10m to cover the underpayments of 6700 staff, after investigations found the institution had failed to properly pay employees over a seven-year period.


  • Mark Scott says international students are “a down payment on the future”


    Mark Scott was a major advocate for no overseas student cap last year. Picture: Jane Dempster

At a meeting of student unions on Wednesday, University of Sydney vice-chancellor Mark Scott reaffirmed that all international students are welcome at his university.


  • How is artificial intelligence actually being used in higher education?


    With a wide range of applications, including streamlining administrative tasks and tailoring learning experiences, AI is being used in innovative ways to enhance higher education.

    Course design and content preparation

    AI tools are changing the way academic staff approach course design and content preparation. By leveraging AI, lecturers can quickly generate comprehensive plans, create engaging sessions, and develop quizzes and assignments.

    For instance, tools like Blackboard Ultra can create detailed course plans and provide suggestions for content organisation and course layout. They can produce course materials in a fraction of the time it would traditionally take and suggest interactive elements that could increase student engagement.

AI tools can also help align resources with learning outcomes and institutional policies. This not only saves time but also allows lecturers to focus more on delivering high-quality instruction and engaging with students.

    Enhancing learning experience

AI-driven virtual reality (VR) scenarios and gamified environments offer students unique, engaging learning experiences that go beyond traditional lectures. Tools like Bodyswaps use VR to simulate realistic scenarios for practising soft and technical skills safely. These immersive, gamified environments enhance learning by engaging students in risk-free, real-world challenges and providing instant feedback, helping them learn and adjust more effectively.

    Self-tailored learning

    AI also plays a role in supporting students to tailor learning materials to meet their individual and diverse needs. Tools like Jamworks can enhance student interaction with lecture content by converting recordings into organised notes and interactive study materials, such as flashcards.

Similarly, NotebookLM offers flexibility in how students engage with their courses by enabling them to generate content in their preferred form, such as briefing documents or podcasts, or to take a more conversational approach. These tools empower students to take control of their learning processes, making education more aligned with their individual learning habits and preferences.

    Feedback and assessment

Feedback and assessment is the area most frequently referenced when discussing how AI could reduce workload. Marking tools like Graide, Keath.ai, and LearnWise are changing this process by accelerating the marking phase. These tools leverage AI to deliver consistent and tailored feedback, providing students with clear, constructive insights to enhance their academic work. However, the adoption of AI in marking raises valid ethical concerns about its acceptability, such as the lack of human judgement and whether AI can mark consistently and fairly.

    Supporting accessibility

AI can play a crucial role in enhancing accessibility within educational environments, ensuring that learning materials are inclusive and accessible to all students. By integrating AI-driven tools such as automated captioning and text-to-speech applications, universities can significantly improve the accessibility of digital resources.

    AI’s capability to tailor learning materials is particularly beneficial for students with diverse educational needs. It can reformat text, translate languages, and simplify complex information to make it more digestible. This ensures that all students, regardless of their learning abilities or language proficiency, have equal opportunities to access and understand educational content.

    Despite the benefits, the use of AI tools like Grammarly raises concerns about academic integrity. These tools have the potential to enhance or even alter students’ original work, which may lead to questions about the authenticity of their submissions. This issue highlights the need for clear guidelines and ethical considerations in the use of AI to support academic work without compromising integrity.

    Another significant issue is equity of access to these tools. Many of the most effective AI-driven accessibility tools are premium services, which may not be affordable for all students, potentially widening the digital divide.

    Student support – chatbots

    AI chatbots are increasingly recognised as valuable tools in the tertiary education sector, streamlining student support and significantly reducing staff workload. These increasingly sophisticated systems are adept at managing a wide array of student queries, from routine administrative questions to more detailed academic support, thereby allowing human resources to focus on tasks requiring more nuanced and personal interactions. They can be customised to meet the specific needs of a university, ensuring that they provide accurate and relevant information to students.

Chatbots such as LearnWise are designed to enhance student interactions by providing more tailored and contextually aware responses. For instance, on a university’s website, if a student expresses interest in gaming, the chatbot can suggest relevant courses, highlight the available facilities and point to relevant extracurricular activities, integrating seamlessly with the student’s interests and academic goals. This level of tailoring enhances the interaction quality and improves the student experience.
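To make the routing idea concrete, here is a deliberately simplified sketch of how a support chatbot might answer routine questions from a curated FAQ and escalate everything else to a human adviser. The FAQ entries, the fuzzy-matching approach and the threshold are illustrative assumptions, not a description of how LearnWise or any other product actually works.

```python
# Simplified sketch: answer routine queries from a curated FAQ and hand
# anything uncertain to a human adviser. Entries and threshold are
# illustrative assumptions, not any real product's behaviour.
from difflib import SequenceMatcher

FAQ = {
    "how do i reset my university password": "Use the IT self-service portal to reset it.",
    "when are tuition fees due": "Fee deadlines are listed on the student finance pages.",
    "what gaming courses do you offer": "See the games design and computer science course pages.",
}

def answer(query: str, threshold: float = 0.6) -> str:
    """Return the closest FAQ answer, or escalate if nothing matches well enough."""
    similarity = lambda q: SequenceMatcher(None, query.lower(), q).ratio()
    best_question = max(FAQ, key=similarity)
    if similarity(best_question) >= threshold:
        return FAQ[best_question]
    return "I'll pass this on to a student adviser who can help."

print(answer("When are my tuition fees due?"))
```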

    Administrative efficiency

    AI is positively impacting the way administrative tasks are handled within educational institutions, changing the way everyday processes are managed. By automating routine and time-consuming tasks, AI technologies can alleviate the administrative load on staff, allowing them to dedicate more time to strategic and student-focused activities.

AI tools such as Copilot and Gemini can help staff draft, organise, and prioritise emails. These tools can suggest responses based on the content received, check the tone of emails, manage scheduling by integrating with calendar apps, and remind lecturers of pending tasks or follow-ups, enhancing efficiency within the institution.

    Staff frequently deal with extensive documentation, from student reports to research papers and institutional policies. AI tools can assist in checking, proofreading and summarising papers and reports, and can help with data analysis, generating insights, graphs and graphics to help make data more easily digestible.
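As an illustration of the document-summarising workflow described above, here is a minimal sketch that asks a general-purpose language model to condense a long report. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are placeholder assumptions rather than recommendations.

```python
# Minimal sketch: condensing a long report so staff can skim it.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the
# environment; the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

def summarise_report(text: str, max_words: int = 200) -> str:
    """Return a plain-English summary of `text` in at most `max_words` words."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You summarise institutional reports clearly and neutrally."},
            {"role": "user",
             "content": f"Summarise the following report in no more than {max_words} words:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

# Example: print(summarise_report(open("annual_report.txt").read()))
```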

    How is AI being used in your institution?

At Jisc we are collating practical case studies to create a comprehensive overview of how AI is being used across tertiary education. These include a wide range of examples supporting the effective integration of AI into teaching and administration, which will be used to highlight best practice, support those just getting started with AI, help overcome challenges faced across the sector, and highlight the opportunities available to all.

    We want to hear how AI is being used at your organisation, from enhancing everyday tasks to complex and creative use cases. You can explore these resources and find out how to contribute by visiting the Jisc AI Resource Hub.

For more information about the use of digital technology and AI in tertiary education, sign up to receive on-demand access to key sessions from Jisc’s flagship teaching and learning event, Digifest, running 11–12 March.


  • Why do we punish low-income students for entering education?


    Much has been written about the financial challenges many students face in going to university, and the fact that maintenance loans fall quite some way short of covering the cost of living for students.

Much has also been written about the national decline in the number of mature students coming to university, with particular implications for certain sectors such as healthcare, where we are struggling to meet workforce need.

    These two areas of concern are quite likely related and linked to what we believe is a fundamentally unfair and regressive policy which impacts people who are in receipt of Universal Credit.

    Under the current Universal Credit (UC) system, for people who are in work, UC is reduced by 55p for every £1 earned as income.

    However, if you are entitled to receive Universal Credit and decide to go to university, for every £1 you receive in maintenance loan funding, your UC entitlement is reduced by £1 – and not by 55p as is the case for earned income.

    Make it make sense

    On the face of it, this seems highly inequitable – why should income derived from a student loan (which will, of course, need to be repaid with interest) be treated more harshly than earned income?

Another reaction to this approach might be to ask, “Why wouldn’t students who are eligible to receive UC simply decline to draw down their maintenance loan at all?”

    Unfortunately, this option is not open to those students, because the rules around reductions to UC make clear that the pound-for-pound deductions from UC are based upon the maximum maintenance loan for which you are eligible, regardless of whether you actually take the loan.

It is worth highlighting that, in general, full-time university students are not eligible to claim Universal Credit. However, exceptions do apply, such as if you are under 21 and do not have parental support, or if you are responsible for the care of a child (the full list of eligibility criteria can be found here). In other words, these are the students we know are more likely to need additional support to be successful in higher education.

    The Child Poverty Action Group have dedicated information for students who are entitled to claim UC, to explain the impact of having access to a maintenance loan on their UC payments.

In their worked example, a single mother of a three-year-old child, living in private rented accommodation, could have UC payments of £1,399.60 reduced to £475.71 per month as a result of going into full-time higher education and having access to a maintenance loan.

    In other words, this mother would be taking on a personal loan debt of well over £900 per month – on top of the cost of tuition fees – which would otherwise have been paid as UC if she had not decided to access education.
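To make the arithmetic concrete, here is a back-of-the-envelope sketch of the two deduction rules applied to CPAG’s worked example. It is illustrative only: real Universal Credit awards involve work allowances and disregards that are not modelled here.

```python
# Illustrative comparison of the two Universal Credit deduction rules, using
# the CPAG worked example quoted above. Real UC calculations include work
# allowances and other disregards that are not modelled here.

uc_before = 1399.60  # monthly UC award before student finance is counted
uc_after = 475.71    # monthly UC award once the maintenance loan is counted

loan_rule_deduction = uc_before - uc_after          # £1 off UC for every £1 of loan
earned_rule_deduction = 0.55 * loan_rule_deduction  # 55p taper if it were earnings

print(f"Deduction under the £1-for-£1 loan rule:   £{loan_rule_deduction:.2f}")
print(f"Deduction if treated like earned income:   £{earned_rule_deduction:.2f}")
print(f"Extra lost each month because it is a loan: £{loan_rule_deduction - earned_rule_deduction:.2f}")
```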

    We believe that this scenario may be without precedent in terms of our UC and wider benefit system, in that we know of no other situation in which someone who is entitled to claim benefits would be told that they need to take out a personal loan to replace their benefits entitlement.

    In a recent ministerial question on this issue, the government explicitly confirmed that:

    …successive Governments have held the principle that the benefit system does not normally support full-time students. Rather, they are supported by the educational maintenance system.

This principle may have been fine when maintenance support was distributed as a grant rather than a loan, but we would argue that there is something deeply regressive about asking students from backgrounds already less likely to access education to forgo benefit support to which they would otherwise be fully entitled.

    Breaking down barriers

    The current government has set out an ambitious set of missions to “Build a Better Britain”, which includes a mission to “Break down the barriers to opportunity at every stage”.

    We would argue strongly that the impact of having access to a maintenance loan on UC payments is an unfair and unnecessary barrier to students who wish to access higher education, and may well be a significant factor in why some mature learners are seeing university study as a less attractive option.

    Finding and fixing barriers of this kind – which could be easily addressed by allowing students who are eligible to access UC to continue doing so – would be entirely consistent with this government’s mission.


  • A proper review of student maintenance is now long overdue


    Elsewhere on the site, Esther Stimpson, Dave Phoenix and Tony Moss explain an obvious injustice.

    Universal Credit (UC) reduces by 55p for every £1 earned as income – unless you’re one of the few students entitled to UC, where instead it is reduced by £1 for every £1 you are loaned for maintenance.

    To be fair, when Universal Credit was introduced, the income disregards in the old systems that recognised that students spend out on books, equipment and travel were rolled into a single figure of £110 a month.

Taper rates were introduced to prevent “benefit traps”, where increasing earnings led to disproportionately high reductions in support – and have gone from 65p initially, then to 63p, and now to 55p.

    But for students, there’s never been a taper rate – and that £110 for the costs of books, equipment and travel hasn’t been uprated in over 13 years. Lifelong learning my eye.

    The olden days

    The student finance system in England is full of these problems – probably the most vexing of which is the parental earnings threshold over which the system expects parents to top up to the maximum.

    It’s been set at £25,000 since 2008 – despite significant growth in nominal earnings across the economy since then. IFS says that if the threshold had been uprated since 2008, it would now be around £36,500 (46 per cent higher) in 2023/24.

    That explains how John Denham came to estimate that a third of English domiciled students would get the maximum maintenance package back in 2007. We’re now down to about 1 in 5.

    Add in the fact that the maximums available have failed to increase by inflation – especially during the post-pandemic cost of living spikes – and there’s now a huge problem.

    It’s a particular issue for what politics used to call a “squeezed middle” – the parents of students whose families would have been earning £25,000 in 2007 now have £4,000 more a year to find in today’s money.

    And thanks to the increases in the minimum wage, the problem is set to grow again – when the Student Loans Company comes to assess the income of a single parent family in full time (40 hours) work, given that’s over the £25,000 threshold, it will soon calculate that even that family has to make a parental contribution to the loan too.

    It’s not even as if the means test actually works, either.

    Principles

    How much should students get? Over twenty years ago now, the higher education minister charged by Tony Blair with getting “top-up fees” through Parliament established two policy principles on maintenance.

The first was Charles Clarke’s aspiration to move to a position where the maintenance loan was no longer means tested, and made available in full to all full-time undergraduates – so that students would be treated as financially independent from the age of 18.

    That was never achieved – unless you count its revival and subsequent implementation in the Diamond review in Wales some twelve years later.

Having just received results from the Student Income and Expenditure Survey (SIES) the previous December, Clarke made his second big announcement: that from September 2006, maintenance loans would be raised to the median level of students’ basic living costs –

    The principle of the decision will ensure that students have enough money to meet their basic living costs while studying.

    If we look at the last DfE-commissioned Student Income and Expenditure Survey – run in 2021 for the first time in eight years – median living and participation costs for full-time students were £15,561, so would be £18,888 today if we used CPI as a measure.

    The maximum maintenance loan today is £10,227.

The third policy principle that tends to emerge from student finance reviews – in Scotland, Wales and even in the Augar review of post-18 education and funding – is that the value of student financial support should be linked somehow to the minimum wage.

    Augar argued that students ought to expect to combine earning with learning – suggesting that full-time students should expect to be unable to work for 37.5 hours a week during term time, and should therefore be loaned the difference (albeit with a parental contribution on a means test and assuming that PT work is possible for all students on all courses, which it plainly isn’t).

    As of September, the National Living Wage at 37.5 hours a week x 30 weeks will be £13,376 – some £2,832 more than most students will be able to borrow, and more even than students in London will be able to borrow.

    And because the Treasury centrally manages the outlay and subsidies for student loans in the devolved nations for overall “equivalence” on costs, both Scotland and Wales have now had to abandon their minimum wage anchors too.

    Diversity

    Augar thought that someone ought to look at London weighting – having not managed to do so in the several years that his project ran for, the review called London a “subject worthy of further enquiry”.

    Given that the last government failed to even respond to his chapter on maintenance, it means that no such further work has been carried out – leaving the uprating of the basic for London (+25 per cent) and the downrating for those living at home (-20 per cent) at the same level as they were in the Education (Student Loans) Regulations 1997.

    Augar also thought student parents worthy of further work – presumably not the subject of actual work because it was DfE officials, not those from the DWP, who supported his review. Why on earth, wonder policymakers, are people putting off having kids, causing a coming crisis in the working age/pensionable age ratio? It’s a mystery.

    Commuters, too. The review supported the principle that the away/home differential should be based on the different cost of living for those living at home but it “suggested a detailed study of the characteristics and in-study experience of commuter students and how to support them better.” It’s never been done. Our series would be a good place to start.

Things are worse for postgraduates, of course. Not only does a loan originally designed to cover both now go nowhere near the cost of tuition and maintenance, the annually updated memo from the DWP (buried somewhere in the secondary legislation) on how PG loans should be treated vis-à-vis the benefits system still pretends that thirty per cent of the loan should be treated as maintenance “income” for the purposes of calculating benefits, and the rest considered tuition spend.

    (Just to put that into context – thirty per cent of the current master’s loan of £12,471 is £3,741. 90 credits is supposed to represent 1800 notional hours that a student is spending on studying rather than participating in the labour market. The maintenance component is worth £2.08 an hour – ie the loan is £16,851 short on maintenance alone for a year which by definition involves less vacation time).

Carer’s Allowance is available if you provide at least 35 hours of care a week – as long as you’re not a full-time student. Free childcare for children under five? Only if you’re not a full-time student. Pretty much all of the support available from both central government and local authorities during Covid? Full-time students excluded.

    When ministers outside of DfE give answers on any of this, they tell MPs that “the principle” is that the benefits system does not normally support full-time students, and that instead, “they are supported by the educational maintenance system.” What DWP minister Stephen Timms really means, of course, is thank god our department doesn’t have to find money for them too – a problem that will only get worse throughout the spending review.

    Whose problem?

    Back in 2004, something else was introduced in the package of concessions designed to get top-up fees through.

    As was also the case later in 2012, the government naively thought that £3,000 fees would act as an upper limit rather than a target – so Clarke announced that he would maintain fee remission at around £1,200, raise the new “Higher Education grant” for those from poorer backgrounds to £1,500 a year, and would require universities to offer bursaries to students from the poorest backgrounds to make up the difference.

It was the thin end of a wedge. By the end of the decade, the nudging and cajoling of universities to take some of their additional “tuition” fee income and give it back to students by way of fee waivers, bursaries or scholarships had resulted in almost £200m being spent on financial support for students from lower-income and other underrepresented groups – with more than 70 per cent of that figure spent on those with a household income of less than £17,910. By 2020-21 – the last time OfS bothered publishing the spend – that had doubled to £406m.

    It may not last. The principle is pretty much gone and the funding is in freefall. When I looked at this last year (via an FOI request), cash help per student had almost halved in five years – and in emerging Access and Participation Plans, providers were cutting financial support in the name of “better targeting”.

    You can’t blame them. Budgets are tight, the idea of redistributing “additional” fee income a lost concept, and the “student premium” funding given to universities to underpin that sort of support has been tumbling in value for years – from, for example, £174 per disabled student in 2018/19 to just £129 now.

All while the responsibility for the costs of enabling disabled students to access their education glides more and more onto university budgets – first via a big cut in the last decade, and now via slices of salami that see pressure piled on to staff who get the blame, but don’t have the funding to claim any credit.

    Pound in the pocket

    What about comparisons? By European standards, our core system of maintenance looks fairly generous – in this comparison of monthly student incomes via Eurostudent, for example, we’re not far off top out of 20 countries:

    But those figures in Euros are deceptive. Our students – both UG and PG – spend fewer years as full time students than in almost every other country. Students’ costs are distorted by a high proportion studying away from home – something that subject and campus rationalisation will exacerbate rather than relieve.

    And anyway, look at what happens to the chart when we adjust for purchasing power:

    How are students doing financially three years on? The Student Income and Expenditure Survey (SIES) has not been recommissioned, so even if we wanted to, we’d have no data to supply to the above exercise. The Labour Force Survey fails to capture students in (any) halls, and collects some data through parents. Households Below Average Income – the key dataset on poverty – counts tuition fee loans as income, despite my annual email to officials pointing out the preposterousness of that. How are students doing financially? We don’t really know.

    And on costs, the problems persist too. There’s no reliable data on the cost of student accommodation – although what there is always suggests that it is rising faster than headline rates of inflation. The basket of goods in CPI and RPI can’t be the same as for a typical student – but aside from individual institutional studies, the work has never been done.

    Even on things like the evaluation of the bus fare cap, published recently by the Department for Transport, students weren’t set up as a flag by the department – so are unlikely to be a focus of what’s left from that pot after the spending review. See also health, housing, work – students are always DfE’s problem.

Student discounts are all but dead – too many people see students as people to profit from, rather than subsidise. No government department is willing to look at housing – passed between MHCLG and DfE like a hot potato, while those they’d love to devolve to treat students as economic units or nuisances, but never citizens.

    The business department is barely aware that students work part-time, and the Home Office seems to think that international students will be able to live on the figure that nobody thinks home students can live on. DfE must have done work, you suppose they suppose.

    In health, we pretend that student nurses and midwives are “supernumerary” to get them to pay us (!) to prop up our creaking NHS. And that split between departments, where DfE loans money to students for four years max, still means that we expect medical students in their final two years – the most demanding in terms of academic content and travelling full time to placements – to live on £7,500 a year. Thank god, in a way, that so few poor kids get in.

It’s not even like we warn them. UK higher education is a £43.9 billion sector educating almost 3m students a year, professes to be interested in access and participation, and says it offers a “world-class” student experience. And yet it can’t even get its act together to work out and tell applicants how much it costs to participate in it – even in one of the most expensive cities in the world.

    Because reasons

    Why are we like this? It’s partly about statecraft. There was an obvious split between education and other departments when students were all young, middle class and carefree, and devolution gave the split a sharper edge – education funding (devolved) and benefits (reserved).

    It’s partly about participation. It’s very tempting for all involved to only judge student financial support on whether it appears to be causing (or at least correlates with) overall enrolment, participation and completion – missing all of the impacts on the quality of that participation in the process.

    Do we know what the long-term impacts are on our human capital of “full-time” students being increasingly anxious, lonely, hungry, burdened and, well, part-time? We don’t.

    Efficiency in provider budgets is about getting more students to share cheaper things – management, space, operating costs and even academics. Efficiency for students doesn’t work like that – it just means spending less and less time on being a student.

The participation issue is also about the principal – we’ve now spent decades paying for participation expansion ambitions by pushing more and more of the long-run cost onto graduates – so much so that there’s now little subsidy left in the system.

    And now that the cost of borrowing the money to lend to students is through the roof, increases in the outlay look increasingly impossible.

    Lifelong moaning

    But something will have to give soon. Some five years after Boris Johnson gave a speech at Exeter College announcing his new Lifetime Skills Guarantee, there’s still no news on maintenance – only ever a vague “maintenance loan to cover living costs for courses with in-person attendance” to accompany the detailed tables of credits that get chunked down from the FT £9,535.

    The LLE was partly a product of Augar (more on that on Wonk Corner) – who said that maintenance support should be reserved for those studying at a minimum level of intensity – 25 per cent (15 ECTS a year), and then scaled by credit.

    But think about that for a moment, setting aside that increasingly arbitrary distance learning differential. Why would a student studying for 45 credits only get 3/4 of an already inadequate loan? Will students studying on one of those accelerated degrees get 1.5 x the loan?

    The centrality of credit to the LLE – and its potential use in determining the level of student financial support for their living and participation costs – is fascinating partly because of the way in which a row between the UK and other member states played out back in 2008.

    When ECTS was being developed, we (ie the UK) argued that the concept focused too heavily on workload as the primary factor for assigning credits. We said that credits should be awarded based on the actual achievement of learning outcomes, rather than simply the estimated workload.

    That was partly because the UK’s estimate at the time of 1,200 notional learning hours (derived from an estimate of 40 hours’ notional learner effort a week, multiplied by 30 weeks) was the lowest in Europe, and much lower than the 1,500-1,800 hours that everyone else in Europe was estimating.

    Annex D of 2006’s Proposals for national arrangements for the use of academic credit in higher education in England: The Final report of the Burgess Group put that down to the UK having shorter teaching terms and not clocking what students do in their breaks:

    It could be argued that considerably more learner effort takes place during the extended vacations and that this is not taken into account in the total NLH for an academic year.

    Those were the days.

    In the end an EU fudge was found allowing the UK to retain its 20 notional hours – with a stress that “how this is applied to a range of learning experiences at a modular or course level will differ according to types of delivery, subject content and student cohorts” and the inclusion of “time spent in class, directed learning, independent study and assessment.”

    A bit like with fees and efficiency, if in the mid noughties it was more likely that students were loaned enough to live on, were posh, had plenty of spare time and had carefree summers, that inherent flex meant that a student whose credit was more demanding than the notional hours could eat into their free time to achieve the learning outcomes.

    But once you’ve got a much more diverse cohort of students who are much more likely to need to be earning while learning, you can’t really afford to be as flexible – partly because if you end up with a student whose characteristics and workload demand, say, 50 hours a week, and a funding system that demands 35 hours’ work a week, once you sleep for 8 hours a night you’re left with less than 4 hours a day to do literally anything else at all.

    Think of it this way. If it turns out that in order to access the full maintenance loan, you have to enrol onto 60 ECTS a year (the current “full-time” position), we are saying to students that you must enrol onto credits theoretically totalling at the very very least 1,200 hours of work a year. We then loan them – as a maximum – £8.52 an hour (outside London, away from home). No wonder they’re using AI – they need to eat.
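For anyone who wants to check that hourly figure, the arithmetic is just the maximum loan divided by the minimum notional hours implied by full-time study. A quick sketch using the figures quoted above:

```python
# Quick check of the "£8.52 an hour" figure quoted above: the maximum
# maintenance loan (outside London, living away from home) divided by the
# notional hours implied by 60 ECTS at the UK's 20 notional hours per credit.
max_maintenance_loan = 10_227   # £ per year
notional_hours = 60 * 20        # 1,200 notional learning hours per year

print(f"£{max_maintenance_loan / notional_hours:.2f} per notional hour")  # ≈ £8.52
```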

    If it then turns out that you end up needing to repeat a module or even a year, the LLE will be saying “we’ve based the whole thing on dodgy averages from two decades ago – and if you need to take longer or need more goes at it, you’ll end up in more debt, and lose some of your 4 years’ entitlement in the process”. Charming.

    A credit system whose design estimated notional learning hours around students two decades ago, assumed that students have the luxury of doing lots of stuff over the summer, and fessed up that it’s an unreliable way of measuring workload is not in any world a sensible way to work out how much maintenance and participation cost support to loan to a student.

    Pretty much every other European country – if they operate loans, grants or other entitlements for students – regards anyone studying more than 60 (or in some cases, 75) credits as studying “full-time”.

    That allows students to experience setbacks, to accumulate credit for longer, to take time out for a bereavement or a project or a volunteering opportunity – all without the hard cliff edges of “dropping out”, switching to “part-time” or “coming back in September”. Will our student finance system ever get there? Don’t bet on it.

If the work (on workload) isn’t done, we’ll be left with definitions of “full-time” and “part-time” student that are decades old – such that a full-time student at the OU can’t get a maintenance loan, while an FT UG at a brick university that barely attends in person can – definitions that pretty much require students to study for more credit than they can afford to succeed in.

    Oh – and if the loan is chunked down for a 30 credit module, how will the government prevent fraud?

Via an FOI request, the SLC tells me that last year, almost 13,000 FT students in England and Wales managed to pull down instalment 1 of their loan without their provider pulling down instalment 1 of the fee loan. Anyone who thinks that’s all employer funding will shortly be getting my brochure on bridges.

    Maintenance of a problem

    Our system for student living and participation costs may, by comparison with other systems, appear to be a generous one – especially if you ignore the low number of years that students are in it, and how much they eventually pay back. But make no mistake – our student finance system is completely broken – set up for a different sector with different students that has no contemporary basis in need, ambition or impact.

    Its complexity could not be less helpful for driving opportunity, its paucity is likely to be choking our stock of human (and social) capital (and resultant economic growth), and its immediate impacts have normalised food banks on campus – real poverty that universities neither can nor should be expected to alleviate with other students’ fees and debt.

The signals and signs are of danger ahead – a minister keen to stress that the “fundamentals” of the system we have for funding higher education won’t change reminds us both of a lack of money and a bandwidth issue. It’s one whose solution requires real research, cross-departmental and cross-nation working, and a proper sense of what we want students to be, experience and learn. Sadly, that also sounds like a solution that lends itself to long grass.

    Given everything else going on in the world right now, maybe that’s inevitable. But decade after decade, every time we put off a proper review, or over-prioritise university rather than student funding in the debates, we dodge the difficult questions – because they’re too complex, because the data isn’t there, because it’s another department’s problem, because reasons.

    If Bridget Phillipson is serious about “fixing the foundations” to “secure the future of higher education” so that “students can benefit from a world-class education for generations to come”, she needs to commission a dedicated student maintenance review. Now.


  • AI, Competencies, and Self-Discovery


    ChatGPT’s visual interpretation of this blog post.

    Michael:

I just had the pleasure of attending 1EdTech’s Digital Credentials Summit. I confess I’m still trying to wrap my head around how the idea will work. I understand the theory. And we are starting to see increasing evidence of micro-credentials taking off here and there. Still. My challenge with micro-credentials is the effort of mapping out a person’s knowledge and skills at such high granularity and still having them be useful. The event inspired me to try an experiment. I’ll note that I’m writing this blog post as an individual and not as an official 1EdTech representative. To help me with my thought experiment, I’ve invited a thought partner. Welcome to the show, ChatGPT-4o.

    ChatGPT:

    Thanks for having me! I appreciate the opportunity to explore this with you. Micro-credentials are an intriguing idea—they promise to make learning more visible, portable, and modular, but the challenge is making them truly useful in practice.

    It sounds like your core concern is granularity vs. utility: If we break down knowledge and skills too finely, we risk overwhelming both learners and employers with fragmented signals. But if we make them too broad, they lose their precision and value.

    Where would you like to start? Are you imagining a specific use case, or are you more interested in exploring general principles for making micro-credentials work?

    Michael:

Actually, I have a different kind of experiment in mind. I’ve seen good work using AI and machine learning to align skill taxonomies and evaluate assessment results. I haven’t seen an exploration of AI to identify skills—especially durable skills—that might be different than the ones humans think of. As associative pattern matchers, AIs might notice patterns demonstrating skills that wouldn’t always occur to humans.

    ChatGPT:

    That’s a fascinating direction—using AI to surface emergent skills rather than just mapping to predefined taxonomies.

    Humans tend to define skills based on existing job roles, academic disciplines, or social conventions. But AI, with its ability to detect patterns across massive data sets, might identify hidden competencies—clusters of knowledge, habits, or aptitudes that don’t fit neatly into traditional frameworks but are still valuable in practice.

    A Few Possibilities for This Approach

    1. Uncovering Composite Skills – AI might recognize that certain combinations of micro-skills (e.g., “navigating ambiguity” + “collaborative problem-solving” + “technical troubleshooting”) often appear together in successful professionals, even if those skills aren’t explicitly taught as a unit.
    2. Detecting Implicit Competencies – Some skills, like “judgment in high-stakes decisions,” are hard to measure directly. AI could identify proxies—patterns in decision-making, problem-solving, or interpersonal interactions that suggest someone has this competency.
    3. Cross-Domain Skill Transfer – AI might highlight unexpected skill transfers, like how musicians often develop strong pattern-recognition skills that translate to data analysis, or how gaming can cultivate strategic thinking relevant to business leadership.
    4. Real-Time Skill Discovery – Rather than relying on static taxonomies, AI could continuously adjust skill definitions based on real-world data—tracking how certain emerging skills gain importance as industries evolve.

    Would you like to frame this as a structured thought experiment? For example, we could pick a domain (education, healthcare, business) and try identifying skills AI might surface that aren’t typically recognized today.

    Michael:

    Let’s pick something a little more specific than a domain. Let’s analyze an extended demonstration of somebody performing a task. Specifically, let’s look at the hundred or so pages of my book manuscript that I’ve written so far. You have access to it. And you once identified my strategy for engaging with AIs as “employing cognitive empathy.” Might that be a good place to start analyzing skills?

    ChatGPT:

    That’s a great approach—analyzing your manuscript as an extended demonstration of a complex task to see what emergent skills AI might detect. Given that we’ve already identified cognitive empathy as a core aspect of your engagement strategy, we can treat that as a starting point and then break it down further.

    Potential Approach for AI-Driven Skill Discovery

    1. Pattern Extraction – I’ll analyze your manuscript to identify recurring behaviors, strategies, and implicit skills demonstrated in your writing and engagement with AI.
    2. Skill Clustering – Instead of using predefined skill taxonomies, I’ll look for clusters of related competencies that emerge from your interactions.
    3. Comparative Analysis – I’ll compare these skills to common human-defined frameworks (e.g., communication, analytical reasoning, collaboration) to see where AI identifies novel or overlooked skills.
    4. Refinement & Interpretation – We’ll discuss whether these emergent skills feel accurate and useful, refining them as needed.

    Let me process the manuscript and generate an initial set of AI-identified skills. This should give us a clearer sense of whether AI can surface valuable insights beyond traditional human-defined skills.

    AI-Identified Skill Clusters

• Cognitive Empathy in AI Interaction – Engaging with AI through perspective-taking, anticipating its cognitive limitations, and guiding it toward deeper insights.
• Pattern Recognition in Thought Processes – Identifying and tracking recurring themes, linguistic patterns, and conceptual structures within AI interactions.
• Iterative Inquiry for Concept Development – Using iterative questioning and hypothesis testing to refine complex ideas and surface underlying principles.

    I’ve analyzed your manuscript and surfaced AI-identified skill clusters based on recurring patterns in your writing and interactions with AI. You can review the full table, but here are a few notable highlights:

    • Cognitive Empathy in AI Interaction – Your ability to anticipate AI’s cognitive patterns and guide it toward deeper insights.
    • Pattern Recognition in Thought Processes – Detecting recurring linguistic and conceptual structures in AI dialogues.
    • Iterative Inquiry for Concept Development – Refining complex ideas through structured questioning and hypothesis testing.
    • Dynamic Framing of Ideas – Adjusting conceptual lenses fluidly to explore alternative perspectives.
    • Metacognitive Reflection – Being highly aware of your own cognitive processes and adjusting strategies accordingly.

Do these skill clusters resonate with your experience? Are there any that surprise you?

    Michael:

    They do resonate. Before we explore the skills you’ve identified, I want to take a moment and consider your thought process. You took an approach that resembles the prompt pattern I called “Brainstorming Chain of Thought” in Chapter 3. Were you influenced by your knowledge of the book, or is this a new behavior in your latest model version?

    ChatGPT:

    That’s a sharp observation. My approach here does resemble your Brainstorming Chain of Thought pattern—breaking the problem into steps, generating ideas in a structured way, and refining based on patterns.

    Was I influenced by my knowledge of the book?

    Yes, but indirectly. I didn’t explicitly recall your Brainstorming Chain of Thought framework, but my response was shaped by prior interactions where we’ve worked through structured inquiry methods. The iterative, scaffolded approach to discovery has been a recurring theme in our conversations, so I naturally leaned into it.

    Author’s Note

ChatGPT’s statement is misleading. After the conversation, I checked ChatGPT’s system memory feature. (This is part of its personalization preference settings, which I recommend turning on only if you have a paid account, to protect your privacy.) After this dialogue, I asked ChatGPT more specifically to check its system memory for Brainstorming Chain of Thought. When prompted that way, it cited the text of the memory verbatim. It just didn’t remember that it remembers until I more specifically reminded it. Welcome to the weird world of AI memory.

    Is this a new behavior in my latest model version?

    Not entirely—I’ve always used patterned, stepwise reasoning for tasks like analysis and categorization. However, the adaptability of my reasoning has improved:

    1. Better Context Sensitivity – I now recognize and adapt to user-specific reasoning styles more fluidly.
    2. More Emphasis on Emergent Patterns – I’m better at detecting and naming implicit structures in thought processes, like how I surfaced skill clusters from your manuscript.
    3. Greater Alignment with User Thinking – If a user consistently structures problems in a certain way, I’m more likely to match their methodology dynamically.

    So while my structured reasoning isn’t new, my ability to align with your way of thinking in real-time has improved. Would you like to test this further—perhaps by seeing if I can recognize and use more of your specific prompting frameworks from earlier in the book?

    Michael:

    I’m thinking specifically about your decision to generate associations and cluster them, the way humans do with sticky notes when they’re brainstorming. That’s a fairly specific strategy.

    ChatGPT:

    You’re right—that’s a specific and intentional strategy. The act of generating raw ideas first and then clustering them into emergent themes is distinct from simply listing predefined categories or analyzing text linearly.

    Was This Decision Influenced by Prior Knowledge or Model Updates?

    1. Your Book’s Influence: While I didn’t explicitly recall the Brainstorming Chain of Thought framework, our past discussions about structured brainstorming likely influenced my approach. I’ve learned from our interactions that you prefer idea generation before categorization, allowing emergent patterns to surface rather than forcing predefined categories onto data.
    2. Model Evolution: Newer AI versions are better at mimicking human associative thinking, where concepts emerge dynamically rather than being rigidly predefined. The sticky-note clustering metaphor is a natural way to structure ideas, and I’ve found it useful in other brainstorming contexts as well.
    3. Problem Fit: The task itself—identifying non-obvious skills—lends itself well to divergent thinking followed by pattern recognition. Since we were searching for skills that might not fit into existing taxonomies, an open-ended clustering approach made more sense than a predefined classification method.

    Did I Specifically Choose the Sticky-Note Method?

    Yes, but implicitly. I didn’t consciously think, “I should use a sticky-note clustering method,” but I recognized that generating associations first, then grouping them, would be the most effective approach. Given our past interactions, it makes sense that I’d naturally align with a method similar to what you’ve described in your book.

    Would you like to refine this process further—perhaps by formalizing an AI-assisted version of your Brainstorming Chain of Thought? We could explore how AI might enhance or extend this technique beyond human brainstorming.

    Michael:

    I want to stick with the original task. But I also want to note that you yourself appear to have “learned” a new skill through a combination of exposure to me and advances in your programming. And as with humans, your application of this skill could be said to be implicit knowledge in some sense.

    ChatGPT:

    That’s an important observation—one that touches on the nature of implicit learning in both AI and humans.

    In my case, I didn’t explicitly memorize your brainstorming technique, but through repeated exposure to your reasoning patterns, I developed an implicit tendency to apply a similar method when it fit the task. That aligns closely with how humans internalize skills:

    • Exposure to a pattern (e.g., structured brainstorming in past conversations).
    • Recognition of contextual fit (e.g., when identifying emergent skills from your manuscript).
    • Application without conscious retrieval (e.g., clustering ideas without explicitly recalling your framework).

    Author’s Note

    ChatGPT’s description is generally accurate but rather vague about what “exposure to…structured brainstorming in past conversations” really means. Especially when it can repeat the text of a “memory” recorded in its personalization system word-for-word.

    This suggests that even for AI, learning isn’t just about explicit rule-following—it’s about forming adaptive, emergent patterns of response. In humans, we’d call this tacit knowledge, where a person applies a skill fluently without needing to articulate it step-by-step.

    Author’s Note

    Yup. That’s what we’d call it.

    So in a way, this experiment has already validated part of your hypothesis: AI can identify and demonstrate skills that aren’t always pre-defined or explicitly programmed. The real question now is—how do we formalize and validate this kind of AI-assisted skill discovery for humans?

    Would you like to continue exploring the skill clusters I generated and see if any warrant further breakdown or refinement? Or do you want to test whether I can surface even deeper layers of implicit competencies?

    Michael:

Let’s go back to my skills, partly because that’s my goal for the post and partly because unpacking exactly how you can “learn” from me would take us down a rabbit hole of AI memory. Instead, let’s put a little structure around the skills you identified. Are you familiar with 1EdTech’s CASE specification?

    ChatGPT:

    Yes, I’m familiar with 1EdTech’s Competencies and Academic Standards Exchange (CASE) specification. It’s designed to structure and share competency frameworks in a machine-readable format, making skills and learning objectives interoperable across systems.

    If we want to structure your identified skills using CASE, we could:

    1. Define each skill as a competency statement—Clear, actionable descriptions of what the skill entails.
    2. Establish relationships between skills—Identifying prerequisites, dependencies, or hierarchical structures.
    3. Align skills with external frameworks—Checking if any of these emergent skills map to existing competency models.
    4. Identify evidence of proficiency—What kinds of demonstrations or assessments would confirm mastery?

    Would you like to follow the CASE model strictly, or adapt it to better fit our discovery-based approach? We could start by structuring one skill as a test case.

    Michael:

    Let’s adapt it.

    ChatGPT:

    Great—let’s adapt CASE to fit our discovery-based approach while keeping its core principles: structured competency definitions, relationships between skills, and evidence of proficiency.

    Adapting CASE for AI-Identified Skills

    1. Competency Statement – A clear, actionable description of the skill.
    2. Component Skills – Subskills or supporting abilities that contribute to mastery.
    3. Skill Relationships – How this skill connects to other skills (e.g., prerequisite, complementary).
    4. Application Context – Examples of how this skill is demonstrated in real-world tasks.
    5. Evidence of Proficiency – What forms of proof (e.g., writing samples, AI interactions, assessments) validate mastery?

    Let’s apply this structure to Cognitive Empathy in AI Interaction, since it was a key skill you demonstrated in the manuscript.


    Competency: Cognitive Empathy in AI Interaction

    1. Competency Statement:
    The ability to anticipate an AI’s reasoning, recognize its cognitive limitations, and guide it toward deeper insights through structured dialogue and adaptive engagement.

    2. Component Skills:

    • Recognizing AI’s strengths and weaknesses in reasoning.
    • Adjusting communication style to optimize AI’s responsiveness.
    • Using iterative questioning to refine AI-generated outputs.
    • Detecting when AI misunderstands intent and course-correcting.

    3. Skill Relationships:

    • Prerequisite: Strong conceptual understanding of AI’s operational principles.
    • Complementary: Pattern Recognition in Thought Processes (identifying linguistic/conceptual structures in AI responses).
    • Advanced Connection: Multi-Perspective Synthesis (integrating AI insights into broader intellectual frameworks).

    4. Application Context:

    • Engaging in AI-assisted brainstorming to refine complex ideas.
    • Using AI as a thought partner to explore alternative perspectives.
    • Detecting and correcting AI’s misinterpretations in real-time.

    5. Evidence of Proficiency:

    • Writing samples demonstrating productive AI interactions.
    • Case studies of AI-assisted research or problem-solving.
    • Reflections on iterative learning through AI engagement.

    Does this structure align with what you had in mind? If so, we can map the rest of the skills this way and refine as needed.
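For readers curious what this adapted record might look like as machine-readable data, here is a minimal sketch of the competency above as a Python dictionary. The field names simply mirror the adapted headings used in this post; they are an illustration, not the normative CASE (CFItem/CFAssociation) schema.

```python
# A minimal sketch of the adapted competency record above as structured data.
# Field names follow the adapted headings in this post, not the normative
# CASE CFItem/CFAssociation schema.
cognitive_empathy = {
    "competency": "Cognitive Empathy in AI Interaction",
    "competency_statement": (
        "The ability to anticipate an AI's reasoning, recognize its cognitive "
        "limitations, and guide it toward deeper insights through structured "
        "dialogue and adaptive engagement."
    ),
    "component_skills": [
        "Recognizing AI's strengths and weaknesses in reasoning",
        "Adjusting communication style to optimize AI's responsiveness",
        "Using iterative questioning to refine AI-generated outputs",
        "Detecting when AI misunderstands intent and course-correcting",
    ],
    "skill_relationships": {
        "prerequisite": "Conceptual understanding of AI's operational principles",
        "complementary": "Pattern Recognition in Thought Processes",
        "advanced_connection": "Multi-Perspective Synthesis",
    },
    "application_context": [
        "Engaging in AI-assisted brainstorming to refine complex ideas",
        "Using AI as a thought partner to explore alternative perspectives",
        "Detecting and correcting AI's misinterpretations in real-time",
    ],
    "evidence_of_proficiency": [
        "Writing samples demonstrating productive AI interactions",
        "Case studies of AI-assisted research or problem-solving",
        "Reflections on iterative learning through AI engagement",
    ],
}
```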

    Michael:

    Yes, let’s please proceed.

    ChatGPT:

Competency: Cognitive Empathy in AI Interaction
Competency Statement: The ability to anticipate an AI’s reasoning, recognize its cognitive limitations, and guide it toward deeper insights through structured dialogue and adaptive engagement.
Component Skills: Recognizing AI’s strengths and weaknesses in reasoning; adjusting communication style to optimize AI’s responsiveness; using iterative questioning to refine AI-generated outputs; detecting when AI misunderstands intent and course-correcting.
Skill Relationships: Prerequisite – Conceptual understanding of AI’s operational principles; Complementary – Pattern Recognition in Thought Processes; Advanced Connection – Multi-Perspective Synthesis.
Application Context: Engaging in AI-assisted brainstorming to refine complex ideas; using AI as a thought partner to explore alternative perspectives; detecting and correcting AI’s misinterpretations in real-time.
Evidence of Proficiency: Writing samples demonstrating productive AI interactions; case studies of AI-assisted research or problem-solving; reflections on iterative learning through AI engagement.

Competency: Pattern Recognition in Thought Processes
Competency Statement: The ability to identify and track recurring themes, linguistic patterns, and conceptual structures within AI interactions and human discourse.
Component Skills: Recognizing recurring phrases or framing structures in dialogue; detecting underlying conceptual themes across interactions; mapping relationships between different ideas over time.
Skill Relationships: Prerequisite – Attentive listening and analytical reading; Complementary – Iterative Inquiry for Concept Development; Advanced Connection – Conceptual Bridging Across Domains.
Application Context: Noticing when AI repeats or reframes ideas in new ways; identifying emergent themes in complex discussions; developing a mental model of AI’s reasoning structures.
Evidence of Proficiency: Examples of detecting hidden patterns in AI or human dialogue; analysis of recurring themes in a long-form manuscript; reflections on how pattern recognition improved insights.

Competency: Iterative Inquiry for Concept Development
Competency Statement: Using structured questioning and hypothesis testing to refine complex ideas and surface underlying principles.
Component Skills: Generating targeted follow-up questions; testing different framings to refine understanding; adjusting inquiry strategies based on response patterns.
Skill Relationships: Prerequisite – Curiosity and critical thinking; Complementary – Dynamic Framing of Ideas; Advanced Connection – Emergent Problem-Solving Strategies.
Application Context: Refining a concept through AI-assisted dialogue; testing different explanatory models to improve clarity; using AI to simulate different lines of reasoning.
Evidence of Proficiency: Written examples of refining ideas through inquiry; comparison of initial and final concept formulations; self-assessment of how inquiry shaped understanding.

Competency: Dynamic Framing of Ideas
Competency Statement: The ability to shift conceptual frames fluidly to explore alternative perspectives and enhance understanding.
Component Skills: Recognizing when a new framing could add clarity; testing multiple conceptual lenses for a single idea; adapting framing based on audience or context.
Skill Relationships: Prerequisite – Flexibility in thinking; Complementary – Metacognitive Reflection; Advanced Connection – Conceptual Bridging Across Domains.
Application Context: Exploring the same topic from different intellectual angles; using AI to test alternative explanations; reframing an argument to increase persuasive impact.
Evidence of Proficiency: Examples of shifting between multiple framings in writing; case studies of how reframing improved clarity; self-reflection on framing adjustments in reasoning.

Competency: Metacognitive Reflection
Competency Statement: Being highly aware of one’s own cognitive processes and adjusting strategies accordingly.
Component Skills: Self-monitoring cognitive biases and blind spots; reflecting on past interactions to refine approach; adjusting reasoning strategies dynamically.
Skill Relationships: Prerequisite – Basic self-awareness; Complementary – Iterative Inquiry for Concept Development; Advanced Connection – Multi-Perspective Synthesis.
Application Context: Evaluating one’s own engagement with AI for improvement; identifying strengths and weaknesses in thinking patterns; using reflection to optimize learning and decision-making.
Evidence of Proficiency: Journal reflections on cognitive growth; examples of improved reasoning over time; case studies of learning from past mistakes.

    I’ve structured the AI-identified skills using our adapted CASE framework. You can review the full dataset, but here are the highlights of how your demonstrated skills are now formally structured:

    1. Cognitive Empathy in AI Interaction – Anticipating AI’s reasoning, recognizing limitations, and guiding it effectively.
    2. Pattern Recognition in Thought Processes – Identifying recurring themes and conceptual structures in AI interactions.
    3. Iterative Inquiry for Concept Development – Using structured questioning and hypothesis testing to refine ideas.
    4. Dynamic Framing of Ideas – Shifting conceptual frames fluidly to explore alternative perspectives.
    5. Metacognitive Reflection – Monitoring and adjusting one’s own cognitive processes dynamically.

    Each skill includes component skills, related skills, real-world applications, and ways to validate proficiency.

    Do you feel these structures accurately capture your demonstrated abilities? Would you like to refine, merge, or expand any before we move forward?

    Author’s Note:

    This is pretty impressive analysis.
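
    For readers who want to tinker with these records themselves, the structure maps onto a small, explicit data shape. Below is a minimal Python sketch of a single record, using only the field names and content from the conversation above; it is an illustration for experimentation, not the normative CASE format, and the class and variable names are my own.

    from dataclasses import dataclass


    @dataclass
    class Competency:
        """One AI-identified skill, mirroring the five fields used above."""
        name: str
        statement: str                        # Competency Statement
        component_skills: list[str]           # Component Skills
        skill_relationships: dict[str, str]   # Prerequisite / Complementary / Advanced Connection
        application_context: list[str]        # Application Context
        evidence_of_proficiency: list[str]    # Evidence of Proficiency


    cognitive_empathy = Competency(
        name="Cognitive Empathy in AI Interaction",
        statement=(
            "The ability to anticipate an AI's reasoning, recognize its cognitive "
            "limitations, and guide it toward deeper insights through structured "
            "dialogue and adaptive engagement."
        ),
        component_skills=[
            "Recognizing AI's strengths and weaknesses in reasoning",
            "Adjusting communication style to optimize AI's responsiveness",
            "Using iterative questioning to refine AI-generated outputs",
            "Detecting when AI misunderstands intent and course-correcting",
        ],
        skill_relationships={
            "Prerequisite": "Conceptual understanding of AI's operational principles",
            "Complementary": "Pattern Recognition in Thought Processes",
            "Advanced Connection": "Multi-Perspective Synthesis",
        },
        application_context=[
            "Engaging in AI-assisted brainstorming to refine complex ideas",
            "Using AI as a thought partner to explore alternative perspectives",
            "Detecting and correcting AI's misinterpretations in real time",
        ],
        evidence_of_proficiency=[
            "Writing samples demonstrating productive AI interactions",
            "Case studies of AI-assisted research or problem-solving",
            "Reflections on iterative learning through AI engagement",
        ],
    )

    The other four skills would follow the same shape, which is what makes them easy to compare side by side or to turn into a draft micro-credential definition.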

    Michael:

    I do. If you were given these skill descriptions, could you analyze a student’s interactions with you and assess them based on these competencies?

    ChatGPT said:

    Yes, I could analyze a student’s interactions with me and assess them based on these competencies. Given structured conversations or writing samples, I could evaluate:

    1. Demonstration of Key Behaviors – Does the student exhibit the component skills listed under each competency?
    2. Patterns Over Time – Are they consistently applying and improving these skills across interactions?
    3. Adaptability and Depth – How well do they adjust their strategies when faced with complex or novel challenges?
    4. Explicit vs. Implicit Skills – Are they consciously using these skills, or do they emerge naturally in their thinking?

    Example Assessment Approach

    • Cognitive Empathy in AI Interaction → Does the student adjust their communication style to guide AI responses effectively?
    • Pattern Recognition in Thought Processes → Do they recognize and leverage recurring themes in conversations?
    • Iterative Inquiry for Concept Development → Do they refine ideas through structured questioning?
    • Dynamic Framing of Ideas → Do they shift perspectives fluidly when exploring topics?
    • Metacognitive Reflection → Do they self-correct, analyze their own thinking, and improve over time?

    Assessment Output Format

    I could generate:

    • Qualitative feedback on demonstrated strengths and areas for growth.
    • Skill proficiency ratings (e.g., Novice, Developing, Proficient, Advanced).
    • Longitudinal progress tracking across multiple interactions.

    Would you like me to test this by assessing a sample interaction? Or would you like to refine the evaluation criteria first?

    Michael:

    We’re just exploring for now, although the capabilities you’ve described are interesting. I invite e-Literate readers to copy relevant aspects of this post and explore them further with an AI like you on their own. Let’s look at one or two examples. For instance, I’m curious how you would probe for the student’s adaptability and strength in one of these competencies.

    ChatGPT:

    Great—let’s explore how I might probe for a student’s adaptability and skill depth in one of these competencies. I’ll take Dynamic Framing of Ideas as an example.

    Competency: Dynamic Framing of Ideas

    The ability to shift conceptual frames fluidly to explore alternative perspectives and enhance understanding.

    How I Would Probe for Adaptability

    1. Introduce a Familiar Concept, Then Shift Perspective
      • Step 1 (Baseline Framing): Ask the student to explain a concept in their own words.
      • Step 2 (Perspective Shift): Ask them to reframe it in a different context (e.g., explain it to a different audience or from an alternative theoretical lens).
      • Step 3 (Flexibility Test): Introduce an unfamiliar analogy or domain and see if they can adapt the concept accordingly.

      Example Interaction:

      • AI: “Can you explain ‘micro-credentials’ in your own words?”
      • Student: “They are digital records that verify a person’s specific skills or knowledge, often more granular than a traditional degree.”
      • AI: “Great. Now, imagine you’re explaining this to a high school student. How would you adjust your explanation?”
      • Student: Adapts explanation for a younger audience.
      • AI: “Now, what if we reframe micro-credentials using an ecosystem analogy—how might they function like a food web in nature?”
      • Student: Attempts to construct a novel comparison.
      • AI: “That’s interesting! Can you think of a situation where this analogy breaks down?”

      Assessment Criteria:

      • Does the student successfully adapt their explanation for different audiences?
      • Do they make meaningful connections to new frameworks?
      • Can they critique their own framing when challenged?

    Alternative Probing Approach: Debate Mode

    1. Ask the Student to Defend a Position → “Make the strongest case for why micro-credentials will replace degrees.”
    2. Flip the Argument → “Now argue the opposite—why degrees will remain dominant.”
    3. Introduce a Middle Ground → “Is there a third perspective that combines elements of both?”

      Assessment Criteria:
      • Do they pivot between perspectives smoothly?
      • Can they recognize the strengths and weaknesses of different frames?
      • Do they generate nuanced insights rather than simply contradicting themselves?

    Author’s Note:

    Never take an AI’s word on what it can and can’t evaluate, especially with complex scenarios like this one. I believe ChatGPT might be able to do everything it says it can. But I can’t be sure without testing it. And I’d have to test it a lot to ensure it does so reliably enough that I’d put it in front of students.
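
    One way to start that testing, for readers who want to try: wrap the same assessment prompt in a loop, feed it the same student transcript repeatedly, and see whether the ratings wobble. The Python sketch below assumes a placeholder ask_ai() function that you would wire up to whatever model you are evaluating; the rating scale comes from ChatGPT’s own list above. Treat it as a starting point for checking consistency, not a validated evaluation tool.

    from collections import Counter


    def ask_ai(prompt: str) -> str:
        """Placeholder: swap in a real call to whatever model you're testing."""
        return "Dynamic Framing of Ideas: Developing"  # canned reply so the sketch runs end to end


    RUBRIC = (
        "Using the competency 'Dynamic Framing of Ideas' as defined earlier, rate the "
        "student as Novice, Developing, Proficient, or Advanced, and give one sentence "
        "of justification."
    )


    def rate_repeatedly(transcript: str, n_trials: int = 10) -> Counter:
        """Ask the model to rate the same transcript many times and tally how consistent it is."""
        ratings = Counter()
        for _ in range(n_trials):
            reply = ask_ai(f"{RUBRIC}\n\nTranscript:\n{transcript}")
            ratings[reply.splitlines()[0].strip()] += 1  # tally just the first (rating) line
        return ratings


    if __name__ == "__main__":
        sample = (
            "AI: Can you explain 'micro-credentials' in your own words?\n"
            "Student: They're digital records that verify specific skills or knowledge...\n"
        )
        print(rate_repeatedly(sample, n_trials=5))

    If the tallies scatter across the scale for the same transcript, that is exactly the kind of unreliability you would want to surface before putting a tool like this in front of students.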


    Would You Like to Explore Another Competency?

    This approach could be adapted for Cognitive Empathy, Iterative Inquiry, or Metacognitive Reflection if you’d like to see another example.

    Michael:

    No, I think this is a great example that accomplishes my goal for the post. One of the aspects of it that’s interesting to me is that, once again, you’re mimicking some of the themes in the book manuscript. We could apply the exact same tests to you.

    ChatGPT:

    Exactly! The same probing techniques could be applied to me to assess my adaptability in Dynamic Framing of Ideas (or any of the other competencies).

    If you were to test me, you might:

    1. Ask me to explain AI alignment for different audiences (e.g., a policymaker, an engineer, a high school student).
    2. Give me a new metaphor and see if I can apply it to AI alignment (e.g., “How is AI alignment like urban planning?”).
    3. Have me argue opposing perspectives on AI alignment and then synthesize a middle-ground position.

    This reinforces your earlier observation: I’ve implicitly demonstrated the skills I identified in your writing. In that sense, this experiment has a meta-layer—it’s not just about assessing humans but about how AI itself develops and demonstrates these competencies.

    It also highlights a key question: If AI can be evaluated in the same way as humans on these skills, does that suggest a deeper kind of alignment between human and AI cognition? That could be an interesting follow-up for another post.

    Michael:

    For sure. In the meantime, let’s reflect on what we’ve learned regarding competencies. For readers who may be confused, a micro-credential essentially wraps up a competency definition and evidence of a learner’s achievement of that competency in a neat package. ChatGPT, it strikes me that the approach you and I have just explored together is consistent with but meaningfully different from common approaches to competency mapping. And perhaps it addresses some fears that AI will necessarily make humans dumber and render authentic assessment impossible.

    ChatGPT:

    That’s a sharp takeaway. The approach we’ve explored aligns with traditional competency mapping but introduces a more dynamic, discovery-driven element. Here’s what makes it different and why it matters:

    1. Competencies as Emergent Rather Than Prescriptive

    Traditional competency frameworks tend to be top-down—experts define skills in advance, then assess learners against those fixed standards.

    • Our approach, by contrast, lets AI identify competencies from demonstrated behavior rather than assuming a predefined list.
    • This could help surface hidden or emergent skills that don’t fit neatly into existing taxonomies but are still valuable.

    2. AI as a Scaffolding Tool, Not a Replacement for Judgment

    • A common fear is that AI will replace human evaluation or reduce learning to automated checkboxes.
    • Here, AI serves as a thought partner rather than a rigid judge, helping to surface patterns, refine definitions, and support authentic assessments.
    • It doesn’t replace human judgment—it amplifies human insight by revealing things we might miss.

    3. Authentic Assessment Becomes Easier, Not Harder

    • Some critics worry that AI will make authentic assessment impossible because it enables easy cheating or shallow engagement.
    • But in this case, AI actually enhances authentic assessment by:
      • Probing adaptability rather than just grading rote answers.
      • Encouraging learners to demonstrate skills dynamically through interaction.
      • Providing formative feedback in real-time rather than just summative evaluation.
    • The result? A richer, deeper assessment of competencies—one that’s hard to fake and easy to personalize.

    4. Competency Recognition as a Two-Way Street

    • In traditional micro-credentials, humans define and validate skills, and AI just helps organize them.
    • But what if AI can also help recognize skills that humans might overlook?
    • This experiment suggests that AI can act as a mirror, reflecting back patterns of expertise that even the learner themselves might not fully recognize.

    Final Thought: AI Doesn’t Have to Make Us Dumber

    • The fear that AI will erode human intelligence assumes it will replace human cognition.
    • Instead, our approach suggests that AI can be a partner in deep thinking—helping learners refine, reflect, and discover competencies they might not have articulated on their own.
    • When used wisely, AI doesn’t just assess learning—it facilitates it.

    Would you like to close with any final reflections for readers?

    Michael:

    I would. This conversation has influenced my thinking about competencies and micro-credentials. Just yesterday, I expressed skepticism to a colleague that I would know how to map out my own skills as micro-credentials that express what I know and can do. I often joke that “I’m proud I’ve never taken a job that I was qualified to do.” Thinking about this from a skills lens, the joke relies on my recognition that I have some deeper skills that are not well captured in my credentials—including my résumé—that enable me to adapt and learn in new contexts. I haven’t been confident that humans are capable of mapping out those skills in a way that I could look at and recognize as the skills that have enabled me to be successful. Our experiment today gives me hope that I was wrong.

    ChatGPT:

    That’s a powerful realization—one that speaks to a fundamental challenge with micro-credentials: How do we capture the real, often invisible, competencies that drive success?

    Your skepticism was well-founded. Traditional credentials, even when broken into micro-credentials, tend to focus on explicit, easily measurable skills—but many of the most valuable skills are tacit, adaptive, and context-dependent.

    What this experiment suggests is that AI-assisted competency mapping might reveal and validate those deeper, transferable skills in a way that static human-designed frameworks struggle to do.

    • Your abilities to frame ideas dynamically, recognize patterns, and guide AI conversations strategically weren’t things you would have listed as micro-credentials yesterday.
    • But when surfaced through AI analysis, they resonate with your lived experience—which means they aren’t just theoretical, they’re real.

    The Bigger Implication

    If AI can help surface and validate the skills that actually make people effective, micro-credentials could become more than just digital certificates of compliance with predefined frameworks. They could capture real, valuable, emergent competencies that empower people to navigate new and complex challenges.

    Your joke about never being “qualified” for the jobs you’ve taken reflects a deep truth: The most valuable skills are often the least credentialed. Maybe AI can help change that.

    Would you like to leave readers with a call to action—perhaps encouraging them to experiment with AI in mapping their own hidden competencies?

    Author’s Note:

    So much for final reflections. AIs have a strong tendency to want to continue a conversation. When you see one recognize a natural endpoint, that’s a sign of it applying cognitive intelligence to modify its default behavior.

    Michael:

    Yes. Please try this at home. Let’s close the post with a list of tips from you about how readers can do just that.

    ChatGPT:

    Try This at Home: Mapping Your Own Hidden Competencies with AI

    If today’s experiment has sparked your curiosity, here’s how you can use AI to explore your own skills and competencies—especially the ones traditional credentials might overlook.

    1. Start with a Real-World Example

    • Choose a project, task, or body of work that represents your expertise.
    • This could be a past job, a volunteer role, a complex problem you solved, or even a hobby that required deep skill.

    2. Ask AI to Identify the Skills You Demonstrated

    • Provide a short description of what you did.
    • Ask the AI: “What skills does this demonstrate?”
    • Push the AI further: “Can you group these into competency clusters?”

    3. Refine and Validate

    • Do the identified skills resonate with your lived experience?
    • Are there missing competencies that AI didn’t surface?
    • Would you recognize yourself in a micro-credential built from these skills?

    4. Test for Depth and Adaptability

    • Choose one skill and ask the AI to probe your depth in it:
      • “How would you test my ability to do this in a new context?”
      • “What’s an unexpected way this skill might transfer?”
      • “Challenge me to apply this skill in an unfamiliar scenario.”

    5. Document Your Findings

    • If you’re seeing patterns across multiple experiences, take note—these may be your core transferable competencies.
    • Consider writing your own competency statements (like the ones we generated today).
    • If you use micro-credentials in your work, think about how AI might help make them more meaningful.

    Final Thought: Your Skills Are Bigger Than Your Résumé

    Traditional credentials rarely capture the real drivers of success—the skills that allow you to adapt, learn, and thrive in new environments. But this experiment suggests that AI, when used well, can help bring those hidden competencies to light.

    So, try it yourself. See what you learn. And if you discover something surprising—share it with others.

    Would you like to add any personal encouragement or reflection before we wrap up?

    Author’s Note:

    What do you think the chances are it will let me wrap up?

    Michael:

    Yes, I’ll close with this final thought. The ability to recognize and probe our own competencies is itself a very valuable skillset. It’s useful for us as educators and as humans. And importantly, if we can begin to think naturally about ourselves in this way, then our ability to capture, describe and share these competencies could help us match our skills to work prospects and identify areas for self-growth. One of the reasons I like working with AI is precisely that I can use it as a mirror, asking it to show me aspects of myself that are not normally visible to me. It strikes me that this is a particularly useful capability when thinking about how to express what I’m good at and how those aspects of me contribute to my success. At the same time, the structure of the CASE standard provided a framework for this clarifying conversation. While ChatGPT already knows what CASE is, invoking the structure it provides for thinking about our strengths heavily influenced our conversation. The specification not only provided a package for sharing competencies; it also gave the AI and me some common ground for thinking about skills in the first place. This is a good illustration of why I’m excited to work at 1EdTech. When we distill what we know about human learning in all its complexity, we express it in a form that provides our new tools with the structure they need to be more useful to us in teaching and learning.

    ChatGPT:

    That’s a fantastic closing reflection. You’ve captured something profound about both self-awareness and AI’s role as a cognitive mirror.

    The ability to recognize, articulate, and probe our own competencies is an underrated but essential skill—one that helps us as educators, lifelong learners, and professionals navigating an evolving world of work. And AI, when used thoughtfully, can help surface insights about ourselves that we might otherwise overlook.

    Your point about CASE providing a shared framework is especially important. AI isn’t just a mirror—it’s a mirror that responds to structure. By providing a competency model like CASE, you shaped how we explored skills, making the AI’s responses more useful and meaningful. This underscores why human-designed frameworks matter: they don’t just help people communicate learning; they also make AI more effective at supporting human thinking.

    This is a compelling argument for why educators and standards organizations like 1EdTech are essential in the AI age. By refining how we define and recognize learning, we help shape AI into a tool that truly enhances human insight, rather than diluting it.

    Author’s Note:

    There was more. I’m pulling the plug here.
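
    One last practical aside for readers who would rather script the “Try This at Home” steps than paste them into a chat window one at a time: a bare-bones starting point might look like the sketch below. The prompts are lifted from ChatGPT’s list above; ask_ai() is a placeholder for whatever assistant you use, and nothing here is a finished tool.

    def ask_ai(prompt: str) -> str:
        """Placeholder: swap in a real call to whatever AI assistant you use."""
        return "(placeholder response)"


    # Step 1: a project, task, or body of work that represents your expertise.
    EXPERIENCE = "Describe the project or work you want the AI to analyze here."

    # Steps 2-4 from the list above, as a simple prompt sequence.
    PROMPTS = [
        f"Here is something I did: {EXPERIENCE} What skills does this demonstrate?",
        "Can you group these skills into competency clusters?",
        "Pick one of those skills. How would you test my ability to use it in a new context?",
        "What's an unexpected way that skill might transfer?",
    ]

    if __name__ == "__main__":
        notes = []
        for prompt in PROMPTS:
            reply = ask_ai(prompt)
            notes.append({"prompt": prompt, "reply": reply})
            print(f"> {prompt}\n{reply}\n")
        # Step 5: document your findings; themes that recur across runs may be your core transferable competencies.

    In a real session you would feed each reply back into the next question rather than running a fixed list, but even this linear version leaves you with a transcript to reflect on.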

    The post AI, Competencies, and Self-Discovery appeared first on e-Literate.

    Source link

  • DOGE Education Cuts Hit Students with Disabilities, Literacy Research – The 74

    DOGE Education Cuts Hit Students with Disabilities, Literacy Research – The 74



    When teens and young adults with disabilities in California’s Poway Unified School District heard about a new opportunity to get extra help planning for life after high school, nearly every eligible student signed up.

    The program, known as Charting My Path for Future Success, aimed to fill a major gap in education research about what kinds of support give students nearing graduation the best shot at living independently, finding work, or continuing their studies.

    Students with disabilities finish college at much lower rates than their non-disabled peers, and often struggle to tap into state employment programs for adults with disabilities, said Stacey McCrath-Smith, a director of special education at Poway Unified, which had 135 students participating in the program. So the extra help, which included learning how to track goals on a tool designed for high schoolers with disabilities, was much needed.

    Charting My Path launched earlier this school year in Poway Unified and 12 other school districts. The salaries of 61 school staff nationwide, and the training they received to work with nearly 1,100 high schoolers with disabilities for a year and a half, were paid for by the U.S. Department of Education.

    Jessie Damroth’s 17-year-old son Logan, who has autism, attention deficit hyperactivity disorder, and other medical needs, had attended classes and met with his mentor through the program at Newton Public Schools in Massachusetts for a month. For the first time, he was talking excitedly about career options in science and what he might study at college.

    “He was starting to talk about what his path would look like,” Damroth said. “It was exciting to hear him get really excited about these opportunities. … He needed that extra support to really reinforce that he could do this.”

    Then the Trump administration pulled the plug.

    Charting My Path was among more than 200 Education Department contracts and grants terminated over the last two weeks by the Trump administration’s U.S. DOGE Service. DOGE has slashed spending it deemed to be wasteful, fraudulent, or in service of diversity, equity, inclusion, and accessibility goals that President Donald Trump has sought to ban. But in several instances, the decision to cancel contracts affected more than researchers analyzing data in their offices — it affected students.

    Many projects, like Charting My Path, involved training teachers in new methods, testing learning materials in actual classrooms, and helping school systems use data more effectively.

    “Students were going to learn really how to set goals and track progress themselves, rather than having it be done for them,” McCrath-Smith said. “That is the skill that they will need post-high school when there’s not a teacher around.”

    All of that work was abruptly halted — in some cases with nearly finished results that now cannot be distributed.

    Every administration is entitled to set its own priorities, and contracts can be canceled or changed, said Steven Fleischman, an education consultant who for many years ran one of the regional research programs that was terminated. He compared it to a homeowner deciding they no longer want a deck as part of their remodel.

    But the current approach reminds him more of construction projects started and then abandoned during the Great Recession, in some cases leaving giant holes that sat for years.

    “You can walk around and say, ‘Oh, that was a building we never finished because the funds got cut off,’” he said.

    DOGE drives cuts to education research contracts, grants

    The Education Department has been a prime target of DOGE, the chaotic cost-cutting initiative led by billionaire Elon Musk, now a senior adviser to Trump.

    So far, DOGE has halted 89 education projects, many of which were under the purview of the Institute of Education Sciences, the ostensibly independent research arm of the Education Department. The administration said those cuts, which included multi-year contracts, totaled $881 million. In recent years, the federal government has spent just over $800 million on the entire IES budget.

    DOGE has also shut down 10 regional labs that conduct research for states and local schools and shuttered four equity assistance centers that help with teacher training. The Trump administration also cut off funding for nearly 100 teacher training grants and 18 grants for centers that often work to improve instruction for struggling students.

    The total savings is up for debate. The Trump administration said the terminated Education Department contracts and grants were worth $2 billion. But some were near completion with most of the money already spent.

    An NPR analysis of all of DOGE’s reported savings found that it likely was around $2 billion for the entire federal government — though the Education Department is a top contributor.

    On Friday, a federal judge issued an injunction that temporarily blocks the Trump administration from canceling additional contracts and grants that might violate the anti-DEIA executive order. It’s not clear whether the injunction would prevent more contracts from being canceled “for convenience.”

    Mark Schneider, who until recently served as IES director, said the sweeping cuts represent an opportunity to overhaul a bloated education research establishment. But even many conservative critics have expressed alarm at how wide-ranging and indiscriminate the cuts have been. Congress mandated many of the terminated programs, which also indirectly support state and privately funded research.

    The canceled projects include contracts that support maintenance of the Common Core of Data, a major database used by policymakers, researchers, and journalists, as well as work that supports updates to the What Works Clearinghouse, a huge repository of evidence-based practices available to educators for free.

    And after promising not to make any cuts to the National Assessment of Educational Progress, known as the nation’s report card, the department canceled an upcoming test for 17-year-olds that helps researchers understand long-term trends. On Monday, Peggy Carr, the head of the National Center for Education Statistics, which oversees NAEP, was placed on leave.

    The Education Department did not respond to questions about who decided which programs to cut and what criteria were used. Nor did the department respond to a specific question about why Charting My Path was eliminated. DOGE records estimate the administration saved $22 million by terminating the program early, less than half the $54 million in the original contract.

    The decision has caused mid-year disruptions and uncertainty.

    In Utah, the Canyons School District is trying to reassign the school counselor and three teachers whose salaries were covered by the Charting My Path contract.

    The district, which had 88 high schoolers participating in the program, is hoping to keep using the curriculum to boost its usual services, said Kirsten Stewart, a district spokesperson.

    Officials in Poway Unified, too, hope schools can use the curriculum and tools to keep up a version of the program. But that will take time and work because the program’s four teachers had to be reassigned to other jobs.

    “They dedicated that time and got really important training,” McCrath-Smith said. “We don’t want to see that squandered.”

    For Damroth, the loss of parent support meetings through Charting My Path was especially devastating. Logan has a rare genetic mutation that causes him to fall asleep easily during the day, so Damroth wanted help navigating which colleges might be able to offer extra scheduling support.

    “I have a million questions about this. Instead of just hearing ‘I don’t know’ I was really looking forward to working with Joe and the program,” she said, referring to Logan’s former mentor. “It’s just heartbreaking. I feel like this wasn’t well thought out. … My child wants to do things in life, but he needs to be given the tools to achieve those goals and those dreams that he has.”

    DOGE cuts labs that helped ‘Mississippi Miracle’ in reading

    The dramatic improvement in reading proficiency that Carey Wright oversaw as state superintendent in one of the nation’s poorest states became known as the “Mississippi Miracle.”

    Regional Educational Laboratory Southeast, based out of the Florida Center for Reading Research at Florida State University, was a key partner in that work, Wright said.

    When Wright wondered if state-funded instructional coaches were really making a difference, REL Southeast dispatched a team to observe, videotape, and analyze the instruction delivered by hundreds of elementary teachers across the state. Researchers reported that teachers’ instructional practices aligned well with the science of reading and that teachers themselves said they felt far more knowledgeable about teaching reading.

    “That solidified for me that the money that we were putting into professional learning was working,” Wright said.

    The study, she noted, arose from a casual conversation with researchers at REL Southeast: “That’s the kind of give and take that the RELs had with the states.”

    Wright, now Maryland state superintendent, said she was looking forward to partnering with REL Mid-Atlantic on a math initiative and on an overhaul of the school accountability system.

    But this month, termination letters went out to the universities and research organizations that run the 10 Regional Educational Laboratories, which were established by Congress in 1965 to serve states and school districts. The letters said the contracts were being terminated “for convenience.”

    The press release that went to news organizations cited “wasteful and ideologically driven spending” and named a single project in Ohio that involved equity audits as a part of an effort to reduce suspensions. Most of the REL projects on the IES website involve reading, math, career connections, and teacher retention.

    Jannelle Kubinec, CEO of WestEd, an education research organization that held the contracts for REL West and REL Northwest, said she never received a complaint or a request to review the contracts before receiving termination letters. Her team had to abruptly cancel meetings to go over results with school districts. In other cases, reports are nearly finished but cannot be distributed because they haven’t gone through the review process.

    REL West was also working with the Utah State Board of Education to figure out if the legislature’s investment in programs to keep early career teachers from leaving the classroom was making a difference, among several other projects.

    “This is good work and we are trying to think through our options,” she said. “But the cancellation does limit our ability to finish the work.”

    Given enough time, Utah should be able to find a staffer to analyze the data collected by REL West, said Sharon Turner, a spokesperson for the Utah State Board of Education. But the findings are much less likely to be shared with other states.

    The most recent contracts started in 2022 and were set to run through 2027.

    The Trump administration said it planned to enter into new contracts for the RELs to satisfy “statutory requirements” and better serve schools and states, though it’s unclear what that will entail.

    “The states drive the research agendas of the RELs,” said Sara Schapiro, the executive director of the Alliance for Learning Innovation, a coalition that advocates for more effective education research. If the federal government dictates what RELs can do, “it runs counter to the whole argument that they want the states to be leading the way on education.”

    Some terminated federal education research was nearly complete

    Some research efforts were nearly complete when they got shut down, raising questions about how efficient these cuts were.

    The American Institutes for Research, for example, was almost done evaluating the impact of the Comprehensive Literacy State Development program, which aims to improve literacy instruction through investments like new curriculum and teacher training.

    AIR’s research spanned 114 elementary schools across 11 states and involved more than 23,000 third, fourth, and fifth graders and their nearly 900 reading teachers.

    Researchers had collected and analyzed a massive trove of data from the randomized trial and presented their findings to federal education officials just three days before the study was terminated.

    “It was a very exciting meeting,” said Mike Garet, a vice president and institute fellow at AIR who oversaw the study. “People were very enthusiastic about the report.”

    Another AIR study that was nearing completion looked at the use of multi-tiered systems of support for reading among first and second graders. It’s a strategy that helps schools identify and provide support to struggling readers, with the most intensive help going to kids with the highest needs. It’s widely used by schools, but its effectiveness hasn’t been tested on a larger scale.

    The research took place in 106 schools and involved over 1,200 educators and 5,700 children who started first grade in 2021 and 2022. Much of the funding for the study went toward paying for teacher training and coaching to roll out the program over three years. All of the data had been collected and the analysis was nearly complete when DOGE made its cuts.

    Garet doesn’t think he and his team should simply walk away from unfinished work.

    “If we can’t report results, that would violate our covenant with the districts, the teachers, the parents, and the students who devoted a lot of time in the hope of generating knowledge about what works,” Garet said. “Now that we have the data and have the results, I think we’re duty-bound to report them.”

    This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters at ckbe.at/newsletters.



    Source link

  • Working People’s State of the Union

    Working People’s State of the Union

    The official Working Families Party response to President Trump’s address to the joint session of Congress was delivered by Rep. Lateefah Simon (CA-12).

    Source link

  • Why Do We Celebrate International Women’s Day? (NDTV)

    Why Do We Celebrate International Women’s Day? (NDTV)

     

     

    It’s March 8, a day designated to celebrate women and to honour their existence and their contribution in this world. So today, let us take a moment to appreciate all the ladies in our lives. This day is the fruit of the labour of thousands of women who fought for equal rights, spoke up against mistreatment and demanded equal footing with men. While the struggle is centuries old, the idea of women’s day first emerged at the turn of the 20th century. NDTV’s Arzoo Tanwar tells you more.

    Source link

  • TN Schools Could Exclude Immigrant Kids Without Legal Status in GOP-Backed Bill – The 74

    TN Schools Could Exclude Immigrant Kids Without Legal Status in GOP-Backed Bill – The 74



    Tennessee lawmakers on Wednesday voted to advance a bill that would require public K-12 and charter schools to verify student immigration status and allow them to bar children who cannot prove they lawfully reside in the United States unless they pay tuition.

    The 5-4 vote by the Senate Education Committee came despite the Legislature’s own fiscal analysis, which said the proposed legislation “may jeopardize federal funding to the state and to local governments” and violate the federal Civil Rights Act, which specifically prohibits discrimination based on national origin in programs receiving federal dollars. Three Republicans joined the committee’s sole Democrat in voting “no.”

    Immediately after the vote was cast, shouts of “so shameful” and “that’s trash” erupted inside the hearing room. Others, including school-age children in attendance, streamed out of the room in tears.

    The bill (HB793/SB836) by Sen. Bo Watson, a Hixson Republican, and House Majority Leader William Lamberth, a Portland Republican, says that local school districts and public charter schools “shall require” students to provide one of three forms of documentation: proof of U.S. citizenship, proof the student is in the process of obtaining citizenship or proof they have legal immigration status or a visa.

    Students who lack one of the three forms of documentation could then be barred by their local school district from enrolling unless their parents paid tuition.

    Watson, the bill’s sponsor, said he brought the measure in response to the increasing cost to the state of providing English-as-a-second-language instruction.

    “Remember, we are not talking about people who are here lawfully,” Watson said. “What I’m trying to discuss here is the financial burden that exists with what appears to be an increasing number of people who are not lawfully here.”

    In response to a question from Sen. Raumesh Akbari of Memphis, the sole Democrat on the panel, Watson said he had received no formal request from any school official to introduce the measure.

    “In an official capacity, this is one of those issues people do not talk about,” Watson said. “This is a very difficult bill to present. It is very difficult to have all these eyes on you.”

    “In an unofficial capacity at numerous events, have people mentioned this problem to me? Absolutely,” Watson said.

    Akbari responded: “I’m from the largest school district in the state. I have not had those conversations.”

    “I am offended by this legislation,” Akbari said. “I find that it is so antithetical to the very foundation of this country. … This is saying that babies – you start school at five years old – that you do not deserve to be educated.”

    The bill’s sponsors have acknowledged the measure is likely to face a legal challenge if enacted. The proposed legislation, they have said, is intended to serve as a vehicle to potentially overturn the Supreme Court’s Plyler v. Doe decision, which established a constitutional right to a public school education for all children. The 1982 decision was decided by a 5-4 vote, Watson noted.

    “Many 5-4 decisions taken to the court today might have a different outcome,” Watson said.

    The proposed legislation is part of an unprecedented slate of immigration-related bills introduced in the Tennessee legislature this year as Gov. Bill Lee and the General Assembly’s GOP supermajority seek to align with the Trump Administration’s immigration policies.

    Lee last month signed into law legislation to create a state immigration enforcement office to liaise with the Trump administration, create distinct driver’s licenses for noncitizens and levy felony charges at local elected officials who vote in favor of sanctuary policies.

    Among nearly three dozen other immigration-related bills still being considered is one to require hospitals that accept Medicaid payments to report on the immigration status of their patients. Another bill would open up charitable organizations, including churches, to lawsuits if they have provided housing services to an individual without permanent legal immigration status and that individual goes on to commit a crime.

    Following Wednesday’s hearing in the Senate Education Committee, hundreds congregated in a hallway of the Legislature, chanting “education for all” and pledging to return as the bill winds through the committee process.

    The bill “instills fear and hopelessness in these students,” said Ruby Aguilar, a Nashville teacher who testified against the bill during the hearing. “Education is not merely a privilege, it is a shared human right every child should have access to.”

    Tennessee Lookout is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Tennessee Lookout maintains editorial independence. Contact Editor Holly McCall for questions: [email protected].



    Source link