  • Solving the continuation challenge with engagement analytics

    • By Rachel Maxwell, Principal Advisor at Kortext.

    Since the Office for Students (OfS) revised Condition B3 (Student outcomes), published continuation rates for full-time students on their first degree have dropped from 91.1% in 2022 to 89.5% in 2024.

    This drop is most evident for students in four key areas: (1) foundation year courses; (2) sub-contracted and franchised courses; (3) those with lower or unknown qualifications on entry; and (4) those studying particular subjects including Business and Management, and Computing.

    Universities utilising student engagement analytics are bucking this downward trend. Yet, surprisingly, engagement analytics are not mentioned in either the evaluation report or the accompanying Theory of Change document.

    Ignoring the impact of analytics is a mistake: universities with real-time actionable information on student engagement can effectively target those areas where risks to continuation are evident – whether at the programme or cohort level, or defined by protected characteristics or risks to equality of opportunity.

    The [engagement analytics] data you see today is next year’s continuation data.

    Dr Caroline Reid, former Associate Dean at the University of Bedfordshire

    A more complete view of student learning

    The digital footprints generated by students offer deep insights into their learning behaviours, enabling early interventions that maximise the opportunity for students to access the right support before any issues escalate. While data can never explain why a student is disengaging from their learning, it provides the starting point for a supportive outreach conversation. What happens thereafter would depend on what the conversation revealed – what kind of intervention would be most appropriate for the student? Examples include academic skills development, health and wellbeing support or financial help. The precise nature of the intervention would depend on the ecosystem of (typically) the professional services success and support expertise available within each institution.

    Analysing engagement activity at the cohort level, alongside the consequent demand on student services teams, further enables universities to design cohort or institution-wide interventions to target increasingly stretched resources where and when they are needed most.
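    To make the early-identification idea above concrete, here is a minimal sketch of the kind of weekly check an analytics pipeline might run. Everything in it is hypothetical: the student IDs, the composite 0–100 engagement index, the threshold, and the function name are invented for illustration, not drawn from any particular platform.

```python
from statistics import mean, stdev

# Hypothetical weekly engagement scores: VLE logins, attendance and
# library use combined into a single 0-100 index per student per week.
cohort = {
    "s001": [72, 68, 70, 65],
    "s002": [55, 40, 22, 10],   # steadily disengaging
    "s003": [80, 82, 79, 85],
    "s004": [60, 58, 15, 12],   # sharp recent drop
}

def flag_at_risk(scores_by_student, z_threshold=-0.8):
    """Flag students whose latest score sits well below the cohort mean."""
    latest = {s: weeks[-1] for s, weeks in scores_by_student.items()}
    mu, sigma = mean(latest.values()), stdev(latest.values())
    return sorted(
        s for s, score in latest.items()
        if (score - mu) / sigma < z_threshold
    )

print(flag_at_risk(cohort))  # flagged students become outreach conversations
```

    A real system would of course weight many more signals and track trends over time, but the shape is the same: the output is not a verdict on the student, only a prompt for the supportive outreach conversation the article describes.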

    [With engagement analytics we have] a holistic view of student engagement … We have moved away from attendance at teaching as the sole measure of engagement and now take a broader view to enable us to target support and interventions.

    Richard Stock, Academic Registrar, University of Essex

    In 2018–19, 88% of students at the University of Essex identified as having low engagement at week six went on to withdraw by the end of the academic year. By 2021–22, this had reduced to approximately 20%. Staff reported more streamlined referral processes and effective targeted support thanks to engagement data.

    Bucking the trend at Keele

    The OfS continuation dashboard shows that the Integrated Foundation Year at Keele University sits eight percentage points above the 80% threshold. Director of the Keele Foundation Year, Simon Rimmington, puts this down to how they are using student engagement data to support student success through early identification of risk.

    The enhanced data analysis undertaken by Simon and colleagues demonstrates the importance of working with students to build the right kind of academically purposeful behaviours in those first few weeks at university.

    • Withdrawal rates decreased from 21% to 9% for new students in 2023–24.
    • The success rate of students repeating a year has improved by nearly 10%.
    • Empowering staff and students with better engagement insights has fostered a more supportive and proactive learning environment.

    Moreover, by identifying students at risk of non-continuation, Keele has protected over £100K in fee income in their foundation year alone, which has been reinvested in student support services.

    Teesside University, Nottingham Trent University (NTU) and the University of the West of England (UWE) all referred explicitly to engagement analytics in their successful provider statements for TEF 2023.

    The Panel Statements for all three institutions identified the ‘very high rates of continuation’ as a ‘very high quality’ feature of their submissions.

    • Teesside’s learning environment was rated ‘outstanding’, based on their use of ‘a learner analytics system to make informed improvements’.
    • NTU cited learning analytics as the enabler for providing targeted support to students, with reduced withdrawals due to the resulting interventions.
    • UWE included ‘taking actions … to improve continuation and completion rates by proactively using learning analytics’ to evidence their approach.

    The OfS continuation dashboard backs up these claims. Table 1 highlights data for areas of concern identified by the OfS. Other areas flagged as key drivers for HEIs are also included. There is no data on entry qualifications. All figures where data is available, apart from one[1], are significantly above the 80% threshold.

    Table 1: Selected continuation figures (%) for OfS-identified areas of concern (taught, full-time first degree 2018–19 to 2021–22 entrants)

    The Tees Valley is the second most deprived of 38 English Local Enterprise Partnership areas, with a high proportion of localities among the 10% most deprived nationally. The need to support student success within this context has strongly informed Teesside University’s Access and Participation Plan.

    Engagement analytics, central to their data-led approach, ‘increases the visibility of students who need additional support with key staff members and facilitates seamless referrals and monitoring of individual student cases.’ Engagement data insights are integral to supporting students ‘on the cusp of academic failure or those with additional barriers to learning’.

    The NTU student caller team reaches out to students identified by its engagement dashboard as being at risk. They acknowledge that the intervention isn’t a panacea, but the check-in calls are appreciated by most students.

    Despite everything happening in the world, I wasn’t forgotten about or abandoned by the University.
    NTU student

    By starting with the highest risk categories, NTU has been able to focus on those most likely to benefit from additional support. And even false positives are no bad thing – better to have contact and not need it, than need it and not have it.

    What can we learn from these examples?

    Continuation rates are under threat across the sector resulting from a combination of missed or disrupted learning through Covid, followed by a cost-of-living crisis necessitating the prioritisation of work over study.

    In this messy world, data helps universities – equally challenged by rising costs and a fall in fee income – build good practice around student success activity that supports retention and continuation. These universities can take targeted action, whether individually, at cohort level or in terms of resource allocation, because they know what their real-time engagement data is showing.

    All universities cited in this blog are users of the StREAM student engagement analytics platform available from Kortext. Find out more about how your university can use StREAM to support improvements in continuation.


    [1] The Teesside University Integrated Foundation Year performs above the OfS-defined institutional benchmark value of 78.9%.


  • Daring students to take risks and be wrong is key to solving the campus culture wars

    Goodbye then, the Higher Education (Freedom of Speech) Act parts A3, A4, A7 and parts of A8 – we hardly knew you.

    The legal tort – a mechanism that seemed somehow to be designed to say “we’ve told the regulator to set up a rapid alternative mechanism to avoid having to lawyer up, but here’s a fast track way to bypass it anyway” – is to be deleted.

    The complaints scheme – a wheeze which allowed an installed Director for Freedom of Speech and Academic Freedom to rapidly rule on whatever it was that the Sunday papers were upset about that week – will now be “free” (expected) to not take up every dispute thrown its way.

    Students themselves with a complaint about a free speech issue will no longer have to flip a coin between a widely respected way of avoiding legal disputes and an untested but apparently faster one operated by the Director which was to be flagged in Freshers’ handbooks. The OIA it is.

    Foreign funding measures – bodged into the act by China hawks who could never work out whether the security services, the Foreign Office or the Department for Education were more to blame for encouraging universities to take on Chinese students – will now likely form part of the revised “Foreign Influence Registration Scheme” created by the National Security Act 2023.

    A measure banning universities from silencing victims of harassment via a non-disclosure agreement will stay, despite OfS saying it was going to ban NDAs anyway – although nobody seems able to explain why their use will still be fine for other victims with other complaints.

    And direct regulation of students’ unions – a measure that had somehow fallen for the fanciful idea that their activities are neither regulated nor controlled by powerless university managements and the Charity Commission – will also go. The “parent” institution will, as has always been the case, revert to reasonably practicable steps – like yanking its funding.

    As such, save for a new and vague duty to “promote” free speech and academic freedom, the new government’s intended partial repeal of legislation that somehow took the old one two parliaments to pass – a period of gestation that always seemed more designed to extend the issue’s prevalence in the press than to perfect its provisions – now leaves the sector largely back in the framework it’s been in for the best part of 40 years.

    That the Secretary of State says that all of the above is about proceeding in a way that “actually works” will raise an eyebrow from those who think a crisis in the academy has been growing – especially when the government’s position is that the problem to be fixed is as follows:

    In a university or a polytechnic, above all places, there should be room for discussion of all issues, for the willingness to hear and to dispute all views including those that are unpopular or eccentric or wrong.

    Actually, that was a quote from Education Secretary Keith Joseph in 1986, writing to the National Union of Students over free speech measures in the 1986 act. But Bridget Phillipson’s quote wasn’t much different:

    These fundamental freedoms are more important—much more important—than the wishes of some students not to be offended. University is a place for ideas to be exposed and debated, to be tried and tested. For young people, it is a space for horizons to be broadened, perspectives to be challenged and ideas to be examined. It is not a place for students to shut down any view with which they disagree.

    The message for vice chancellors who fail to take this seriously couldn’t have been clearer – “protect free speech on your campuses or face the consequences”. But if it’s true that for “too long, too many universities have been too relaxed about these issues”, and that “too few took them seriously enough” – what is it that must now change?

    Back to the future

    There is no point rehearsing here the arguments that the “problem” has been overblown, centring on a handful of incidents in a part of the sector more likely to have been populated by the lawmakers and journalists whose thirst for crises to crack down on needs constant fuel. And anyway, for those on the wrong end of cancellation, the pain is real.

    There is little to be gained here from pointing out the endless inconsistencies in an agenda that seemed to have been designed to offer a simplistically minimalist definition of harassment and harm and a simplistically maximalist definition of free speech – until October 7th 2023 turned all that on its head.

    There isn’t a lot of benefit in pointing out how unhelpful the conflation between academic freedom and freedom of speech has been – one that made sense for gender-critical academics feeling the force of protest, but has been of no help for almost anyone involved in a discipline attempting to find truth in historic or systemic reasons for other equality disparities in contemporary society.

    Others write better than me, sometimes in ways I don’t recognise, sometimes in ways I do, about the way in which the need to competitively recruit students, or keep funders happy, or to not be the victim of a fresh round of course cuts inhibits challenge, drains the bravery to be unpopular, and is the real cause of a culture of “safetyism” on campus.

    And while of course it is the case that higher education isn’t what it was – which even in its “new universities” manifestations in the 1960s imagined small parts of the population engaging in small-group discussions between liberal-minded individuals able to indulge in activism before a life of elitism – I’ve grown tired of pointing out that the higher education that people sometimes call for isn’t what it is, either.

    What I’m most concerned about isn’t a nostalgic return to elite HE, or business-as-usual return to whatever it was or wasn’t done in the name of academic freedom or freedom of speech in a mass age – and nor is it whatever universities or their SUs might do to either demonstrate or promote a more complex reality. I’m most concerned about students’ confidence.

    The real crisis on campus

    Back in early 2023, we had seen surveys that told us about self-censorship, pamphlets that professed to show a culture of campus “silent” no platforming, and polling data that invited alarm at students’ apparent preference for safety rather than freedom.

    But one thing that I’d found consistently frustrating about the findings was the lack of intelligence on why students were responding the way they apparently were.

    For the endless agents drawing conclusions, it was too easy to project their own assumptions and prejudices, forged in generational memory loss and their own experiences of HE. Too easy to worry about the 14 per cent of undergrads who went on to say they didn’t feel free to express themselves in the NSS – and too easy to guess “why” that minority said so.

    As part of our work with our partners at Cibyl and a group of SUs, we polled a sample of 1,600 students and weighted for gender and age.

    We found that men were almost ten percentage points higher than women on “very free”, although there was gender consistency across the two “not free” options. Disabled students felt less free than non-disabled peers, privately educated students felt more free than those from the state system, and those eligible for means-tested bursaries were less confident than those who weren’t.

    In the stats, those who felt part of a community of students and staff were significantly more likely to feel free to express themselves than those who didn’t – and we know that it’s the socio-economic factors that are most likely to cause feelings of not “fitting in”.

    But it was the qualitative comments that stuck with me. Of those ticking one of the “not free” options, one said that because the students on their course were majority white students, they “often felt intimidated to speak about certain things”.

    Another said that northern state school students are minorities – and didn’t really have voices there:

    Tends to be posher middle class private school educated students who are heard.

    Mature students aren’t part of the majority and what I have said in the past tends to get ignored.

    Many talked about the sort of high-level technical courses that policymakers still imagine universities don’t deliver. “Engineering doesn’t leave much room for opinion like other courses”, said one. “Not a lot of room in my degree for expression” said another.

    And another gave real challenge to those in the culture wars that believe that all opinions are somehow valid:

    My course doesn’t necessarily allow me to express my freedom as everything is researched based with facts.

    Ask anyone that attempted to run a seminar on Zoom during Covid-19, and you get the same story – switched-off cameras, long silences, students seemingly afraid to say something for fear of being ostracised, or laughed at, or “getting it wrong”.

    As a former SU President put it on the site in 2023:

    This year there have been lecture halls on every campus stacked with students who don’t know how to start up a conversation with the person sat next to them. There were emails waiting to be sent, the cursor flashing at the start of a sentence, that the struggling student didn’t know how to word… This question is whether or not the next generation is actually being taught how to interact and be comfortable in their own skin… They have to if they’re claiming to.

    Freedom from fear?

    The biggest contradiction of all in both the freedom of speech and academic freedom debates that have engulfed the sector in recent years was not a lack of freedom – it was the idea that you can legislate to cause people to take advantage of it:

    In lectures and seminars there is often complete silence. The unanimity of asking a question or communicating becomes daunting when you’re the only one.

    Fear you’ll be laughed at or judged if you get it wrong

    In terms of lectures, the students in my class feel shy to share opinions which affects me when I want to share.

    Again this is a personal thing I don’t often like expressing my points of view in person to people I don’t know very well. Also they probably won’t be listened to so I don’t see the point.

    I feel very free amongst my other students in our WhatsApp groups (not governed by the university). However, freedom of expression in support sessions often ends up not occurring as everyone is anxious due to how the class has been set up.

    Once in class I simply got one word mixed up with another and the lecturer laughed and said. ‘yes…well…they do mean the same thing so that has already been stated.’ Making me and also my fellow students reluctant to ask any questions at all as we then feel some questions are ridiculous to ask. How are we to express our thoughts if we feel we will be ridiculed or made to feel ridiculous?

    For those not on programmes especially suited to endless moral and philosophical debates, a system where the time to take part in extracurriculars is squeezed by part-time work or public transport delays is not one that builds confidence to take part in them.

    The stratification of the sector – where both within universities and between them, students of a particular type and characteristic cluster in ways that few want to admit – drives a lack of diversity within the encounters that students do have in the classroom.

    And even for those whose seminars offer the opportunity for “debate”, why would you? Students have been living in social media bubbles and forming political opinions since long before they enrol. And Leo Bursztyn and David Yang’s paper demonstrates that people think everyone in their group shares the same views, and that everyone in the outgroup believes the opposite.

    As Harvard political scientist David Deming argues here:

    Suppose a politically progressive person offers a commonly held progressive view on an issue like Israel-Palestine, affirmative action, or some other topic. Fearing social sanction, people in the out-group remain silent. But so do in-group members who disagree with their group’s stance on that particular issue. They stay silent because they assume that they are the only ones in the group who disagree, and they do not want to be isolated from their group. The only people who speak up are those who agree with the original speaker, and so the perception of in-group unanimity gets reinforced.

    Deming’s solution is that universities should tackle “pluralistic ignorance” – where most people hold an opinion privately but believe incorrectly that other people believe the opposite.

    He argues that fear of social isolation silences dissenting views within an in-group, and reinforces the belief that such views are not widely shared – and so suggests making use of classroom polling tech to elicit views anonymously, and for students to get to know each other privately first, giving people space to say things like “yes I’m progressive, but my views differ on topic X.”
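    Deming’s anonymous-polling suggestion can be sketched in a few lines. The scenario below is invented: each respondent records their own private view on a statement alongside their guess at the class majority view, and pluralistic ignorance shows up as a gap between the two distributions.

```python
from collections import Counter

# Hypothetical anonymous classroom poll: (private_view, perceived_majority).
responses = [
    ("agree", "disagree"),
    ("agree", "disagree"),
    ("agree", "agree"),
    ("disagree", "disagree"),
    ("agree", "disagree"),
]

def pluralistic_ignorance(poll):
    """Return (gap_exists, actual_majority, perceived_majority).

    Pluralistic ignorance: the view most people privately hold differs
    from the view most people believe the majority holds."""
    actual = Counter(v for v, _ in poll).most_common(1)[0][0]
    perceived = Counter(p for _, p in poll).most_common(1)[0][0]
    return actual != perceived, actual, perceived

print(pluralistic_ignorance(responses))
```

    In this toy poll most students privately agree, yet most also believe the majority disagrees – exactly the self-silencing dynamic Deming describes, made visible only because the responses were collected anonymously.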

    Promoting free speech?

    Within that new “promote” duty, it may be that pedagogical innovation of that sort within the curriculum will make a difference. It may also be that extracurricular innovation – from bringing seemingly opposed activist groups on campus together to listen to each other, through to carefully crafted induction talks on what free speech and academic freedom mean in practice – would help. Whether it’s possible to be positive about EDI in the face of the right to disagree with it remains to be seen.

    Upstream work on this agenda might help too – it’s odd that a “problem” that must be partly about what happens in schools and colleges is never mentioned in the APP outreach agenda, just as it’s frustrating that the surface diversity of a provider is celebrated while inside, the differences in characteristics between, say, medical students and those studying Business and Management are as vast as ever.

    Students’ unions – relieved of direct scrutiny on the basis that they are neither “equipped nor funded” to navigate such a complex regulatory environment – might argue that the solution is to equip them and fund them, not remove the regulation. They might also revisit work we coordinated back in 2021 – much of which was about strengthening political debate in their own structures as a way to demonstrate that democracy can work.

    Overall, though, someone somewhere is going to get something wrong again. They’ll fail to act to protect something lawful; or they’ll send a signal that something was OK, or wrong, when they should have decided the opposite.

    As such, I’ve long believed that the practice of being “wrong” needs to be role-modelled as strongly as that of being right. If universities really are spaces of debate and the lines between free speech and harassment are contested and context-specific, the sector needs to find a way to adjudicate conflict within universities rather than leaving that to the OIA, OfS, the courts or that other court of public opinion – because once it gets that far, the endless allegations of “bad faith” on both sides prevent nuance, resolution and trust.

    Perhaps internal resolution can be carried out in the way we found in use in Poland on our study tour, using trusted figures appointed from within – and perhaps it can be done by identifying types of democratic debate within both academic and corporate governance that give space to groups of staff and students with which one can agree or disagree.

    If nothing else, if Arif Ahmed is right – and “speech and expression were essential to Civil Rights protestors, just as censorship was their opponents’ most convenient weapon”, we will have to accept that “nonviolent direct action seeks to… dramatize an issue that it can no longer be ignored” – and it has as much a place on campus as the romantic ideals of a seminar room exploring nuance.

    Lightbulb moments need electricity

    But even if that helps, I’m still stuck with the horse/water/drink problem – that however much you promote the importance of something, you still need to create the conditions to take up what’s on offer. What is desired feels rich – when the contemporary student experience is often, in reality, thin. What if the real problem isn’t student protest going too far, but too few students willing to say anything out loud at all?

    Students (and their representatives) left Twitter/X/Bluesky half a decade ago, preferring the positivity of LinkedIn to being piled-onto for an opinion. Spend half an hour on Reddit’s r/UniUK and you can see it all – students terrified that one wrong move, one bad grade, one conversation taken the wrong way, one email to a tutor asking why their mark was the way it was – will lead to disaster. The stakes are too high, and the cushion for getting anything wrong too thin, to risk anything.

    Just as strong messages about the importance of extracurricular participation don’t work if you’re holding down a full-time job and live 90 minutes from campus, saying that exploring the nuances of moral and political debate is important will fall flat if you’re a first-in-family student hanging on by a thread.

    Much of this, for me, comes back to time. Whatever else people think higher education is there to do, it only provides the opportunity to get things wrong once the pressure is off on always getting things right. Huge class sizes, that British obsession with sorting and grading rather than passing or failing, precarious employment (of staff and students) and models of student finance that turn full-time study into de facto part-time study are not circumstances that lead anyone to explore and challenge their ideas.

    Put another way, the government’s desire that higher education offers something which allows horizons to be broadened, perspectives to be challenged and ideas to be examined is laudable. But if it really wants it to happen, it does have to have a much better understanding of – and a desire to improve – the hopeless precarity that students find themselves in now.


  • AI Learning Design Workshop: Solving for CBE –

    I recently announced a design/build workshop series for an AI Learning Design Assistant (ALDA). The idea is simple:

    • If we can reduce the time it takes to design a course by about 20%, organizations that need to build enough courses to strain their budgets and resources will see “huge” productivity and quality benefits.
    • We should be able to use generative AI to achieve that goal fairly easily without taking ethical risks and without needing to spend massive amounts of time or money.
    • Beyond the immediate value of ALDA itself, learning the AI techniques we will use—which are more sophisticated than learning to write better ChatGPT prompts but far less involved than trying to build our own ChatGPT—will help the participants learn to accomplish other goals with AI.

    In today’s post, I’m going to provide an example of how the AI principles we will learn in the workshop series can be applied to other projects. The example I’ll use is Competency-Based Education (CBE).

    Can I please speak to your Chief Competency Officer?

    The argument for more practical, career-focused education is clear. We shouldn’t just teach the same dusty old curriculum with knowledge that students can’t put to use. We should prepare them for today’s world. Teach them competencies.

    I’m all for it. I’m on board. Count me in. I’m raising my hand.

    I just have a few questions:

    • How many companies are looking at formally defined competencies when evaluating potential employees or conducting performance reviews?
    • Of those, how many have specifically evaluated catalogs of generic competencies to see how well they fit with the skills their specific job really requires?
    • Of those, how many regularly check the competencies to make sure they are up-to-date? (For example, how many marketing departments have adopted generative AI prompt engineering competencies in any formal way?)
    • Of those, how many are actively searching for, identifying, and defining new competency needs as they arise within their own organizations?

    The sources I turn to for such information haven’t shown me that these practices are being implemented widely yet. When I read the recent publications on SkillsTech from Northeastern University’s Center for the Future of Higher Education and Talent Strategy (led by Sean Gallagher, my go-to expert on these sorts of changes), I see growing interest in skills-oriented thinking in the workplace with still-immature means for acting on that interest. At the moment, the sector seems to be very focused on building a technological factory for packaging, measuring, and communicating formally defined skills.

    But how do we know that those little packages are the ones people actually need on the job, given how quickly skills change and how fluid the need to acquire them can be? I’m not skeptical about the worthiness of the goal. I’m asking whether we are solving the hard problems that are in the way of achieving it.

    Let’s make this more personal. I was a philosophy major. I often half-joke that my education prepared me well for a career in anything except philosophy. What were the competencies I learned? I can read, write, argue, think logically, and challenge my own assumptions. I can’t get any more specific or fine-grained than that. I know I learned more specific competencies that have helped me with my career(s). But I can’t tell you what they are. Even ones that I may use regularly.

    At the same time, very few of the jobs I have held in the last 30 years existed when I was an undergraduate. I have learned many competencies since then. What are they? Well, let’s see…I know I have a list around here somewhere….

    Honestly, I have no idea. I can make up phrases for my LinkedIn profile, but I can’t give you anything remotely close to a full and authentic list of competencies I have acquired in my career. Or even ones I have acquired in the last six months. For example, I know I have acquired competencies related to AI and prompt engineering. But I can’t articulate them in useful detail without more thought and maybe some help from somebody who is trained and experienced at pulling that sort of information out of people.

    The University of Virginia already has an AI in Marketing course up on Coursera. In the next six months, Google, OpenAI, and Facebook (among others) will come out with new base models that are substantially more powerful. New tools will spring up. Practices will evolve within marketing departments. Rules will be put in place about using such tools with different marketing outlets. And so, competencies will evolve. How will the university be able to refresh that course fast enough to keep up? Where will they get their information on the latest practices? How can they edit their courses quickly enough to stay relevant?

    How can we support true Competency-Based Education if we don’t know which competencies specific humans in specific jobs need today, including competencies that didn’t exist yesterday?

    One way for AI to help

    Let’s see if we can make our absurdly challenging task of keeping an AI-in-marketing CBE course up-to-date more tractable by applying a little AI. We’ll only assume access to tools that are coming on the market now—some of which you may already be using—and ALDA.

    Every day I read about new AI capabilities for work. Many of them, interestingly, are designed to capture information and insights that would otherwise be lost. A tool to generate summaries and to-do lists from videoconferences. Another to annotate software code and explain what it does, line-by-line. One that summarizes documents, including long and technical documents, for different audiences. Every day, we generate so much information and witness so many valuable demonstrations of important skills that are just…lost. They happen and then they’re gone. If you’re not there when they happen and you don’t have the context, prior knowledge, and help to learn them, you probably won’t learn from them.

    With the AI enhancements that are being added to our productivity tools now, we can increasingly capture that information as it flies by. Zoom, Teams, Slack, and many other tools will transcribe, summarize, and analyze the knowledge in action as real people apply it in their real work.

    This is where ALDA comes in. Don’t think of ALDA as a finished, polished, carved-in-stone software application. Think of it as a working example of an application design pattern. It’s a template.

    Remember, the first step in the ALDA workflow is a series of questions that the chatbot asks the expert. In other words, it’s a learning design interview. A learning designer would normally conduct an interview with a subject-matter expert to elicit competencies. But in this case, we make use of the transcripts generated by those other AI tools as a direct capture of the knowledge-in-action that such interviews are designed to tease out.

    ALDA will incorporate a technique called “Retrieval-Augmented Generation,” or “RAG.” Rather than relying solely on the generative AI’s internal knowledge, which it may hallucinate, RAG lets the model access your document store. It can help the learning designer sift through the work artifacts and identify, for example, the AI skills the marketing team had to apply when that group planned and executed their most recent social media campaign.
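
To make the idea concrete, here is a minimal sketch of the retrieval half of RAG. It is not ALDA's implementation: the document store, the query, and the scoring method are all illustrative, and simple keyword overlap stands in for the vector-embedding similarity a real system would use, so the example runs with no external services.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented Generation.
# Keyword overlap stands in for real embedding-based similarity scoring.

def score(query: str, doc: str) -> int:
    """Count how many words the query and document share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in retrieved work artifacts."""
    context = "\n---\n".join(retrieve(query, docs))
    return (
        "Using only the context below, answer the question.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical document store: transcripts captured from the marketing team.
documents = [
    "Meeting notes: the team used prompt chaining to draft campaign copy.",
    "Slack summary: A/B test results for the spring social media campaign.",
    "Code review: annotated the analytics dashboard queries line by line.",
]

prompt = build_prompt("What AI skills did the social media campaign use?", documents)
```

The retrieved transcripts, not the model's memory, become the evidence the learning designer and the AI reason over together.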

    Using RAG and the documents we’ve captured, we develop a new interview pattern that creates a dialogue between the human expert, the distilled expert practices in the document store, and the generative AI (which may be connected to the internet and have its own current knowledge). That dialogue will look a little different from the one we will script in the workshop series. But that’s the point. The script is the scaffolding for the learning design process. The generative AI in ALDA helps us execute that process, drawing on up-to-the-minute information about applied knowledge we’ve captured from subject-matter experts while they were doing their jobs.

    Behind the scenes, ALDA has been given examples of what its output should look like. Maybe those examples include well-written competencies, knowledge required to apply those competencies, and examples of those competencies being properly applied. Maybe we even wrap the ALDA examples in a technical format like Rich Skill Descriptors. Now ALDA knows what good output looks like.
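
One such example might look like the sketch below. The field names are loosely modeled on Rich Skill Descriptor fields but are illustrative rather than the official RSD schema, and the competency itself is a made-up marketing example.

```python
# A hypothetical few-shot example of "good output" for ALDA, loosely
# modeled on Rich Skill Descriptor fields (field names are illustrative,
# not the official RSD schema).

example_competency = {
    "skillName": "Prompt engineering for campaign copy",
    "skillStatement": (
        "Drafts and iteratively refines generative-AI prompts to produce "
        "on-brand marketing copy, evaluating outputs against style guidelines."
    ),
    "requiredKnowledge": [
        "Capabilities and limits of current text-generation models",
        "The organization's brand voice and compliance rules",
    ],
    "exampleOfUse": (
        "Used prompt chaining to generate three variants of a product "
        "announcement, then selected and edited the best one."
    ),
}

def format_for_prompt(competency: dict) -> str:
    """Render a competency record as a few-shot example for the model."""
    lines = [f"Skill: {competency['skillName']}",
             f"Statement: {competency['skillStatement']}"]
    lines += [f"Requires: {k}" for k in competency["requiredKnowledge"]]
    lines.append(f"Example: {competency['exampleOfUse']}")
    return "\n".join(lines)
```

Feeding the model a handful of records like this is what teaches it the shape of a well-formed competency.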

    That’s the recipe. If you can use AI to get up-to-date information about the competencies you’re teaching and to convert that information into a teachable format, you’ve just created a huge shortcut. You can capture real-time workplace applied knowledge, distill it, and generate the first draft of a teachable skill.
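
The recipe above amounts to a three-stage pipeline: capture, distill, draft. The sketch below is purely illustrative; the stage names and the crude keyword filter are hypothetical stand-ins for the generative-AI services a real system would call at each step.

```python
# Illustrative capture -> distill -> draft pipeline for the recipe above.
# Each stage is a hypothetical stand-in for a generative-AI service call.

def capture(transcripts: list[str]) -> list[str]:
    """Step 1: keep only passages that describe skills in action."""
    return [t for t in transcripts
            if "used" in t.lower() or "applied" in t.lower()]

def distill(passages: list[str]) -> list[str]:
    """Step 2: reduce each passage to a one-line competency statement."""
    return [f"Competency drawn from: {p}" for p in passages]

def draft_lesson(competencies: list[str]) -> str:
    """Step 3: assemble a first-draft teachable unit."""
    return "Draft lesson covering:\n" + "\n".join(f"- {c}" for c in competencies)

# Hypothetical workplace capture: one passage shows a skill, one does not.
transcripts = [
    "The team used prompt chaining to draft campaign copy.",
    "Weekly standup: schedule review, no skills discussed.",
]
lesson = draft_lesson(distill(capture(transcripts)))
```

The human educator still reviews and refines the draft; the pipeline just moves the raw material from the workplace to the course designer's desk.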

    The workplace-university CBE pipeline

    Remember my questions from earlier in this post? Read them again and ask yourself whether the workflow I just described could change the answers in the future:

    • How many companies are looking at formally defined competencies when evaluating potential employees or conducting performance reviews?
    • Of those, how many have specifically evaluated catalogs of generic competencies to see how well they fit with the skills their specific job really requires?
    • Of those, how many regularly check the competencies to make sure they are up-to-date? (For example, how many marketing departments have adopted relevant AI prompt engineering competencies in any formal way?)
    • Of those, how many are actively searching for, identifying, and defining new competency needs as they arise?

    With the AI-enabled workflow I described in the previous section, organizations can plausibly identify critical, up-to-date competencies as they are being used by their employees. They can share those competencies with universities, which can create and maintain up-to-date courses and certification programs. The partner organizations can work together to ensure that students and employees have opportunities to learn the latest skills as they are being practiced in the field.

    Will this new learning design process be automagic? Nope. Will it give us a robot tutor in the sky that can semi-read our minds? Nuh-uh. The human educators will still have plenty of work to do. But they’ll be performing higher-value work better and faster. The software won’t cost a bazillion dollars, you’ll understand how it works, and you can evolve it as the technology gets better and more reliable.

    Machines shouldn’t be the only ones learning

    I think I’ve discovered a competency that I’ve learned in the last six months. I’ve learned how to apply simple AI application design concepts such as RAG to develop novel and impactful solutions to business problems. (I’m sure my CBE friends could express this more precisely and usefully than I have.)

    In the months between now, as my team finishes building the first iteration of ALDA, and the end of the workshop series, technology will have progressed. The big AI vendors will have released at least one generation of new, more powerful AI foundation models. New players will come on the scene. New tools will emerge. But RAG, prompt engineering, and the other skills the participants develop will still apply. ALDA itself, which will almost certainly use tools and models that haven’t been released yet, will show how the competencies we learn still apply and how they evolve in a rapidly changing world.

    I hope you’ll consider enrolling your team in the ALDA workshop series. The cost, including all source code and artifacts, is $25,000 for the team. You can find an application form and prospectus here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.

    You can also write me directly at [email protected].

    Please join us.
    Please join us.