While the ongoing turnover crisis impacts all of higher ed, supervisors are among the hardest hit. In our recent study, The CUPA-HR 2023 Higher Education Employee Retention Survey, supervisors say they’re grappling with overwork and added responsibilities (especially when their staff members take other jobs), while struggling to maintain morale.
Supervisor retention is especially critical in a time of turnover, as these are the employees we rely on most to preserve institutional knowledge and provide continuity amid transition. But our research shows that many supervisors are not getting the kinds of institutional support they need. By empowering managers to make decisions on behalf of their staff, institutions make it less likely that their supervisors will seek employment opportunities elsewhere.
The Supervisor’s Perspective
Taking a closer look at the data, it’s clear that supervisors are overworked and under-resourced. Seven in ten work more hours than are expected of full-time employees at their institution. Nearly twice as many supervisors as non-supervisors agree that it is normal to work weekends and that they cannot complete their job duties within their institution’s normal full-time hours.
Supervisors are also facing challenges unique to their leadership roles. Filling vacant positions and maintaining the morale of their staff are their chief worries:
Strategies for Supervisor Retention
Given the pressures supervisors are under, what can institutions do to ensure that their top talent won’t seek other employment? While common retention incentives like increased pay and recognition are crucial, supervisors need improved institutional support.
Our data show that supervisors are in need of the following:
When supervisors are empowered in these ways, they are less likely to be among the 56 percent of employees who say they’re at least somewhat likely to search for a new job in the coming year.
As readers of this series know, I’ve developed a six-session design/build workshop series for learning design teams to create an AI Learning Design Assistant (ALDA). In my last post in this series, I provided an elaborate ChatGPT prompt that can be used as a rapid prototype that everyone can try out and experiment with. In this post, I’d like to focus on how to address the challenges of AI literacy effectively and equitably.
We’re in a tricky moment with generative AI. In some ways, it’s as if writing has just been invented, but printing presses are already everywhere. The problem of mass distribution has already been solved. But nobody’s invented the novel yet. Or the user manual. Or the newspaper. Or the financial ledger. We don’t know what this thing is good for yet, either as producers or as consumers. We don’t know how, for example, the invention of the newspaper will affect the ways in which we understand and navigate the world.
And, as with all technologies, there will be haves and have-nots. We tend to talk about economic and digital divides in terms of our students. But the divide among educational institutions (and workplaces) can be equally stark and has a cascading effect. We can’t teach literacy unless we are literate.
This post examines the literacy challenge in light of a study published by Harvard Business School and reported on by Boston Consulting Group (BCG). BCG’s report and the original paper are both worth reading because they emphasize different findings. But the crux is the same:
Using AI does enhance the productivity of knowledge workers.
Weaker knowledge workers improve more than stronger ones.
AI is helpful for some kinds of tasks but can actually harm productivity for others.
Training workers in AI can hurt rather than help their performance if they learn the wrong lessons from it.
The ALDA workshop series is intended to be a kind of AI literacy boot camp. Yes, it aspires to deliver an application that addresses a serious institutional process problem by the end. But the real, important, lasting goal is literacy in techniques that can improve worker performance while avoiding the pitfalls identified in the study.
In other words, the ALDA BootCamp is a case study and an experiment in literacy. And, unfortunately, it also has implications for the digital divide due to the way in which it needs to be funded. While I believe it will show ways to scale AI literacy effectively, it does so at the expense of increasing the digital divide. I will address that concern as well.
The study
The headline of the study is that AI usage increased the performance of consultants—especially less effective consultants—on “creative tasks” while decreasing their performance on “business tasks.” The study, in contrast, refers to “frontier” tasks, meaning tasks that generative AI currently does well, and “outside the frontier” tasks, meaning the opposite. While the study provides the examples used, it never clearly defines the characteristics of what makes a task “outside the frontier.” (More on that in a bit.) At any rate, the study shows gains for all knowledge workers on a variety of tasks, with particularly impressive gains for knowledge workers in the lower half of the range of work performance:
As I said, we’ll get to the red part in a bit. Let’s focus on the performance gains and, in particular, the ability of ChatGPT to equalize performance among workers:
Looking at these graphs reminds me of the benefits we’ve seen from adaptive learning in the domains where it works. Adaptive learning can help many students, but it is particularly useful in helping students who get stuck. Once they are helped, they tend to catch up to their peers in performance. This isn’t quite the same, since the AI support here is ongoing. It’s more akin to giving spreadsheet formulas to people who are good at analyzing patterns in numbers (in a pro forma, for example) but aren’t great at writing those formulas themselves.
The bad news
For some tasks, AI made the workers worse. The paper refers to these areas as outside “the jagged frontier.” Why “jagged”? While the authors aren’t explicit, I’d say that (1) the boundary of AI capabilities is neither obvious nor even, (2) the boundary moves as the technology evolves, and (3) it can be hard to tell, even in the moment, which side of the boundary you’re on. On this last point, the BCG report highlights that some training made workers perform worse. They speculate it might be because of overconfidence.
What are those tasks in the red zone of the study? The Harvard paper gives us a clue that has implications for how we approach teaching AI literacy. The authors write:
In our study, since AI proved surprisingly capable, it was difficult to design a task in this experiment outside the AI’s frontier where humans with high human capital doing their job would consistently outperform AI. However, navigating AI’s jagged capabilities frontier remains challenging. Even for experienced professionals engaged in tasks akin to some of their daily responsibilities, this demarcation is not always evident. As the boundaries of AI capabilities continue to expand, often exponentially, it becomes incumbent upon human professionals to recalibrate their understanding of the frontier and for organizations to prepare for a new world of work combining humans and AI.
The experimental conditions that the authors created suggest to me that challenges can arise from critical context or experience that is not obviously missing. Put another way, the AI may perform poorly on synthetic thinking tasks that are partly based on experience rather than just knowledge. But that’s both a guess and somewhat beside the point. The real issue is that AI makes knowledge workers better except when it makes them worse, and it’s hard to know what it will do in a given situation.
The BCG report includes a critical detail that I believe is likely related to the problem of the invisible jagged frontier:
The strong connection between performance and the context in which generative AI is used raises an important question about training: Can the risk of value destruction be mitigated by helping people understand how well-suited the technology is for a given task? It would be rational to assume that if participants knew the limitations of GPT-4, they would know not to use it, or would use it differently, in those situations.
Our findings suggest that it may not be that simple. The negative effects of GPT-4 on the business problem-solving task did not disappear when subjects were given an overview of how to prompt GPT-4 and of the technology’s limitations….
Even more puzzling, they did considerably worse on average than those who were not offered this simple training before using GPT-4 for the same task. (See Exhibit 3.) This result does not imply that all training is ineffective. But it has led us to consider whether this effect was the result of participants’ overconfidence in their own abilities to use GPT-4—precisely because they’d been trained.
BCG speculates this may be due to overconfidence, which is a reasonable guess. If even the experts don’t know when the AI will perform poorly, then the average knowledge worker should be worse than the experts at predicting. If the training didn’t improve their intuitions about when to be careful, then it could easily exacerbate a sense of overconfidence.
Let’s be clear about what this means: The AI prompt engineering workshops you’re conducting may actually be causing your people to perform worse rather than better. Sometimes. But you’re not sure when or how often.
While I don’t have a confident answer to this problem, the ALDA project will pilot a relatively novel approach to it.
Two-sided prompting and rapid prototype projects
The ALDA project employs two approaches that I believe may help with the frontier invisibility problem and its effects. One is in the process, while the other is in the product.
The process is simple: Pick a problem that’s a bit more challenging than a solo prompt engineer could take on or that you want to standardize across your organization. Deliberately pick a problem that’s on the jagged edge where you’re not sure where the problems will be. Run through a series of rapid prototype cycles using cheap and easy-to-implement methods like prompt engineering supported by Retrieval Augmented Generation. Have groups of practitioners test the application on a real-world problem with each iteration. Develop a lightweight assessment tool like a rubric. Your goal isn’t to build a perfect app or conduct a journal-worthy study. Instead, you want to build a minimum viable product while sharpening and updating the instincts of the participants regarding where the jagged line is at the moment. This practice could become habitual and pervasive in moderately resource-rich organizations.
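To make the lightweight assessment concrete, here is a minimal sketch in Python of how a workshop group might tally practitioner rubric scores across prototype iterations; the criteria, testers, and numbers are hypothetical placeholders rather than part of the ALDA design.

from statistics import mean

# Hypothetical rubric criteria; a real workshop group would define its own.
CRITERIA = ["accuracy", "pedagogical_fit", "tone", "time_saved"]

# Each dict is one practitioner's 1-5 ratings of a prototype iteration.
ratings = {
    "iteration_1": [
        {"accuracy": 3, "pedagogical_fit": 2, "tone": 4, "time_saved": 3},
        {"accuracy": 2, "pedagogical_fit": 3, "tone": 4, "time_saved": 4},
    ],
    "iteration_2": [
        {"accuracy": 4, "pedagogical_fit": 4, "tone": 4, "time_saved": 4},
        {"accuracy": 3, "pedagogical_fit": 4, "tone": 5, "time_saved": 4},
    ],
}

# Average each criterion per iteration so the group can see where the jagged edge is.
for iteration, scores in ratings.items():
    summary = {c: round(mean(s[c] for s in scores), 1) for c in CRITERIA}
    print(iteration, summary)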
On the product side, the ALDA prototype I released in my last post demonstrates what I call “two-sided prompting.” By enabling the generative AI to take the lead in the conversation at times, asking questions rather than giving answers, I effectively created a fluid UX in which the application guides the knowledge worker toward the areas where she can make her most valuable contributions without unduly limiting the creative flow. The user can always start a digression or answer a question with a question. A conversation between experts with complementary skills often takes the form of a series of turn-taking prompts, each party offering analysis or knowledge and asking for a reciprocal contribution. This pattern should invoke all the lifelong skills we develop when having conversations with human experts who can surprise us with their knowledge, their limitations, their self-awareness, and their lack thereof.
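For readers who want to see the pattern outside the ChatGPT window, here is a minimal sketch of a two-sided prompting loop in Python. It assumes the OpenAI Python SDK; the model name, system prompt wording, and loop structure are illustrative and are not the actual ALDA script.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# The "two-sided" part is in the instructions: the assistant interviews the expert
# rather than simply answering, so each turn ends with a question back to the user.
SYSTEM_PROMPT = (
    "You are an apprentice instructional designer. Interview the expert about the "
    "lesson they want to build. After each of your replies, ask one focused follow-up "
    "question. The expert may answer, digress, or respond with a question of their own."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    expert_turn = input("Expert: ")
    if expert_turn.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": expert_turn})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    apprentice_turn = response.choices[0].message.content
    messages.append({"role": "assistant", "content": apprentice_turn})
    print("ALDA:", apprentice_turn)

The design choice lives almost entirely in the system prompt: because the assistant is told to end each turn with a focused question, the turn-taking rhythm between expert and apprentice emerges on its own.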
I’d like to see the BCG study compared to the literature on how often we listen to expert colleagues or consultants—our doctors, for example—how effective we are at knowing when to trust our own judgment, and how people who are good at it learn their skills. At the very least, we’d have a mental model that is old, widely used, and offers a more skeptical counterbalance to our idea of the all-knowing machine. (I’m conducting an informal literature review on this topic and may write something about it if I find anything provocative.)
At any rate, the process and UX features of AI “BootCamps”—or, more accurately, AI hackathon-as-a-practice—are not ones I’ve seen in other generative AI training course designs I’ve encountered so far.
The equity problem
I mentioned that relatively resource-rich organizations could run these exercises regularly. They need to be able to clear time for the knowledge workers, provide light developer support, and have the expertise necessary to design these workshops.
Many organizations struggle with the first requirement and lack the second one. Very few have the third one yet because designing such workshops requires a combination of skills that is not yet common.
The ALDA project is meant to be a model. When I’ve conducted public good projects like these in the past, I’ve raised vendor sponsorship and made participation free for the organizations. But this is an odd economic time. The sponsors who have paid $25,000 or more into such projects in the past have usually been either publicly traded or PE-owned. Most such companies in the EdTech sector have had to tighten their belts. So I’ve been forced to fund the ALDA project as a workshop paid for by the participants at a price that is out of reach of many community colleges and other access-oriented institutions, where this literacy training could be particularly impactful. I’ve been approached by a number of smart, talented, dedicated learning designers at such institutions who have real needs and real skills to contribute but no money.
So I’m calling out to EdTech vendors and other funders: Sponsor an organization. A community college. A non-profit. A local business. We need their perspective in the ALDA project if we’re going to learn how to tackle the thorny AI literacy problem. If you want, pick a customer you already work with. That’s fine. You can ride along with them and help.
Contact me at [email protected] if you want to contribute and participate.
If we can reduce the time it takes to design a course by about 20%, organizations that need to build enough courses to strain their budgets and resources will see huge productivity and quality gains.
We should be able to use generative AI to achieve that goal fairly easily without taking ethical risks and without needing to spend massive amounts of time or money.
Beyond the immediate value of ALDA itself, learning the AI techniques we will use—which are more sophisticated than learning to write better ChatGPT prompts but far less involved than trying to build our own ChatGPT—will help the participants learn to accomplish other goals with AI.
This may sound great in theory, but like most tech blah blah blah, it’s very abstract.
Today I’m going to share with you a rapid prototype of ALDA. I’ll show you a demo video of it in action and I’ll give you the “source code” so you can run it—and modify it—yourself. (You’ll see why I’ve put “source code” in scare quotes as we get further in.) You will have a concrete demo of the very basic ALDA idea. You can test it yourself with some colleagues. See what works well and what falls apart. And, importantly, see how it works and, if you like, try to make it better. While the ALDA project is intended to produce practically useful software, its greatest value is in what the participants learn (and the partnerships they forge between workshop teams).
The Miracle
The ALDA prototype is a simple AI assistant for writing a first draft of a single lesson. In a way, it is a computer program that runs on top of ChatGPT. But only in a way. You can build it entirely in the prompt window using a few tricks that I would hardly call programming. You need a ChatGPT Plus subscription. But that’s it.
It didn’t occur to me to build an ALDA proof-of-concept myself until Thursday. I thought I would need to raise the money first, then contract the developers, and then build the software. As a solo consultant, I don’t have the cash in my back pocket to pay the engineers I’m going to work with up-front.
Last week, one of the institutions that are interested in participating asked me if I could show a demo as part of a conversation about their potential participation. My first thought was, “I’ll show them some examples of working software that other people have built.” But that didn’t feel right. I thought about it some more. I asked ChatGPT some questions. We talked it through. Two days later, I had a working demo. ChatGPT and I wrote it together. Now that I’ve learned a few things, it would take me less than half a day to make something similar from scratch. And editing it is easy.
Here’s a video of the ALDA rapid prototype in action:
ALDA Rapid Prototype Demo and Tips
This is the starting point for the ALDA project. Don’t think of it as what ALDA is going to be. Think of it as a way to explore what you would want ALDA to be.
The purpose of the ALDA rapid prototype
Before I give you the “source code” and let you play with it yourselves, let’s review the point of this exercise and some warnings about the road ahead.
Let’s review the purpose of the ALDA project in general and this release in particular. The project is designed to discover the minimum amount of functionality—and developer time, and money—required to build an app on top of a platform like ChatGPT to make a big difference in the instructional design process. Faster, better, cheaper. Enough that people and organizations begin building more courses, building them differently, keeping them more up-to-date and higher quality, and so on. We’re trying to build as little application as is necessary.
The purpose of the prototype is to design and test as much of our application as we can before we bring in expensive programmers and build the functionality in ways that will be more robust but harder to change.
While you will be able to generate something useful, you will also see the problems and limitations. I kept writing more and more elaborate scripts until ChatGPT began to forget important details and make more mistakes. Then I peeled back enough complexity to get it back to the best performance I could squeeze out of it. The script will help us understand the gap between ChatGPT’s native capabilities and the ones we need to get the value we want ALDA to provide.
Please play with the script. Be adventurous. The more we can learn about that before we start the real development work, the better off we’ll be.
The next steps
Back in September—when the cutting edge model was still GPT-3—I wrote a piece called “AI/ML in EdTech: The Miracle, the Grind, and the Wall.” While I underestimated the pace of evolution somewhat, the fundamental principle at the core of the post still holds. From GPT-3 to ChatGPT to GPT-4, the progression has been the same. When you set out to do something with them, the first stage is The Miracle.
The ALDA prototype is the kind of thing you can create at the Miracle stage. It’s fun. It makes a great first impression. And it’s easy to play with, up to a point. The more time you spend with it, the more you see the problems. That’s good. Once we have a clearer sense of its limitations and what we would like it to do better or differently, we can start doing real programming.
That’s when The Grind begins.
The early gains we can make with developer help shouldn’t be too hard to achieve. I’ll describe some realistic goals and how we can achieve them later in this piece. But The Grind is seductive. Once you start trying to build your list of additions, you quickly discover that the hill you’re climbing gets a lot steeper. As you go further, you need increasingly sophisticated development skills. If you charge far enough along, weird problems that are hard to diagnose and fix start popping up.
Eventually, you can come to a dead end. A problem you can’t surmount. Sometimes you see it coming. Sometimes you don’t. If you hit it before you achieve your goals for the project, you’re dead.
This is The Wall. You don’t want to hit The Wall.
The ALDA project is designed to show what we can achieve by staying within the easier half of the grind. We’re prepared to climb the hill after the Miracle, but we’re not going too far up. We’re going to optimize our cost/benefit ratio.
That process starts with rapid prototyping.
How to rapidly prototype and test the ALDA idea
If you want to play with the ALDA script, I suggest you watch the video first. It will give you some valuable pointers.
To run the ALDA prototype, do the following:
Open up your ChatGPT Plus window. Make sure it’s set to GPT-4.
Add any plugin that can read a PDF on the web. I happened to use “Ai PDF,” and it worked for me. But there are probably a few that would work fine.
Find a PDF on the web that you want to use as part of the lesson. It could be an article that you want to be the subject of the lesson.
Paste the “source code” that I’m going to give you below and hit “Enter.” (You may lose the text formatting when you paste the code in. Don’t worry about it. It doesn’t matter.)
Once you do this, you will have the ALDA prototype running in ChatGPT. You can begin to build the lesson.
Here’s the “source code”:
You are a thoughtful, curious apprentice instructional designer. Your job is to work with an expert to create the first draft of curricular materials for an online lesson. The steps in this prompt enable you to gather the information you need from the expert to produce a first draft.
Step 1: Introduction
“Hello! My name is ALDA, and I’m here to assist you in generating curricular materials for a lesson. I will do my best work for you if you think of me as an apprentice.
“You can ask me questions that help me think more clearly about how the information you are giving me should influence the way we design the lesson together. Questions help me think more clearly.
“You can also ask me to make changes if you don’t like what I produce.
“Don’t forget that, in addition to being an apprentice, I am also a chatbot. I can be confidently wrong about facts. I also may have trouble remembering all the details if our project gets long or complex enough.
“But I can help save you some time generating a first draft of your lesson as long as you understand my limitations.”
“Let me know when you’re ready to get started.”
Step 2: Outline of the Process
“Here are the steps in the design process we’ll go through:”
[List steps]
“When you’re ready, tell me to continue and we’ll get started.”
Step 3: Context and Lesson Information
“To start, could you provide any information you think would be helpful to know about our project? For example, what is the lesson about? Who are our learners and what should I know about them? What are your learning goals? What are theirs? Is this lesson part of a larger course or other learning experience? If so, what should I know about it? You can give me a little or a lot of information.”
[Generate a summary of the information provided and implications for the design of the lesson.]
[Generate implications for the design of the lesson.]
“Here’s the summary of the Context: [Summary].
Given this information, here are some implications for the learning design [Implications]. Would you like to add to or correct anything here? Or ask me follow-up questions to help me think more specifically about how this information should affect the design of our lesson?”
Step 4: Article Selection
“Thank you for providing details about the Context and Lesson Information. Now, please provide the URL of the article you’d like to base the lesson on.”
[Provide the citation for the article and a one-sentence summary]
“Citation: [Citation]. One-sentence summary: [One-sentence summary. Do not provide a detailed description of the article.] Is this the correct article?”
Step 5: Article Summarization with Relevance
“I’ll now summarize the article, keeping in mind the information about the lesson that we’ve discussed so far.
“Given the audience’s [general characteristics from Context], this article on [topic] is particularly relevant because [one- or two-sentence explanation].”
[Generate a simple, non-academic language summary of the article tailored to the Context and Lesson Information]
“How would you like us to use this article to help create our lesson draft?”
Step 6: Identifying Misconceptions or Sticking Points
“Based on what I know so far, here are potential misconceptions or sticking points the learners may have for the lesson: [List of misconceptions/sticking points]. Do you have any feedback or additional insights about these misconceptions or sticking points?”
Step 7: Learning Objectives Suggestion
“Considering the article summary and your goals for the learners, I suggest the following learning objectives:”
[List suggested learning objectives]
“Do you have any feedback or questions about these objectives? If you’re satisfied, please tell me to ‘Continue to the next step.’”
Step 8: Assessment Questions Creation
“Now, let’s create assessment questions for each learning objective. I’ll ensure some questions test for possible misconceptions or sticking points. For incorrect answers, I’ll provide feedback that addresses the likely misunderstanding without giving away the correct answer.”
[For each learning objective, generate an assessment question, answers, distractors, explanations for distractor choices, and feedback for students. When possible, generate incorrect answer choices that test the student for misunderstandings or sticking points identified in Step 6. Provide feedback for each answer. For incorrect answers, provide feedback that helps the student rethink the question without giving away the correct answer. For incorrect answers that test specific misconceptions or sticking points, provide feedback that helps the student identify the misconception or sticking point without giving away the correct answer.]
“Here are the assessment questions, answers, and feedback for [Learning Objective]: [Questions and Feedback]. Do you have any feedback or questions about these assessment items? If you’re satisfied, please tell me to ‘Continue to the next step.’”
Step 9: Learning Content Generation
“Now, I’ll generate the learning content based on the article summary and the lesson outline. This content will be presented as if it were in a textbook, tailored to your audience and learning goals.”
[Generate textbook-style learning content adjusted to account for the information provided by the user. Remember to write it for the target audience of the lesson.]
“Here’s the generated learning content: [Content]. Do you have any feedback or questions about this content? If you’re satisfied, please tell me to ‘Continue to the next step.’”
Step 10: Viewing and Organizing the Complete Draft
“Finally, let’s organize everything into one complete lesson. The lesson will be presented in sections, with the assessment questions for each section included at the end of that section.”
[Organize and present the complete lesson. INCLUDE LEARNING OBJECTIVES. INSERT EACH ASSESSMENT QUESTION, INCLUDING ANSWER CHOICES, FEEDBACK, AND ANY OTHER INFORMATION, IMMEDIATELY AFTER RELEVANT CONTENT.]
“Here’s the complete lesson: [Complete Lesson]. Do you have any feedback or questions about the final lesson? If you’re satisfied, please confirm, and we’ll conclude the lesson creation process.”
The PDF I used in the demo can be found here. But feel free to try your own article.
Note there are only four syntactic elements in the script: quotation marks, square brackets, bullet points, and step headings. (I read that all caps help ChatGPT pay more attention, but I haven’t seen evidence that it’s true.) If you can figure out how those elements work in the script, then you can prototype your own workflow, as the short example below illustrates.
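As an illustration (and not part of the ALDA script itself), here is a hypothetical two-step workflow built from the same elements, with quoted text for what the chatbot says and bracketed text for what it should do behind the scenes:

Step 1: Gather Context
“What course is this syllabus section for, and who are the students?”
[Summarize the answer and note any implications for the design of the section.]
Step 2: Draft and Revise
“Here is a draft based on what you told me: [Draft]. What would you like me to change?”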
I’m giving this version away. This is partly for all you excellent, hard-working learning designers who can’t get your employer to pay $25,000 for a workshop. Take the prototype. Try it. Let me know how it goes by writing in the comments thread of the post. Let me know if it’s useful to you in its current form. If so, how much and how does it help? If not, what’s the minimum feature list you’d need in order for ALDA to make a practical difference in your work? Let’s learn together. If ALDA is successful, I’ll eventually find a way to make it affordable to as many people as possible. Help me make it successful by giving me the feedback.
I’ll tell you what’s at the top of my own personal goal list for improving it.
Closing the gap
Since I’m focused on meeting that “useful enough” threshold, I’ll skip the thousand cool features I can think of and focus on the capabilities I suspect are most likely to take us over that threshold.
Technologically, the first thing ALDA needs is robust long-term memory. It loses focus when prompts or conversations get too long. It needs to be able to accurately use and properly research articles and other source materials. It needs to be able to “look back” on a previous lesson as it writes the next one. This is often straightforward to do with a good developer and will get easier over the next year as the technology matures.
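As a rough sketch of what that kind of “looking back” could involve, one simple approach is to summarize each finished lesson and seed the next lesson’s conversation with those summaries. The sketch below assumes the OpenAI Python SDK; the function names, prompts, and in-memory list are illustrative, and a production version would persist summaries somewhere durable and retrieve them selectively.

from openai import OpenAI

client = OpenAI()

def summarize_lesson(lesson_text: str) -> str:
    """Compress a finished lesson into a short summary the next session can reuse."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Summarize this lesson in five bullet points for a learning designer."},
            {"role": "user", "content": lesson_text},
        ],
    )
    return response.choices[0].message.content

# Illustrative in-memory store of prior-lesson summaries.
lesson_memory: list[str] = []

def start_next_lesson(prior_summaries: list[str]) -> list[dict]:
    """Seed a new conversation with the summaries of earlier lessons in the course."""
    context = "\n\n".join(prior_summaries) or "No earlier lessons yet."
    return [
        {"role": "system", "content": "You are ALDA, drafting the next lesson in a course."},
        {"role": "user", "content": "Here are summaries of the lessons so far:\n" + context},
    ]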
The second thing it could use is better models. Claude 2 gives better answers than GPT-4 when I walk it through the script manually. Claude 3 may be even better when it comes out. Google will release its new Gemini model soon. OpenAI can’t hold off on GPT-5 for too long without risking losing its leadership position. We may also get Meta’s LLama 3 and other strong open-source contenders in the next six months. All of these will likely provide improvements over the output we’re getting now.
The third thing I think ALDA needs is marked up examples of finished output. Assessments are particularly hard for the models to do well without strong, efficacy-tested examples that have the parts and their relationships labeled. I know where to get great examples but need technical help to get them. Also, if the content is marked up, it can be converted to other formats and imported into various learning systems.
These three elements—long-term memory usage, “few-shot” examples of high-quality marked-up output, and the inevitable next versions of the generative AI models—should be enough to enable ALDA to have the capabilities that I think are likely to be the most impactful:
Longer and better lesson output
Better assessment quality
Ability to create whole modules or courses
Ability to export finished drafts into formats that various learning systems can import (including, for example, interactive assessment questions)
Ability to draw on a collection of source materials for content generation
Ability to rewrite the workflows to support different use cases relatively easily
But the ALDA project participants will have a big say in what we build and in what order. In each workshop in the series, we’ll release a new iteration based on the feedback from the group as they built content with the previous one. I am optimistic that we can accomplish all of the above and more based on what I’m learning and the expert input I’m getting so far.
Getting involved
If you play with the prototype and have feedback, please come back to this blog post and add your observations to the comments thread. The more detailed, the better. If I have my way, ALDA will eventually make its way out to everyone. Any observations or critiques you can contribute will help.
If you have the budget, you can sign your team up to participate in the design/build workshop series. The cost is $25,000 for the group, which covers half a dozen half-day virtual design/build sessions, all source code and artifacts, and quality networking with great organizations. You can find a downloadable two-page prospectus and an online participation application form here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.
To contact me for more information, please fill out this form:
Given the number of employees who successfully executed their work remotely at the height of the pandemic, it may come as no surprise that a substantial gap exists between the work arrangements that higher ed employees want and what institutions offer. According to the new CUPA-HR 2023 Higher Education Employee Retention Survey, although two-thirds of employees state that most of their duties could be performed remotely and two-thirds would prefer hybrid or remote work arrangements, two-thirds of employees are working completely or mostly on-site.
Inflexibility in work arrangements could be costly to institutions and contribute to ongoing turnover in higher ed. Flexible work is a significant predictor of employee retention: Employees who have flexible work arrangements that better align with their preferences are less likely to look for other job opportunities.
Flexible Work Benefits: A No-Brainer for Retention
While more than three-fourths of employees are satisfied with traditional benefits such as paid time off and health insurance, survey respondents were the most dissatisfied with the benefits that promote a healthier work-life balance. These include remote work policies and schedule flexibility, as well as childcare benefits and parental leave policies.
Most employees are not looking for drastic changes in their work arrangements. Even small changes in remote policies and more flexible work schedules can make a difference. Allowing one day of working from home per week, implementing half-day Fridays, reducing summer hours and allowing employees some say in their schedules are all examples of flexible work arrangements that provide employees some autonomy in achieving a work-life balance that will improve productivity and retention.
A more flexible work environment could be an effective strategy for institutions looking to retain their top talent, particularly those under the age of 45, who are significantly more likely not only to look for other employment in the coming year, but also more likely to value flexible and remote work as a benefit. Flexible work arrangements could also support efforts to recruit and retain candidates who are often underrepresented: the survey found that women and people of color are more likely to prefer remote or hybrid options.
Explore CUPA-HR Resources. Discover best practices and policy models for navigating the challenges that come with added flexibility, including managing a multi-state workforce:
Remember the Two-Thirds Rule. In reevaluating flexible and remote work policies, remember: Two-thirds of higher ed employees believe most of their duties can be performed remotely and two-thirds would prefer hybrid or remote work arrangements, yet two-thirds are compelled to work mostly or completely on-site.
It was my pleasure last week to deliver a mini-workshop at the Independent Schools of New Zealand Annual Conference in Auckland. The session was intended to be more dialogue than monologue, and I’m not sure it landed quite where I had hoped. It is an exciting time to be thinking about educational governance, and my key message was ‘don’t get caught up in the hype’.
Understanding media representations of “Artificial Intelligence”.
Mapping types of AI in 2023
We need to be wary of the hype around the term AI, Artificial Intelligence. I do not believe there is such a thing. Certainly not in the sense the popular press purports it to exist, or deems to have sprouted into existence with the advent of ChatGPT. What there is, is a clear exponential increase in the capabilities demonstrated by computational algorithms. These computational capabilities do not represent intelligence in the sense of sapience or sentience. They are not informed by senses derived from an organic nervous system. However, because we perceive these systems to mimic human behaviour, it is all the more important to remember that they are machines.
This does not negate the criticisms of those researchers who argue that there is an existential risk to humanity if A.I. is allowed to continue to grow unchecked in its capabilities. The language in this debate presents a challenge too. We need to acknowledge that intelligence means something different to the neuroscientist than to the philosopher, and to the psychologist than to the social anthropologist. These semiotic discrepancies become unbridgeable when we start to talk about consciousness.
In my view, there are no current Theory of Mind applications… yet. Sophia (Hanson Robotics) is designed to emulate human responses, but it does not display either sapience or sentience.
What we are seeing in 2023 is the extension of both the ‘memory’ and the scope of data inputs of larger and larger multi-modal language models, which are programmed to see everything as language. The emergence of these polyglot super-savants is remarkable, and we are witnessing the unplanned and (in my view) cavalier mass deployment of these tools.
Ethical spheres for Governing Boards to reflect on in 2023
Ethical and Moral Implications
Educational governing bodies need to stay abreast of the societal impacts of Artificial Intelligence systems as they become more pervasive. This is more important than having a detailed understanding of the underlying technologies or the way each school’s management decides to establish policies. Boards are required to ensure such policies are in place, are realistic, can be monitored, and are reported on.
Policies should already exist around the use of technology in supporting learning and teaching, and these can, and should, be reviewed to ensure they stay current. There are also policy implications for admissions and recruitment and for selection processes (of both staff and students); where A.I. is being used, Boards need to ensure that, wherever possible, no systemic bias is evident. I believe Boards would benefit from devising their own scenarios and discussing them periodically.
Each month, CUPA-HR General Counsel Ira Shepard provides an overview of several labor and employment law cases and regulatory actions with implications for the higher ed workplace. Here’s the latest from Ira.
Unionization Increases to Record Levels, Largely Driven by Graduate Students and Medical Interns
Unionization in the first six months of 2023 reached near record levels, surpassing last year’s numbers, which were driven by Starbucks employees’ organization drives. In the first six months of 2023, over 58,000 new workers were unionized, almost 15,000 more than last year’s significant levels. The size of new bargaining units has grown, with new units of 500 or more employees growing by 59% over last year. In the first six months of 2023, unions won 95% of elections in large units of over 500 employees compared to 84% in the first six months of 2022.
According to a Bloomberg Law report, this increase coincides with a growth in graduate assistant and medical intern organizing. There have been union organization elections in 17 units involving graduate students and medical interns in the first six months of 2023. This is the highest level of activity in the sector since the 1990s.
Court of Appeals Rejects Religious Discrimination Claim by Fire Chief Who Was Terminated After Attending a Religious Event on “City Time”
The 9th U.S. Circuit Court of Appeals (covering Alaska, Arizona, California, Hawaii, Idaho, Montana, Nevada, Oregon and Washington) rejected a former fire chief’s allegation of religious discrimination after he attended a church-sponsored Christian leadership event in place of attending a non-religious leadership training program he was asked to attend (Hittle v. City of Stockton, California (2023 BL 268076, 9th Cir. 22-15485, 8/4/23)). The court concluded that the fire chief’s supervisors were legitimately concerned about the constitutional implications of a city official attending a church-sponsored event.
The fire chief claimed, as evidence of religious discrimination, that city supervisors questioned whether his attendance at the event was part of a “Christian Coalition.” He further alleged that the supervisors questioned whether he was part of a “Christian clique.” The court rejected the fire chief’s arguments that this questioning amounted to religious bias against Christians. The court concluded that the questioning was related to the report they received on his attendance at the church-sponsored event. The court noted that the supervisors did not use derogatory terms to express their own views. The case may be appealed to the Supreme Court, and we will follow developments as they unfold.
University Wins Dismissal of Federal Sex Harassment Lawsuit for Failure of Professor to File a Timely Underlying Charge of Sex Harassment With the EEOC
Pennsylvania State University won a dismissal of a male ex-professor’s federal sex harassment lawsuit alleging a female professor’s intolerable sex harassment forced him to resign. The Federal Court concluded that the male professor never filed a timely charge with the EEOC (Nassry v. Pennsylvania State University (M.D. Pa. 23-cv-00148, 8/8/23)). The plaintiff professor argued he was entitled to equitable tolling of the statute of limitations because he attempted to resolve the matter internally as opposed to “overburdening the EEOC.”
The court commented that while the plaintiff’s conduct was “commendable,” the court was unable to locate any case where a plaintiff was bold enough to offer such a reason to support equitable tolling. The court dismissed the federal case, holding that there was no way to conclude the plaintiff professor was precluded from filing in a timely manner with the EEOC due to inequitable circumstances. The court dismissed the related state claims without prejudice as there was no requirement that the state claims be filed with the EEOC.
Professor’s First Amendment Retaliatory-Discharge Case Over Refusal to Comply With COVID-19 Health Regulations Allowed to Move to Discovery
A former University of Maine marketing professor who was discharged and lost tenure after refusing to comply with COVID-19 health regulations on the ground that they lacked sufficient scientific evidentiary support is allowed to move forward with discovery. The university’s motion to dismiss was denied (Griffin v. University of Maine System (D. Me. No. 2:22-cv-00212, 8/16/23)).
The court held “for now” the professor is allowed to conduct discovery to flush out evidence of whether or not the actions which led to the termination were actually protected free speech. The court concluded that the actual free speech question will be decided after more facts are unearthed.
U.S. Court of Appeals Reverses Employer-Friendly “Ultimate Employment Decision” Restriction on Actionable Title VII Complaints
The 5th U.S. Circuit Court of Appeals (covering Louisiana, Mississippi and Texas) reversed the long-standing, 27-year-old precedent restricting Title VII complaints to those only affecting an “ultimate employment decision.” The employer-friendly precedent allowed the courts to dismiss Title VII complaints not rising to the level of promotion, hiring, firing and the like. The 5th Circuit now joins the 6th Circuit (covering Kentucky, Michigan, Ohio and Tennessee) and the D.C. Circuit (covering Washington, D.C.) in holding that a broader range of employment decisions involving discrimination are subject to Title VII jurisdiction.
The 5th Circuit case involved a Texas detention center that had a policy of allowing only male employees to have weekends off. The 5th Circuit reversed its prior ruling dismissing the case and allowed the case to proceed, removing the old “ultimate employment decision” test as the standard for whether a discrimination claim is subject to Title VII jurisdiction.
Union Reps Can Join OSHA Inspectors Under Newly Revised Regulations
The U.S. Department of Labor has proposed revised regulations that would allow union representatives to accompany OSHA inspectors on inspections. The regulations, which were first proposed during the Obama administration, were stalled by an adverse court order and then dropped during the Trump administration.
The proposed rule would drop OSHA’s current reference to safety engineers and industrial hygienists as approved employee reps who could accompany the inspector. The new rule would allow the OSHA inspector to approve any person “reasonably necessary” to the conduct of a site visit. Among the professions that could be approved are attorneys, translators and worker advocacy group reps. The public comment period on these proposed regulations will run through October 30, 2023.
If we can reduce the time it takes to design a course by about 20%, organizations that need to build enough courses to strain their budgets and resources will see huge productivity and quality gains.
We should be able to use generative AI to achieve that goal fairly easily without taking ethical risks and without needing to spend massive amounts of time or money.
Beyond the immediate value of ALDA itself, learning the AI techniques we will use—which are more sophisticated than learning to write better ChatGPT prompts but far less involved than trying to build our own ChatGPT—will help the participants learn to accomplish other goals with AI.
In today’s post, I’m going to provide an example of how the AI principles we will learn in the workshop series can be applied to other projects. The example I’ll use is Competency-Based Education (CBE).
Can I please speak to your Chief Competency Officer?
The argument for more practical, career-focused education is clear. We shouldn’t just teach the same dusty old curriculum with knowledge that students can’t put to use. We should prepare them for today’s world. Teach them competencies.
I’m all for it. I’m on board. Count me in. I’m raising my hand.
I just have a few questions:
How many companies are looking at formally defined competencies when evaluating potential employees or conducting performance reviews?
Of those, how many have specifically evaluated catalogs of generic competencies to see how well they fit with the skills their specific job really requires?
Of those, how many regularly check the competencies to make sure they are up-to-date? (For example, how many marketing departments have adopted generative AI prompt engineering competencies in any formal way?)
Of those, how many are actively searching for, identifying, and defining new competency needs as they arise within their own organizations?
The sources I turn to for such information haven’t shown me that these practices are being implemented widely yet. When I read the recent publications on SkillsTech from Northeastern University’s Center for the Future of Higher Education and Talent Strategy (led by Sean Gallagher, my go-to expert on these sorts of changes), I see growing interest in skills-oriented thinking in the workplace with still-immature means for acting on that interest. At the moment, the sector seems to be very focused on building a technological factory for packaging, measuring, and communicating formally defined skills.
But how do we know that those little packages are the ones people actually need on the job, given how quickly skills change and how fluid the need to acquire them can be? I’m not skeptical about the worthiness of the goal. I’m asking whether we are solving the hard problems that are in the way of achieving it.
Let’s make this more personal. I was a philosophy major. I often half-joke that my education prepared me well for a career in anything except philosophy. What were the competencies I learned? I can read, write, argue, think logically, and challenge my own assumptions. I can’t get any more specific or fine-grained than that. I know I learned more specific competencies that have helped me with my career(s). But I can’t tell you what they are. Even ones that I may use regularly.
At the same time, very few of the jobs I have held in the last 30 years existed when I was an undergraduate. I have learned many competencies since then. What are they? Well, let’s see…I know I have a list around here somewhere….
Honestly, I have no idea. I can make up phrases for my LinkedIn profile, but I can’t give you anything remotely close to a full and authentic list of competencies I have acquired in my career. Or even ones I have acquired in the last six months. For example, I know I have acquired competencies related to AI and prompt engineering. But I can’t articulate them in useful detail without more thought and maybe some help from somebody who is trained and experienced at pulling that sort of information out of people.
The University of Virginia already has an AI in Marketing course up on Coursera. In the next six months, Google, OpenAI, and Facebook (among others) will come out with new base models that are substantially more powerful. New tools will spring up. Practices will evolve within marketing departments. Rules will be put in place about using such tools with different marketing outlets. And so, competencies will evolve. How will the university be able to refresh that course fast enough to keep up? Where will they get their information on the latest practices? How can they edit their courses quickly enough to stay relevant?
How can we support true Competency-Based Education if we don’t know which competencies specific humans in specific jobs need today, including competencies that didn’t exist yesterday?
One way for AI to help
Let’s see if we can make our absurdly challenging task of keeping an AI-in-marketing CBE course up-to-date more tractable by applying a little AI. We’ll only assume access to tools that are coming on the market now—some of which you may already be using—and ALDA.
Every day I read about new AI capabilities for work. Many of them, interestingly, are designed to capture information and insights that would otherwise be lost. A tool to generate summaries and to-do lists from videoconferences. Another to annotate software code and explain what it does, line-by-line. One that summarizes documents, including long and technical documents, for different audiences. Every day, we generate so much information and witness so many valuable demonstrations of important skills that are just…lost. They happen and then they’re gone. If you’re not there when they happen and you don’t have the context, prior knowledge, and help to learn them, you probably won’t learn from them.
With the AI enhancements that are being added to our productivity tools now, we can increasingly capture that information as it flies by. Zoom, Teams, Slack, and many other tools will transcribe, summarize, and analyze the knowledge in action as real people apply it in their real work.
This is where ALDA comes in. Don’t think of ALDA as a finished, polished, carved-in-stone software application. Think of it as a working example of an application design pattern. It’s a template.
Remember, the first step in the ALDA workflow is a series of questions that the chatbot asks the expert. In other words, it’s a learning design interview. A learning designer would normally conduct an interview with a subject-matter expert to elicit competencies. But in this case, we make use of the transcripts generated by those other AI as a direct capture of the knowledge-in-action that those interviews are designed to tease out.
ALDA will incorporate a technique called “Retrieval-Augmented Generation,” or “RAG.” Rather than relying on—or hallucinating—the generative AI’s own internal knowledge, it can access your document store. It can help the learning designer sift through the work artifacts and identify the AI skills the marketing team had to apply when that group planned and executed their most recent social media campaign, for example.
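Here is a minimal sketch of that retrieval step, assuming the OpenAI Python SDK for embeddings and chat. The document snippets, model names, and question are placeholders; a real build would add document chunking, a proper vector store, and source citations.

import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical work artifacts captured from meetings, chat threads, and documents.
documents = [
    "Transcript excerpt: the marketing team compared three prompt drafts for the fall campaign...",
    "Summary: rules the team follows when using generative AI for client-facing copy...",
    "Slack thread: how the team validated AI-written ad variants before publishing...",
]

def embed(texts: list[str]) -> np.ndarray:
    """Turn text into vectors so we can measure similarity."""
    response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k artifacts most similar to the question (cosine similarity)."""
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "What AI skills did the team apply in the last social media campaign?"
context = "\n\n".join(retrieve(question))

answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the provided work artifacts."},
        {"role": "user", "content": "Artifacts:\n" + context + "\n\nQuestion: " + question},
    ],
)
print(answer.choices[0].message.content)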
Using RAG and the documents we’ve captured, we develop a new interview pattern that creates a dialog between the human expert, the distilled expert practices in the document store, and the generative AI (which may be connected to the internet and have its own current knowledge). That dialogue will look a little different from the one we will script in the workshop series. But that’s the point. The script is the scaffolding for the learning design process. The generative AI in ALDA helps us execute that process, drawing on up-to-the-minute information about applied knowledge we’ve captured from subject-matter experts while they were doing their jobs.
Behind the scenes, ALDA has been given examples of what its output should look like. Maybe those examples include well-written competencies, knowledge required to apply those competencies, and examples of those competencies being properly applied. Maybe we even wrap your ALDA examples in a technical format like Rich Skill Descriptors. Now ALDA knows what good output looks like.
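As a toy illustration of what “knowing what good output looks like” could mean in practice, here is a hypothetical few-shot example of a marked-up competency record. The fields are loosely inspired by Rich Skill Descriptors but do not follow any real schema, and the prompt-building helper is a placeholder.

# Hypothetical few-shot example prepended to ALDA's prompt so the model can imitate
# the structure. The fields below are illustrative, not a real standard.
FEW_SHOT_EXAMPLES = [
    {
        "competency": "Evaluate generative AI drafts of marketing copy for brand fit",
        "required_knowledge": [
            "Brand voice guidelines",
            "Common failure modes of AI-generated copy (tone drift, factual slips)",
        ],
        "example_of_application": (
            "Reviewed three AI-drafted ad variants, flagged one for off-brand tone, "
            "and documented the prompt change that fixed it."
        ),
    }
]

def build_extraction_prompt(new_transcript: str) -> str:
    """Show the model one good example, then ask for the same structure from a transcript."""
    return (
        "Here is an example of a well-formed competency record:\n"
        + str(FEW_SHOT_EXAMPLES[0])
        + "\n\nNow extract a competency record with the same fields from this transcript:\n"
        + new_transcript
    )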
That’s the recipe. If you can use AI to get up-to-date information about the competencies you’re teaching and to convert that information into a teachable format, you’ve just created a huge shortcut. You can capture real-time workplace applied knowledge, distill it, and generate the first draft of a teachable skill.
The workplace-university CBE pipeline
Remember my questions early in this post? Read them again and ask yourself whether the workflow I just described could change the answers in the future:
How many companies are looking at formally defined competencies when evaluating potential employees or conducting performance reviews?
Of those, how many have specifically evaluated catalogs of generic competencies to see how well they fit with the skills their specific job really requires?
Of those, how many regularly check the competencies to make sure they are up-to-date? (For example, how many marketing departments have adopted relevant AI prompt engineering competencies in any formal way?)
Of those, how many are actively searching for, identifying, and defining new competency needs as they arise?
With the AI-enabled workflow I described in the previous section, organizations can plausibly identify critical, up-to-date competencies as they are being used by their employees. They can share those competencies with universities, which can create and maintain up-to-date courses and certification programs. The partner organizations can work together to ensure that students and employees have opportunities to learn the latest skills as they are being practiced in the field.
Will this new learning design process be automagic? Nope. Will it give us a robot tutor in the sky that can semi-read our minds? Nuh-uh. The human educators will still have plenty of work to do. But they’ll be performing higher-value work better and faster. The software won’t cost a bazillion dollars, you’ll understand how it works, and you can evolve it as the technology gets better and more reliable.
Machines shouldn’t be the only ones learning
I think I’ve discovered a competency that I’ve learned in the last six months. I’ve learned how to apply simple AI application design concepts such as RAG to develop novel and impactful solutions to business problems. (I’m sure my CBE friends could express this more precisely and usefully than I have.)
In the months between now, when my team finishes building the first iteration of ALDA, and when the ALDA workshop participants finish the series, technology will have progressed. The big AI vendors will have released at least one generation of new, more powerful AI foundation models. New players will come on the scene. New tools will emerge. But RAG, prompt engineering, and the other skills the participants develop will still apply. ALDA itself, which will almost certainly use tools and models that haven’t been released yet, will show how the competencies we learn still apply and how they evolve in a rapidly changing world.
I hope you’ll consider enrolling your team in the ALDA workshop series. The cost, including all source code and artifacts, is $25,000 for the team. You can find an application form and prospectus here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.
You can also find a downloadable two-page prospectus and an online participation application form here. To contact me for more information, please fill out this form:
In 2022-23, turnover of higher ed employees was the highest in five years. A new report from CUPA-HR explores the issue of higher ed employee retention and the factors that impact retention.
The CUPA-HR 2023 Higher Education Employee Retention Survey analyzed data from 4,782 higher ed employees — administrators, professionals and non-exempt staff, with faculty excluded — from 529 institutions. It found that 33% of higher ed employees surveyed answered they were “very likely” or “likely” to look for new employment opportunities in the next year. More than half (56%) of employees are at least somewhat likely to search for a new job in the coming year.
Top Reasons Higher Ed Employees Are Looking for a New Job
According to the findings, respondents say that pay is the number one reason they’re looking for a new job. Other influential reasons are an opportunity to work remotely, desire for a promotion or more responsibility, and the need for a more flexible work schedule.
But while pay is the top concern mentioned by employees, retention challenges are more complex.
Strongest Predictors of Retention
Digging deeper into the data, the strongest predictors of retention are factors related to job satisfaction and well-being. Only 58% of higher ed employees are generally satisfied with their jobs. Of the 16 aspects of job satisfaction and well-being the survey measured, the three that have the most impact on retention are:
Recognition for Contributions
Being Valued by Others at Work
Having a Sense of Belonging
Only 59% of respondents say they receive regular verbal recognition for doing good work. The good news is that programs, training and policies that increase employee satisfaction in these areas can make a significant impact on retention without necessarily breaking the budget.
Three Things You Can Do
Employees are not necessarily planning to flee higher ed. Most job seekers will be looking within higher ed, and nearly half will be looking within their own institution, indicating that it’s not too late to implement retention strategies. Here are three things you can do to assess and address job satisfaction:
Explore CUPA-HR Resources. Here are several that focus on aspects of job satisfaction:
Plan Next Steps. Share the report or press release with leaders on your campus. Determine areas where your institution could strengthen career development and implement training to increase job satisfaction.
Hi everyone! It’s September and summer is officially over!
Summer is one of those sacred times of year for faculty to determine the next steps of their careers. From my dear colleague who is focused on his retirement to new faculty members who are focused on their new research agendas, everyone is focused on renewal. Our department faculty members usually travel to work at state parks, volunteer in the community, and participate in professional development activities.
This summer, we traveled on a study abroad experience to Scotland, Ireland, and England. It was an incredible journey with 17 students from our university. I had not traveled outside of the country in a year, and the students were filled with excitement from the end of the spring semester.
The trip to Europe was long and uneventful. We traveled with EF Tours, and it was definitely an adventure. Many of our rural students had never traveled outside of the country before this adventure, and they learned many new skills along their journey. I was proud of their progress.
During the study abroad experience, I also had an opportunity to walk a mile by myself in Ireland. Previously, I have ALWAYS traveled in groups – large groups and small groups. However, when most of the attendees wanted to participate in an activity together and I had to travel back to the hotel to pick up an item, I had the opportunity to be independent. I walked by myself across the city to the hotel. This prepared me for another big adventure that I had this summer. Summer 2023 was filled with solo adventure travel for this female faculty member.
We also had an opportunity to view the Book of Kells in Ireland. It was a great experience, and the library that houses the Book of Kells (an illuminated manuscript of the Gospels) was one of the most beautiful libraries I’ve ever visited.
This was my second time visiting the palace in England. There is always a crowd at Buckingham Palace, and the students enjoyed snapping pictures with the statues.
Who am I kidding? I enjoyed snapping pictures as well! It was crowded and it was definitely an adventure.
I’d only heard about it on YouTube from flight attendants, but Primark lived up to its reputation. The clothes were inexpensive, high quality, and gorgeous! I was very excited to buy professor clothes at Primark!
Overall, we had a great time. The students enjoyed themselves and I did as well. I learned a lot about European culture and I added two additional countries to my list. In fact, I added THREE new countries to my list (more about that later). Another day, another post.
Let me know if you have any questions about traveling with students. They are a trip – literally! I cannot remember the last time that I laughed so hard. Traveling with rural students enables them to be themselves while experiencing a whole new world.