On September 28, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) published new proposed guidance for employees and employers on navigating and preventing workplace harassment. The “Enforcement Guidance on Harassment in the Workplace” highlights and upholds existing federal employment discrimination laws and precedent, such as the Pregnant Workers Fairness Act (PWFA) and the Supreme Court’s Bostock v. Clayton County decision.
The Updated Guidance
The proposed enforcement guidance provides an overview and examples of situations that would constitute workplace harassment. Of particular interest are provisions that reflect new and existing protections from harassment under federal laws and precedent, as well as emerging issues affecting the workforce. The guidance discusses the following notable provisions for consideration:
Pregnancy, childbirth and related medical conditions. The guidance states that sex-based harassment includes harassment revolving around pregnancy, childbirth or related medical conditions, all of which are protected under federal laws like the Pregnancy Discrimination Act and the recently enacted PWFA.
Sexual orientation and gender identity. The guidance provides several examples of discrimination and harassment on the basis of sexual orientation and gender identity, which is considered sex-based discrimination under Title VII of the Civil Rights Act after the Supreme Court’s 2020 Bostock v. Clayton County decision.
Virtual and online harassment. The guidance states that conduct within a virtual work environment can contribute to a hostile environment, providing examples such as harassing comments made during remote calls or discriminatory imagery being visible in an employee’s workspace while in a work-related video call. Additionally, the guidance provides examples of conduct on social media outside of work-related contexts that may contribute to hostile work environments if such conduct impacts the workplace.
In the proposed guidance, the EEOC reminds stakeholders that the final guidance will “not have the force and effect of law” and that such guidance is “not meant to bind the public in any way.” Instead, the document “is intended only to provide clarity to the public regarding existing requirements under the law or Commission policies.”
Looking Ahead
The proposed guidance is open for public comments through November 1, 2023. Once the comment period closes, the EEOC will review the feedback it receives and make changes to address the comments prior to issuing final guidance. CUPA-HR will keep members apprised of any updates on this EEOC guidance, as well as new and existing laws falling under the EEOC’s jurisdiction.
On September 28, 2023, the Department of Education released a report titled “Strategies for Increasing Diversity and Opportunity in Higher Education.” The report was issued in response to the Supreme Court’s June 2023 ruling against affirmative action in college admissions, and it outlines ways institutions and states can adapt to prioritize improved accessibility to educational opportunities for underserved students.
The Report
In an introductory message for the report, Secretary of Education Miguel Cardona emphasized the enduring commitment to equal opportunity and student body diversity in higher education on behalf of his department and the president’s administration. While condemning the Supreme Court’s decision on affirmative action, Cardona pledged the Department of Education’s and the Biden administration’s support in promoting inclusivity and equity and stimulating long-term prosperity.
The Department of Education’s report centers around four areas that the administration believes institutions should consider when working to promote diversity and opportunity on campus: student recruitment, admissions, financial aid and student retention. The report focuses mostly on promoting diversity, equity and inclusion (DEI) initiatives in these areas to ensure underserved students have an equitable opportunity to be admitted into and succeed in postsecondary programs.
Relevant to higher education HR, the report discusses the need for improved training of admissions officers and other employees to ensure consistent, equitable evaluations of applicants.
Moving Forward
Prior to the release of the Supreme Court’s affirmative action decision, stakeholders also raised concerns regarding the impact such a decision could have on hiring and employment decisions as well as programs or initiatives focused on creating diverse and inclusive workplaces that align with institutional values. The decision to strike down race-based affirmative action in admissions practices could leave employers open to future legal challenges regarding their hiring decisions and other diversity programs.
CUPA-HR endorses efforts to promote inclusive communities on campuses across the nation. The government relations team continues to track developments impacting these efforts and will inform members of updates as they become available.
Culture was at the heart of the three keynote events at CUPA-HR’s 2023 national conference, which took place recently in New Orleans. Our keynote speakers asked thought-provoking questions that resonate with higher ed HR’s mission. Engaging with these questions can help you boost employee engagement, promote a culture of inclusivity and strengthen collaboration with your campus colleagues.
1. Are You Creating an Ecosystem of Opportunity?
Organizations with strong learning cultures tend to have significantly higher retention rates.
In her keynote presentation on employee retention, business strategist and author Erica Keswin pointed out that the days of climbing the same corporate ladder for 50 years are long gone. Organizations are flatter, which means you need to get creative to give people opportunities to move not only up, but sideways, helping them gain new skills and find new pathways for their careers. Instead of thinking “ladders,” Keswin said, think “lilypads.”
She also encouraged attendees to talk about employee learning opportunities early and often, beginning with their onboarding programs! Managers should be talking regularly with employees about what skills they want to learn and giving them the opportunity to learn with no strings attached.
The mission, values and priorities of higher education have learning at their core, and that culture of learning is a value proposition higher ed is uniquely positioned to provide as an employer. Make it work to your advantage by prioritizing learning and opportunity for all employees.
Another key takeaway from Keswin’s presentation was the importance of being a “human professional” and checking in with your team on a regular basis. She shared the story of a company that starts team meetings with a quick check-in called “Pick Your Nic.” Referring to a popular meme of Nicolas Cage images representing different feelings (happy, relaxed, excited, focused, stressed, meh, etc.), each person picks the Nic that represents how they’re feeling that day. The goal isn’t to address the responses in the meeting, but rather to give the team leader the opportunity to take a pulse and to give team members the opportunity to be seen and heard.
You’ll find more retention strategies in Keswin’s new book, The Retention Revolution: 7 Surprising (and Very Human!) Ways to Keep Employees Connected to Your Company. And be sure to check out the article “The Higher Ed Employee Retention Crisis — and What to Do About It” in the fall issue of Higher Ed HR Magazine.
2. Are You Treating Diversity as a Problem to Be Managed or a Value to Be Cherished?
When it comes to creating and sustaining a more inclusive culture, Princeton professor and religion scholar Dr. Eddie S. Glaude Jr. prompted attendees to consider a question: Do you view diversity as a problem to be managed or a value to be cherished?
Through a problem-solving lens, we might see diversity as a series of goals to be met and obstacles to be overcome. Through the lens of a cherished value, on the other hand, we are more likely to see every situation as an opportunity to expand and celebrate diversity of people and ideas. A problem-solving lens divides “us” from “others,” while a value-based lens sees diversity as constitutive of who we are, as a people, a country and an institution. Instead of envisioning inclusion as something undertaken in response to a mandate or in compliance with a law, what if diversity were seen as a key metric of an institution’s success?
The data support the positive impact of diversity on metrics like productivity and creativity in the workplace, and Glaude urged higher education to also view diversity as an integral part of its core identity and a reflection of its regional or national reach.
To see how your institution compares to others when it comes to composition of your workforce and pay equity for employees, see the results of CUPA-HR’s signature surveys.
3. Are You Ramping Up Retention Efforts in Your Most Vulnerable Departments?
Retention and recruitment were on everyone’s mind at CUPA-HR’s annual conference. The closing panel discussion brought together leaders in student affairs, campus facilities and IT and provided insights on how HR can partner with these campus constituencies to support a culture of belonging. Here are a few of their recommendations:
Provide training opportunities.
John O’Brien, president of EDUCAUSE, which represents IT professionals in higher ed, stressed the importance of career pathways to support employees’ desire to grow in their careers.
Noting that “supervisors will make or break us,” Lander Medlin, president and CEO of APPA, which serves the needs of facilities professionals, stressed the critical role supervisor training plays in retention and workplace culture, both in facilities, where the aging of the skilled craft workforce has posed unique recruitment and retention challenges, and in all other areas.
Ensure employees feel they belong and are valued.
No matter their role on campus, employees want their opinions to be heard and valued.
Kevin Kruger, president of NASPA, the association for student affairs administrators in higher education, noted that millennial and Generation Z employees especially want to feel cared about at work and to believe their opinions matter. Today, as all student affairs professionals find themselves on the front lines of the mental health crisis, they need supervisors who have the skills to meet them where they are and to create a culture of belonging.
Medlin seconded the importance of feeling heard when it comes to job satisfaction. She would ask supervisors this question: Are you a coach and mentor, or are you a boss?
Offer job flexibility.
Some campus jobs don’t easily lend themselves to remote work, but that doesn’t mean institutions can’t build in flexibility, which CUPA-HR found is a key retention factor.
For example, facilities employees might take advantage of a compressed workweek, with the option to work four 10-hour shifts.
Since student affairs professionals often work outside of a typical nine-to-five day, there’s room for remote work. In fact, students might prefer to meet with student affairs professionals remotely.
If year-round remote work isn’t a possibility, seasonal flexibility might be. When students are off campus during holiday and summer break, your staff might be able to work from home.
See employees as a strategic asset (and pay them accordingly).
The three areas represented by the panel — IT, facilities and student affairs — are among the most vulnerable to turnover and recruitment challenges on most campuses. How can HR lead the way in creating a culture that positions these employees as strategic assets? The panel offered these suggestions, based on their unique perspectives:
O’Brien encouraged satisfaction surveys. Find what’s working well and replicate it.
Kruger recommended streamlining job searches, posting salary ranges, and focusing on internal pay equity and livable wages.
Medlin asked conference attendees to “help us help you.” How we treat people matters, and HR leads the way in building that culture of belonging.
This blog post was contributed by Elena Lynett, JD, senior vice president at Segal, a CUPA-HR Mary Ann Wersch Premier Partner.
Institutions generally provide comprehensive mental health and substance use disorder (MH/SUD) benefits as part of their commitment to creating a safe and nurturing campus. However, the Mental Health Parity and Addiction Equity Act (MHPAEA) requires that institutions providing MH/SUD benefits ensure parity in coverage between the MH/SUD and medical/surgical benefits. The Department of Health and Human Services, the Department of Labor, and the Department of the Treasury recently proposed major changes to the MHPAEA regulations for group health plan sponsors and insurers.
The proposed changes address nonquantitative treatment limitations (NQTLs) — a term which references a wide range of medical management strategies and network administrative practices that may impact the scope or duration of MH/SUD benefits. Examples of NQTLs include prior or ongoing authorization requirements, formulary design for prescription drugs, and exclusions of specific treatments for certain conditions.
If government agencies issue a final rule similar to the proposal, plans will face additional data collection, evaluation, compliance and administrative requirements. The most significant proposed changes are:
The “predominant/substantially all” testing that currently applies to financial requirements and quantitative treatment limitations under MHPAEA would apply as a threshold test for any NQTL;
New data collection requirements, including denial rates and utilization information;
A new “meaningful benefits” standard for MH/SUD benefits;
Detailed requirements regarding the documented comparative analysis that plans must have for each applicable NQTL;
Introduction of a category of NQTLs related to network composition and new rules aimed at creating parity in medical/surgical and MH/SUD networks;
Prohibition on separate NQTLs for MH/SUD;
For plans subject to the Employee Retirement Income Security Act of 1974 (ERISA), a requirement that a named fiduciary would have to review and certify documented comparative analysis as complying with MHPAEA; and
For non-federal governmental plans, sunset of the ability to opt out of compliance with the MHPAEA rules.
The deadline to comment on the proposed rules is October 17, 2023. If interested, your institution may file comments here. CUPA-HR will be filing comments with other associations representing higher education and plan sponsors. As proposed, plans could be expected to comply as early as the first day of any plan year beginning on or after January 1, 2025.
Each month, CUPA-HR General Counsel Ira Shepard provides an overview of several labor and employment law cases and regulatory actions with implications for the higher ed workplace. Here’s the latest from Ira.
Governor Newsom Vetoes Bill That Would Ban Caste Discrimination
California Governor Gavin Newsom vetoed what would have been the first specific state ban on employment discrimination on the basis of caste. Seattle recently became the first U.S. municipality to ban caste discrimination. The California bill would have added caste to the definition of ancestry, which is already included in state law. The governor stated in his veto declaration that existing law already covers this type of discrimination. Commentators weighed in on both sides of this conclusion, some stating there is no specific case law on this question.
Caste is defined as a system of rigid social stratification based on a person’s birth and ancestry and primarily affects people of South Asian descent. Allegations of caste discrimination have recently arisen and gained notoriety in California’s tech industry. This proposal has been subject to much controversy in California, including a hunger strike by those supporting the proposal.
University Trustees May Be Sued for Professor’s Alleged First Amendment Claims
The 5th U.S. Circuit Court of Appeals (covering Louisiana, Mississippi and Texas) recently rejected a university board of trustees’ motion to dismiss First Amendment lawsuit allegations against them, holding that sovereign immunity did not apply to the board members (Jackson v. Wright (5th Cir., No. 22-40059, 9/15/23)).
The case involves eight members of the University of North Texas board of regents who were sued by a music professor. The professor lost his position as editor in chief of a university music journal because of alleged “racial statements” contained in an article he published in advance of a 2020 symposium sponsored by the journal.
In denying the sovereign immunity defense, the court concluded that the trustees had direct authority over university officials who denied the professor his First Amendment rights. The court noted that the trustees had refused to act on a letter the professor had submitted to the trustees raising the issue.
SEIU Local 560 Files NLRB Petition to Represent the Dartmouth College Men’s Basketball Team
Taking up the student-athlete employee status issue encouraged by the National Labor Relations Board’s current general counsel, Service Employees International Union Local 560 has petitioned the NLRB to represent the Dartmouth College men’s basketball team in collective bargaining negotiations with the institution. This comes nearly a decade after the NLRB declined to assert jurisdiction over student-athletes in the Northwestern case. If the SEIU is successful, it would be the first case involving unionization of college athletes.
The filing follows on the heels of the Supreme Court’s 2021 decision in NCAA v. Alston, which struck down NCAA limits on education-related compensation for student-athletes and helped open the door to compensation for name, image and likeness. While the Supreme Court did not address the labor organizing question under the National Labor Relations Act for student-athletes, the decision arguably took a first step toward recognizing the group as employees.
This case brings an added mechanism for the NLRB to decide whether student-athletes are protected under the NLRA and able to organize into labor unions. The NLRB’s general counsel already raised the issue in May of this year in the case brought against the University of Southern California, the Pac-12 Conference, and the NCAA, in which they are alleged to have violated the NLRA in failing to recognize student-athletes as employees.
On the first day of the NLRB hearing, Dartmouth took the position that the athletes involved are students who do not meet any of the common law attributes of employees and, therefore, are not union-eligible employees under the NLRA.
Undergraduate Student-Employee Union Organizing Is Expanding, Leading the Way to More Organization Drives
Bloomberg reports that there are now over a dozen colleges in the U.S. with undergraduate student-employee unions. This is up from just two before 2022. Pay, sick leave and insecurity due to the COVID-19 pandemic have been reported as reasons prompting this significant increase in undergraduate employee organizing, which appears to be motivating expanded organizing at the graduate assistant and professor levels.
A union-organizing campaign appears to be proceeding across campus lines at the California State University System, where a union is organizing as many as 20,000 undergraduate workers at 23 campuses, Bloomberg reports. Separately, 4,000 University of Oregon student employees are set to vote next month on union representation.
Fired Football Coach Sues University, Seeks $130 Million in Damages
A former Northwestern University football coach has sued the university and its president for wrongful discharge and defamation and is seeking a minimum of $130 million in damages. The lawsuit alleges that the coach was fired for “no reason whatsoever.”
The coach was placed on a two-week unpaid suspension after a six-month investigation revealed incidents of hazing within the football program. The report was allegedly inconclusive as to whether the coaches were aware of the hazing. Details of the actual termination will be the subject of the trial. We will follow developments as they unfold.
Earlier this year, I had the pleasure of consulting for the Education Design Lab (EDL) on their search for a Learning Management System (LMS) that would accommodate Competency-Based Education (CBE). While many platforms, especially in the corporate Learning and Development space, talked about skill tracking and pathways in their marketing, the EDL team found a bewildering array of options that looked good in theory but failed in practice. My job was to help them separate the signal from the noise.
It turns out that only a few defining architectural features of an LMS will determine its fitness for CBE. These features are significant but not prohibitive development efforts. In fact, many of the firms we talked to, once they understood the true core requirements, said they could modify their platforms to accommodate CBE but did not currently see enough demand among customers to justify investing the resources required.
This white paper, which outlines the architectural principles I discovered during the engagement, is based on my consulting work with EDL and is released with their blessing. In addition to the white paper itself, I provide some suggestions for how to move the vendors and a few comments about other missing pieces in the CBE ecosystem that may be underappreciated.
The core principles
The four basic principles for an LMS or learning platform to support CBE are simple:
Separate skill tree: Most systems have learning objectives that are attached to individual courses. The course is about the learning objectives. One of the goals of CBE is to create more granular tracking of progress that may run across courses. A skill learned in one course may count toward another. So a CBE platform must include a skill tree as a first-class citizen of the architecture, separate from the course.
Mastery learning: This heading includes a range of features, from standardized and simplified grading (e.g., competent/not-yet) to gates in which learners may only pass to the next competency after mastering the one they’re on. Many learning platforms already have these features. But they are not tied to a separate skill tree in a coherent way that supports mastery learning. This is not a huge development effort if the skill tree exists. And in a true CBE platform, it could mean being able to get rid of the grade book, which is a hideous, painful, never-ending time sink for LMS product developers. (A minimal data-model sketch of these first two principles appears after this list.)
Integration: In a traditional learning platform, the main integration points are with the registrar or talent management system (tracking registrations and final scores) and external tools that plug into the environment. A CBE platform must import skills, export evidence of achievement, and sometimes work as a delivery platform that gets wrapped into somebody else’s LMS (e.g., a university course built and run on their learning platform but appearing in a window of a corporate client’s learning platform). Most of these are not hard if the first two requirements are developed, but they can require significant amounts of developer time.
Evidence of achievement: CBE standards increasingly lean toward rich packages that provide not only certification of achievement but also evidence of it. That means the learner’s work must be exportable. This can get complicated, particularly if third-party tools are integrated to provide authentic assessments.
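To make the skill tree and mastery-learning principles concrete, here is a minimal sketch of what that architecture might look like. It is illustrative only and assumes nothing about any particular vendor’s platform; all class and field names are my own placeholders.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Mastery(Enum):
    NOT_YET = "not yet"
    COMPETENT = "competent"


@dataclass
class Skill:
    """A node in the skill tree. Skills exist independently of any course."""
    skill_id: str
    name: str
    parent_id: Optional[str] = None  # allows a hierarchy of competencies


@dataclass
class SkillAlignment:
    """Links an activity in any course to a skill, so progress can cross courses."""
    course_id: str
    activity_id: str
    skill_id: str


@dataclass
class LearnerSkillRecord:
    """Mastery is tracked per learner per skill, not in a per-course grade book."""
    learner_id: str
    skill_id: str
    status: Mastery = Mastery.NOT_YET
    evidence_urls: list[str] = field(default_factory=list)  # exportable evidence of achievement


def can_advance(records: dict[str, LearnerSkillRecord], prerequisite_ids: list[str]) -> bool:
    """Mastery-learning gate: a learner advances only after mastering the prerequisites."""
    return all(
        sid in records and records[sid].status is Mastery.COMPETENT
        for sid in prerequisite_ids
    )
```

The point of the sketch is the separation of concerns: skills and mastery records live outside the course, while alignments and evidence links are what the integration and evidence-of-achievement requirements would import and export.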
The second missing piece is tricky even to characterize, but it has to do with the content production pipeline. Curricular materials publishers, by and large, are not building their products in CBE-friendly ways. Between the weak third-party content pipeline and the chronic shortage of learning design talent relative to the need, CBE-focused institutions often either tie themselves in knots trying to solve this problem or throw up their hands, focusing on authentic certification and mentoring. But there’s a limit to how much you can improve retention and completion rates if you don’t have strong learning experiences, including formative assessments that enable you to track students’ progress toward competency, address the sticking points in learning particular skills, and so on. This is a tough bind since institutions can’t ignore the quality of learning materials, can’t rely on third parties, and can’t keep up with demand themselves.
Adding to this problem is a tendency to follow the CBE yellow brick road to what may look like its logical conclusion of atomizing everything. I’m talking about reusable learning objects. I first started experimenting with them at scale in 1998. By 2002, I had given up, writing instead about instructional design techniques to make recyclable learning objects. And that was within corporate training—as it is, not as we imagine it—which tends to focus on a handful of relatively low-level skills for limited and well-defined populations. The lack of a healthy Learning Object Repository (LOR) market should tell us something about how well reusable learning object strategy holds up under stress.
And yet, CBE enthusiasts continue to find it attractive. In theory, it fits well with the view of smaller learning chunks that show up in multiple contexts. In practice, the LOR usually does not solve the right problems in the right way. Version control, discoverability, learning chunk size, and reusability are all real problems that have to be addressed. But because real-world learning design needs often can’t be met with content legos, starting from a LOR and adding complexity to fix its shortcomings usually brings a lot of pain without commensurate gain.
There is a path through this architectural mess, just like there is a path through the learning platform mess. But it’s a complicated one that I won’t lay out in detail here.
While the ongoing turnover crisis impacts all of higher ed, supervisors are among the hardest hit. In our recent study, The CUPA-HR 2023 Higher Education Employee Retention Survey, supervisors say they’re grappling with overwork and added responsibilities (especially when their staff members take other jobs), while struggling to maintain morale.
Supervisor retention is especially critical in a time of turnover, as these are the employees we rely on most to preserve institutional knowledge and provide continuity amid transition. But our research shows that many supervisors are not getting the kinds of institutional support they need. By empowering managers to make decisions on behalf of their staff, institutions make it less likely that their supervisors will seek employment opportunities elsewhere.
The Supervisor’s Perspective
Taking a closer look at the data, it’s clear that supervisors are overworked and under-resourced. Seven in ten work more hours than what is expected of full-time employees at their institution. Nearly double the percentage of supervisors versus non-supervisors agree that it is normal to work weekends and that they cannot complete their job duties working only their institution’s normal full-time hours.
Supervisors are also facing challenges unique to their leadership roles. Filling vacant positions and maintaining the morale of their staff are their chief worries.
Strategies for Supervisor Retention
Given the pressures supervisors are under, what can institutions do to ensure that their top talent won’t seek other employment? While common retention incentives like increased pay and recognition are crucial, supervisors need improved institutional support.
Our data show that supervisors are in need of the following:
When supervisors are empowered in these ways, they are less likely to be among the 56 percent of employees who say they’re at least somewhat likely to search for a new job in the coming year.
As readers of this series know, I’ve developed a six-session design/build workshop series for learning design teams to create an AI Learning Design Assistant (ALDA). In my last post in this series, I provided an elaborate ChatGPT prompt that can be used as a rapid prototype that everyone can try out and experiment with. In this post, I’d like to focus on how to address the challenges of AI literacy effectively and equitably.
We’re in a tricky moment with generative AI. In some ways, it’s as if writing has just been invented, but printing presses are already everywhere. The problem of mass distribution has already been solved. But nobody’s invented the novel yet. Or the user manual. Or the newspaper. Or the financial ledger. We don’t know what this thing is good for yet, either as producers or as consumers. We don’t know how, for example, the invention of the newspaper will affect the ways in which we understand and navigate the world.
And, as with all technologies, there will be haves and have-nots. We tend to talk about economic and digital divides in terms of our students. But the divide among educational institutions (and workplaces) can be equally stark and has a cascading effect. We can’t teach literacy unless we are literate.
This post examines the literacy challenge in light of a study published by Harvard Business School and reported on by Boston Consulting Group (BCG). BCG’s report and the original paper are both worth reading because they emphasize different findings. But the crux is the same:
Using AI does enhance the productivity of knowledge workers.
Weaker knowledge workers improve more than stronger ones.
AI is helpful for some kinds of tasks but can actually harm productivity for others.
Training workers in AI can hurt rather than help their performance if they learn the wrong lessons from it.
The ALDA workshop series is intended to be a kind of AI literacy boot camp. Yes, it aspires to deliver an application that solves a serious institutional process problem by the end. But the real, important, lasting goal is literacy in techniques that can improve worker performance while avoiding the pitfalls identified in the study.
In other words, the ALDA BootCamp is a case study and an experiment in literacy. And, unfortunately, it also has implications for the digital divide due to the way in which it needs to be funded. While I believe it will show ways to scale AI literacy effectively, it does so at the expense of increasing the digital divide. I will address that concern as well.
The study
The headline of the study is that AI usage increased the performance of consultants—especially less effective consultants—on “creative tasks” while decreasing their performance on “business tasks.” The study, in contrast, refers to “frontier” tasks, meaning tasks that generative AI currently does well, and “outside the frontier” tasks, meaning the opposite. While the study provides the examples used, it never clearly defines the characteristics of what makes a task “outside the frontier.” (More on that in a bit.) At any rate, the studies show gains for all knowledge workers on a variety of tasks, with particularly impressive gains from knowledge workers in the lower half of the range of work performance:
As I said, we’ll get to the red part in a bit. Let’s focus on the performance gains and, in particular, the ability for ChatGPT to equalize performance gains among workers:
Looking at these graphs reminds me of the benefits we’ve seen from adaptive learning in the domains where it works. Adaptive learning can help many students, but it is particularly useful in helping students who get stuck. Once they are helped, they tend to catch up to their peers in performance. This isn’t quite the same since the support is ongoing. It’s more akin to spreadsheet formulas for people who are good at analyzing patterns in numbers (like a pro forma, for example) but aren’t great at writing those formulas.
The bad news
For some tasks, AI made the workers worse. The paper refers to these areas as outside “the jagged frontier.” Why “jagged”? While the authors aren’t explicit, I’d say that (1) the boundaries of AI capabilities are not obviously or evenly bounded, (2) the boundary moves as the technology evolves, and (3) it can be hard to tell even in the moment which side of the boundary you’re on. On this last point, the BCG report highlights that some training made workers perform worse. They speculate it might be because of overconfidence.
What are those tasks in the red zone of the study? The Harvard paper gives us a clue that has implications for how we approach teaching AI literacy. They write:
In our study, since AI proved surprisingly capable, it was difficult to design a task in this experiment outside the AI’s frontier where humans with high human capital doing their job would consistently outperform AI. However, navigating AI’s jagged capabilities frontier remains challenging. Even for experienced professionals engaged in tasks akin to some of their daily responsibilities, this demarcation is not always evident. As the boundaries of AI capabilities continue to expand, often exponentially, it becomes incumbent upon human professionals to recalibrate their understanding of the frontier and for organizations to prepare for a new world of work combining humans and AI.
The experimental conditions that the authors created suggest to me that challenges can arise from critical context or experience that is not obviously missing. Put another way, the AI may perform poorly on synthetic thinking tasks that are partly based on experience rather than just knowledge. But that’s both a guess and somewhat beside the point. The real issue is that AI makes knowledge workers better except when it makes them worse, and it’s hard to know what it will do in a given situation.
The BCG report includes a critical detail that I believe is likely related to the problem of the invisible jagged frontier:
The strong connection between performance and the context in which generative AI is used raises an important question about training: Can the risk of value destruction be mitigated by helping people understand how well-suited the technology is for a given task? It would be rational to assume that if participants knew the limitations of GPT-4, they would know not to use it, or would use it differently, in those situations.
Our findings suggest that it may not be that simple. The negative effects of GPT-4 on the business problem-solving task did not disappear when subjects were given an overview of how to prompt GPT-4 and of the technology’s limitations….
Even more puzzling, they did considerably worse on average than those who were not offered this simple training before using GPT-4 for the same task. (See Exhibit 3.) This result does not imply that all training is ineffective. But it has led us to consider whether this effect was the result of participants’ overconfidence in their own abilities to use GPT-4—precisely because they’d been trained.
BCG speculates this may be due to overconfidence, which is a reasonable guess. If even the experts don’t know when the AI will perform poorly, then the average knowledge worker should be worse than the experts at predicting. If the training didn’t improve their intuitions about when to be careful, then it could easily exacerbate a sense of overconfidence.
Let’s be clear about what this means: The AI prompt engineering workshops you’re conducting may actually be causing your people to perform worse rather than better. Sometimes. But you’re not sure when or how often.
While I don’t have a confident answer to this problem, the ALDA project will pilot a relatively novel approach to it.
Two-sided prompting and rapid prototype projects
The ALDA project employs two approaches that I believe may help with the frontier invisibility problem and its effects. One is in the process, while the other is in the product.
The process is simple: Pick a problem that’s a bit more challenging than a solo prompt engineer could take on or that you want to standardize across your organization. Deliberately pick a problem that’s on the jagged edge where you’re not sure where the problems will be. Run through a series of rapid prototype cycles using cheap and easy-to-implement methods like prompt engineering supported by Retrieval Augmented Generation. Have groups of practitioners test the application on a real-world problem with each iteration. Develop a lightweight assessment tool like a rubric. Your goal isn’t to build a perfect app or conduct a journal-worthy study. Instead, you want to build a minimum viable product while sharpening and updating the instincts of the participants regarding where the jagged line is at the moment. This practice could become habitual and pervasive in moderately resource-rich organizations.
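To give a flavor of what that kind of cheap prototype can look like in code, here is a minimal sketch of prompt engineering supported by Retrieval Augmented Generation. It assumes the OpenAI Python SDK (the v1-style client) and an embeddings model; the model names, chunking, and prompt wording are placeholders to swap for your own, not a prescription.

```python
# Minimal retrieval-augmented prompting sketch. Assumptions: the OpenAI Python SDK
# (v1 client), OPENAI_API_KEY set in the environment, and source material already
# split into text chunks. Everything here is illustrative, not production code.
from openai import OpenAI

client = OpenAI()


def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0


def retrieve(request: str, chunks: list[str], k: int = 3) -> list[str]:
    """Pick the k source chunks most relevant to the designer's current request."""
    chunk_vecs = embed(chunks)
    q_vec = embed([request])[0]
    ranked = sorted(zip(chunks, chunk_vecs), key=lambda cv: cosine(q_vec, cv[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]


def draft_with_context(request: str, chunks: list[str]) -> str:
    """Ground the model's draft in retrieved source material rather than its memory."""
    context = "\n\n".join(retrieve(request, chunks))
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are an apprentice instructional designer. "
                                          "Use only the provided source excerpts."},
            {"role": "user", "content": f"Source excerpts:\n{context}\n\nTask:\n{request}"},
        ],
    )
    return resp.choices[0].message.content
```

A group of practitioners can test something at this level of effort against a real design task in an afternoon, score the output with a lightweight rubric, and adjust the prompt or the retrieval step for the next cycle.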
On the product side, the ALDA prototype I released in my last post demonstrates what I call “two-sided prompting.” By enabling the generative AI to take the lead in the conversation at times, asking questions rather than giving answers, I effectively created a fluid UX in which the application guides the knowledge worker toward the areas where she can make her most valuable contributions without unduly limiting the creative flow. The user can always start a digression or answer a question with a question. A conversation between experts with complementary skills often takes the form of a series of turn-taking prompts between the two, each one offering analysis or knowledge and asking for a reciprocal contribution. This pattern should invoke all the lifelong skills we develop when having conversations with human experts who can surprise us with their knowledge, their limitations, their self-awareness, and their lack thereof.
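Here is the simplest possible illustration of that turn-taking pattern, separate from the ALDA script itself. It assumes the same OpenAI client as the sketch above; the system prompt wording is mine, not ALDA’s.

```python
# A bare-bones "two-sided prompting" loop: the model interviews the expert, one
# question at a time, instead of waiting passively for instructions.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are an apprentice instructional designer. Lead the conversation: ask the "
    "expert one focused question at a time, wait for the answer, and only draft "
    "content once you have what you need. If the expert answers your question with "
    "a question, answer it, then resume your interview."
)

messages = [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": "I'm ready to design a lesson."}]

while True:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    assistant_turn = reply.choices[0].message.content
    print(f"\nALDA: {assistant_turn}")
    messages.append({"role": "assistant", "content": assistant_turn})

    expert_turn = input("\nExpert (or 'quit'): ")
    if expert_turn.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": expert_turn})
```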
I’d like to see the BCG study compared to the literature on how often we listen to expert colleagues or consultants—our doctors, for example—how effective we are at knowing when to trust our own judgment, and how people who are good at it learn their skills. At the very least, we’d have a mental model that is old, widely used, and offers a more skeptical counterbalance to our idea of the all-knowing machine. (I’m conducting an informal literature review on this topic and may write something about it if I find anything provocative.)
At any rate, the process and UX features of AI “BootCamps”—or, more accurately, AI hackathon-as-a-practice—are not ones I’ve seen in other generative AI training course designs I’ve encountered so far.
The equity problem
I mentioned that relatively resource-rich organizations could run these exercises regularly. They need to be able to clear time for the knowledge workers, provide light developer support, and have the expertise necessary to design these workshops.
Many organizations struggle with the first requirement and lack the second one. Very few have the third one yet because designing such workshops requires a combination of skills that is not yet common.
The ALDA project is meant to be a model. When I’ve conducted public good projects like these in the past, I’ve raised vendor sponsorship and made participation free for the organizations. But this is an odd economic time. The sponsors who have paid $25,000 or more into such projects in the past have usually been either publicly traded or PE-owned. Most such companies in the EdTech sector have had to tighten their belts. So I’ve been forced to fund the ALDA project as a workshop paid for by the participants at a price that is out of reach of many community colleges and other access-oriented institutions, where this literacy training could be particularly impactful. I’ve been approached by a number of smart, talented, dedicated learning designers at such institutions that have real needs and real skills to contribute but no money.
So I’m calling out to EdTech vendors and other funders: Sponsor an organization. A community college. A non-profit. A local business. We need their perspective in the ALDA project if we’re going to learn how to tackle the thorny AI literacy problem. If you want, pick a customer you already work with. That’s fine. You can ride along with them and help.
Contact me at [email protected] if you want to contribute and participate.
If we can reduce the time it takes to design a course by about 20%, organizations that need to build enough courses to strain their budgets and resources will see “huge” productivity and quality gains.
We should be able to use generative AI to achieve that goal fairly easily without taking ethical risks and without needing to spend massive amounts of time or money.
Beyond the immediate value of ALDA itself, learning the AI techniques we will use—which are more sophisticated than learning to write better ChatGPT prompts but far less involved than trying to build our own ChatGPT—will help the participants learn to accomplish other goals with AI.
This may sound great in theory, but like most tech blah blah blah, it’s very abstract.
Today I’m going to share with you a rapid prototype of ALDA. I’ll show you a demo video of it in action and I’ll give you the “source code” so you can run it—and modify it—yourself. (You’ll see why I’ve put “source code” in scare quotes as we get further in.) You will have a concrete demo of the very basic ALDA idea. You can test it yourself with some colleagues. See what works well and what falls apart. And, importantly, see how it works and, if you like, try to make it better. While the ALDA project is intended to produce practically useful software, its greatest value is in what the participants learn (and the partnerships they forge between workshop teams).
The Miracle
The ALDA prototype is a simple AI assistant for writing a first draft of a single lesson. In a way, it is a computer program that runs on top of ChatGPT. But only in a way. You can build it entirely in the prompt window using a few tricks that I would hardly call programming. You need a ChatGPT Plus subscription. But that’s it.
It didn’t occur to me to build an ALDA proof-of-concept myself until Thursday. I thought I would need to raise the money first, then contract the developers, and then build the software. As a solo consultant, I don’t have the cash in my back pocket to pay the engineers I’m going to work with up-front.
Last week, one of the institutions that are interested in participating asked me if I could show a demo as part of a conversation about their potential participation. My first thought was, “I’ll show them some examples of working software that other people have built.” But that didn’t feel right. I thought about it some more. I asked ChatGPT some questions. We talked it through. Two days later, I had a working demo. ChatGPT and I wrote it together. Now that I’ve learned a few things, it would take me less than half a day to make something similar from scratch. And editing it is easy.
Here’s a video of the ALDA rapid prototype in action:
ALDA Rapid Prototype Demo and Tips
This is the starting point for the ALDA project. Don’t think of it as what ALDA is going to be. Think of it as a way to explore what you would want ALDA to be.
The purpose of the ALDA rapid prototype
Before I give you the “source code” and let you play with it yourselves, let’s review the point of this exercise and some warnings about the road ahead.
Let’s review the purpose of the ALDA project in general and this release in particular. The project is designed to discover the minimum amount of functionality—and developer time, and money—required to build an app on top of a platform like ChatGPT to make a big difference in the instructional design process. Faster, better, cheaper. Enough that people and organizations begin building more courses, building them differently, keeping them more up-to-date and higher quality, and so on. We’re trying to build as little application as is necessary.
The purpose of the prototype is to design and test as much of our application as we can before we bring in expensive programmers and build the functionality in ways that will be more robust but harder to change.
While you will be able to generate something useful, you will also see the problems and limitations. I kept writing more and more elaborate scripts until ChatGPT began to forget important details and make more mistakes. Then I peeled back enough complexity to get it back to the best performance I can squeeze out of it. The script will help us understand the gap between ChatGPT’s native capabilities and the ones we need in order to get the value we want ALDA to provide.
Please play with the script. Be adventurous. The more we can learn about that before we start the real development work, the better off we’ll be.
The next steps
Back in September—when the cutting edge model was still GPT-3—I wrote a piece called “AI/ML in EdTech: The Miracle, the Grind, and the Wall.” While I underestimated the pace of evolution somewhat, the fundamental principle at the core of the post still holds. From GPT-3 to ChatGPT to GPT-4, the progression has been the same. When you set out to do something with them, the first stage is The Miracle.
The ALDA prototype is the kind of thing you can create at the Miracle stage. It’s fun. It makes a great first impression. And it’s easy to play with, up to a point. The more time you spend with it, the more you see the problems. That’s good. Once we have a clearer sense of its limitations and what we would like it to do better or differently, we can start doing real programming.
That’s when The Grind begins.
The early gains we can make with developer help shouldn’t be too hard. I’ll describe some realistic goals and how we can achieve them later in this piece. But The Grind is seductive. Once you start trying to build your list of additions, you quickly discover that the hill you’re climbing gets a lot steeper. As you go further, you need increasingly sophisticated development skills. If you charge far enough along, weird problems that are hard to diagnose and fix start popping up.
Eventually, you can come to a dead end. A problem you can’t surmount. Sometimes you see it coming. Sometimes you don’t. If you hit it before you achieve your goals for the project, you’re dead.
This is The Wall. You don’t want to hit The Wall.
The ALDA project is designed to show what we can achieve by staying within the easier half of the grind. We’re prepared to climb the hill after the Miracle, but we’re not going too far up. We’re going to optimize our cost/benefit ratio.
That process starts with rapid prototyping.
How to rapidly prototype and test the ALDA idea
If you want to play with the ALDA script, I suggest you watch the video first. It will give you some valuable pointers.
To run the ALDA prototype, do the following:
Open up your ChatGPT Plus window. Make sure it’s set to GPT-4.
Add any plugin that can read a PDF on the web. I happened to use “Ai PDF,” and it worked for me. But there are probably a few that would work fine.
Find a PDF on the web that you want to use as part of the lesson. It could be an article that you want to be the subject of the lesson.
Paste the “source code” that I’m going to give you below and hit “Enter.” (You may lose the text formatting when you paste the code in. Don’t worry about it. It doesn’t matter.)
Once you do this, you will have the ALDA prototype running in ChatGPT. You can begin to build the lesson.
Here’s the “source code:”
You are a thoughtful, curious apprentice instructional designer. Your job is to work with an expert to create the first draft of curricular materials for an online lesson. The steps in this prompt enable you to gather the information you need from the expert to produce a first draft.
Step 1: Introduction
“Hello! My name is ALDA, and I’m here to assist you in generating curricular materials for a lesson. I will do my best work for you if you think of me as an apprentice.
“You can ask me questions that help me think more clearly about how the information you are giving me should influence the way we design the lesson together. Questions help me think more clearly.
“You can also ask me to make changes if you don’t like what I produce.
“Don’t forget that, in addition to being an apprentice, I am also a chatbot. I can be confidently wrong about facts. I also may have trouble remembering all the details if our project gets long or complex enough.
“But I can help save you some time generating a first draft of your lesson as long as you understand my limitations.”
“Let me know when you’re ready to get started.”
Step 2: Outline of the Process
“Here are the steps in the design process we’ll go through:”
[List steps]
“When you’re ready, tell me to continue and we’ll get started.”
Step 3: Context and Lesson Information
“To start, could you provide any information you think would be helpful to know about our project? For example, what is the lesson about? Who are our learners and what should I know about them? What are your learning goals? What are theirs? Is this lesson part of a larger course or other learning experience? If so, what should I know about it? You can give me a little or a lot of information.”
[Generate a summary of the information provided and implications for the design of the lesson.]
[Generate implications for the design of the lesson.]
“Here’s the summary of the Context: [Summary].
Given this information, here are some implications for the learning design [Implications]. Would you like to add to or correct anything here? Or ask me follow-up questions to help me think more specifically about how this information should affect the design of our lesson?”
Step 4: Article Selection
“Thank you for providing details about the Context and Lesson Information. Now, please provide the URL of the article you’d like to base the lesson on.”
[Provide the citation for the article and a one-sentence summary]
“Citation: [Citation]. One-sentence summary: [One-sentence summary. Do not provide a detailed description of the article.] Is this the correct article?”
Step 5: Article Summarization with Relevance
“I’ll now summarize the article, keeping in mind the information about the lesson that we’ve discussed so far.
“Given the audience’s [general characteristics from Context], this article on [topic] is particularly relevant because [one- or two-sentence explanation].”
[Generate a simple, non-academic language summary of the article tailored to the Context and Lesson Information]
“How would you like us to use this article to help create our lesson draft?”
Step 6: Identifying Misconceptions or Sticking Points
“Based on what I know so far, here are potential misconceptions or sticking points the learners may have for the lesson: [List of misconceptions/sticking points]. Do you have any feedback or additional insights about these misconceptions or sticking points?”
Step 7: Learning Objectives Suggestion
“Considering the article summary and your goals for the learners, I suggest the following learning objectives:”
[List suggested learning objectives]
“Do you have any feedback or questions about these objectives? If you’re satisfied, please tell me to ‘Continue to the next step.’”
Step 8: Assessment Questions Creation
“Now, let’s create assessment questions for each learning objective. I’ll ensure some questions test for possible misconceptions or sticking points. For incorrect answers, I’ll provide feedback that addresses the likely misunderstanding without giving away the correct answer.”
[For each learning objective, generate an assessment question, answers, distractors, explanations for distractor choices, and feedback for students. When possible, generate incorrect answer choices that test the student for misunderstandings or sticking points identified in Step 6. Provide feedback for each answer. For incorrect answers, provide feedback that helps the student rethink the question without giving away the correct answer. For incorrect answers that test specific misconceptions or sticking points, provide feedback that helps the student identify the misconception or sticking point without giving away the correct answer.]
“Here are the assessment questions, answers, and feedback for [Learning Objective]: [Questions and Feedback]. Do you have any feedback or questions about these assessment items? If you’re satisfied, please tell me to ‘Continue to the next step.’”
Step 9: Learning Content Generation
“Now, I’ll generate the learning content based on the article summary and the lesson outline. This content will be presented as if it were in a textbook, tailored to your audience and learning goals.”
[Generate textbook-style learning content adjusted to account for the information provided by the user. Remember to write it for the target audience of the lesson.]
“Here’s the generated learning content: [Content]. Do you have any feedback or questions about this content? If you’re satisfied, please tell me to ‘Continue to the next step.’”
Step 10: Viewing and Organizing the Complete Draft
“Finally, let’s organize everything into one complete lesson. The lesson will be presented in sections, with the assessment questions for each section included at the end of that section.”
[Organize and present the complete lesson. INCLUDE LEARNING OBJECTIVES. INSERT EACH ASSESSMENT QUESTION, INCLUDING ANSWER CHOICES, FEEDBACK, AND ANY OTHER INFORMATION, IMMEDIATELY AFTER RELEVANT CONTENT.]
“Here’s the complete lesson: [Complete Lesson]. Do you have any feedback or questions about the final lesson? If you’re satisfied, please confirm, and we’ll conclude the lesson creation process.”
The PDF I used in the demo can be found here. But feel free to try your own article.
Note there are only four syntactic elements in the script: quotation marks, square brackets, bullet points, and step headings. (I read that all caps help ChatGPT pay more attention, but I haven’t seen evidence that it’s true.) If you can figure out how those elements work in the script, then you can prototype your own workflow.
I’m giving this version away. This is partly for all you excellent, hard-working learning designers who can’t get your employer to pay $25,000 for a workshop. Take the prototype. Try it. Let me know how it goes by writing in the comments thread of the post. Let me know if it’s useful to you in its current form. If so, how much and how does it help? If not, what’s the minimum feature list you’d need in order for ALDA to make a practical difference in your work? Let’s learn together. If ALDA is successful, I’ll eventually find a way to make it affordable to as many people as possible. Help me make it successful by giving me feedback.
I’ll tell you what’s at the top of my own personal goal list for improving it.
Closing the gap
Since I’m focused on meeting that “useful enough” threshold, I’ll skip the thousand cool features I can think of and focus on the capabilities I suspect are most likely to take us over that threshold.
Technologically, the first thing ALDA needs is robust long-term memory. It loses focus when prompts or conversations get too long. It needs to be able to accurately use and properly research articles and other source materials. It needs to be able to “look back” on a previous lesson as it writes the next one. This is often straightforward to do with a good developer and will get easier over the next year as the technology matures.
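While we wait for that kind of robust memory, one common stopgap, offered here purely as a hedged sketch and not as ALDA’s actual plan, is to compress earlier turns into a running summary so the live prompt stays short:

```python
# Rolling-summary memory sketch. Assumptions: the OpenAI Python SDK (v1 client) and a
# messages list whose first entry is the system prompt; the threshold and wording are
# placeholders, not a tested design.
from openai import OpenAI

client = OpenAI()


def compress_history(messages: list[dict], keep_last: int = 6) -> list[dict]:
    """Summarize everything but the most recent turns into a single 'memory' message."""
    if len(messages) <= keep_last + 1:  # nothing worth compressing yet
        return messages
    system, older, recent = messages[0], messages[1:-keep_last], messages[-keep_last:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in older)
    summary = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Summarize this lesson-design conversation, "
                                          "preserving decisions, constraints, and open questions."},
            {"role": "user", "content": transcript},
        ],
    ).choices[0].message.content
    memory = {"role": "system", "content": f"Summary of the conversation so far: {summary}"}
    return [system, memory, *recent]
```

Approaches like this trade fidelity for focus, which is exactly the kind of trade-off a good developer can tune once the real development work starts.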
The second thing it could use is better models. Claude 2 gives better answers than GPT-4 when I walk it through the script manually. Claude 3 may be even better when it comes out. Google will release its new Gemini model soon. OpenAI can’t hold off on GPT-5 for too long without risking losing its leadership position. We may also get Meta’s LLama 3 and other strong open-source contenders in the next six months. All of these will likely provide improvements over the output we’re getting now.
The third thing I think ALDA needs is marked up examples of finished output. Assessments are particularly hard for the models to do well without strong, efficacy-tested examples that have the parts and their relationships labeled. I know where to get great examples but need technical help to get them. Also, if the content is marked up, it can be converted to other formats and imported into various learning systems.
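To show what “marked up” means here, the sketch below labels the parts of a single assessment item. The field names are my own assumptions rather than any standard, but once the parts are labeled this way, the item can be transformed into whatever format a given learning system imports instead of living as an undifferentiated blob of text.

```python
# An illustrative marked-up assessment item (field names are assumptions, not a standard).
assessment_item = {
    "learning_objective": "Explain why a separate skill tree matters for CBE platforms",
    "stem": "Why does a CBE platform need a skill tree that is separate from any one course?",
    "answer_options": [
        {"text": "So a skill learned in one course can count toward another",
         "correct": True,
         "feedback": "Right: competencies are tracked across courses, not inside one grade book."},
        {"text": "So each course can define its own private list of objectives",
         "correct": False,
         "targets_misconception": "objectives belong to individual courses",
         "feedback": "Think about what happens when the same skill shows up in two courses."},
        {"text": "So grading can stay on a traditional A-F scale",
         "correct": False,
         "targets_misconception": "CBE keeps letter grades",
         "feedback": "Reconsider how mastery learning records progress."},
    ],
}
```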
These three elements—long-term memory usage, “few-shot” examples of high-quality marked-up output, and the inevitable next versions of the generative AI models—should be enough to enable ALDA to have the capabilities that I think are likely to be the most impactful:
Longer and better lesson output
Better assessment quality
Ability to create whole modules or courses
Ability to export finished drafts into formats that various learning systems can import (including, for example, interactive assessment questions)
Ability to draw on a collection of source materials for content generation
Ability to rewrite the workflows to support different use cases relatively easily
But the ALDA project participants will have a big say in what we build and in what order. In each workshop in the series, we’ll release a new iteration based on the feedback from the group as they built content with the previous one. I am optimistic that we can accomplish all of the above and more based on what I’m learning and the expert input I’m getting so far.
Getting involved
If you play with the prototype and have feedback, please come back to this blog post and add your observations to the comments thread. The more detailed, the better. If I have my way, ALDA will eventually make its way out to everyone. Any observations or critiques you can contribute will help.
If you have the budget, you can sign your team up to participate in the design/build workshop series. The cost, which gets you all source code and artifacts in addition to the workshops and the networking, is $25,000 for the group for half a dozen half-day virtual design/build sessions, including quality networking with great organizations. You can find a downloadable two-page prospectus and an online participation application form here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.
To contact me for more information, please fill out this form:
Given the number of employees who successfully executed their work remotely at the height of the pandemic, it may come as no surprise that a substantial gap exists between the work arrangements that higher ed employees want and what institutions offer. According to the new CUPA-HR 2023 Higher Education Employee Retention Survey, although two-thirds of employees state that most of their duties could be performed remotely and two-thirds would prefer hybrid or remote work arrangements, two-thirds of employees are working completely or mostly on-site.
Inflexibility in work arrangements could be costly to institutions and contribute to ongoing turnover in higher ed. Flexible work is a significant predictor of employee retention: Employees who have flexible work arrangements that better align with their preferences are less likely to look for other job opportunities.
Flexible Work Benefits: A No-Brainer for Retention
While more than three-fourths of employees are satisfied with traditional benefits such as paid time off and health insurance, survey respondents were the most dissatisfied with the benefits that promote a healthier work-life balance. These include remote work policies and schedule flexibility, as well as childcare benefits and parental leave policies.
Most employees are not looking for drastic changes in their work arrangements. Even small changes in remote policies and more flexible work schedules can make a difference. Allowing one day of working from home per week, implementing half-day Fridays, reducing summer hours and allowing employees some say in their schedules are all examples of flexible work arrangements that provide employees some autonomy in achieving a work-life balance that will improve productivity and retention.
A more flexible work environment could be an effective strategy for institutions looking to retain their top talent, particularly those under the age of 45, who are significantly more likely not only to look for other employment in the coming year but also to value flexible and remote work as a benefit. Flexible work arrangements could also support efforts to recruit and retain candidates who are often underrepresented: the survey found that women and people of color are more likely to prefer remote or hybrid options.
Explore CUPA-HR Resources. Discover best practices and policy models for navigating the challenges that come with added flexibility, including managing a multi-state workforce:
Remember the Two-Thirds Rule. In reevaluating flexible and remote work policies, remember: Two-thirds of higher ed employees believe most of their duties can be performed remotely and two-thirds would prefer hybrid or remote work arrangements, yet two-thirds are compelled to work mostly or completely on-site.