Job titles, and the names given to organisational roles, are important to the meaning individuals derive from their work and to their engagement with it.
Yet within many UK universities, and especially the post-92s, the trend is towards new job titles that carry potentially negative connotations for job holders, with implications for the meaning of their work and for their commitment to it and to their institution.
Such universities have been moving away from the conventional “lecturer” titles and adopting the US system. US institutions typically designate their junior (untenured) academics as Assistant Professors, with an intermediate grade of Associate Professor and then a full Professor grade. Within the US system, most long-serving and effective staff can expect to progress to full Professor by mid-career.
Yet, in this new UK system, only around 15-20 per cent of academics are, or are ever likely to be, full Professors, and many academics will spend their entire careers as Assistant Professors or Associate Professors, retiring with one of these diminutive job titles.
The previous, additive job titles, progressing from Lecturer to Senior Lecturer and then to Principal Lecturer or Reader, had meaning outside the university and, crucially, had meaning for the post-holders, giving a sense of achievement and pride as they progressed. Retiring as a Senior or Principal Lecturer was deemed more than acceptable.
Status and self-esteem
It is not hard to imagine the impact that these changes in job titles are having upon mid and late-career academics who may have little chance of gaining promotion to full professor, perhaps because quite simply they draw the line at working “just” 60 hours a week, 50 weeks a year. The impact on status and self-esteem is immense. Imagine explaining to your grandkids that you are, in essence, an assistant to a professor. As an Associate Professor, and particularly in a vocational discipline, one of the authors is often asked, “I can understand you wanting to work part-time for a university, but what’s your main job?” Associate, affiliate, adjunct – these names are pretty much the same thing to outsiders.
Managerially, though, the change from designating academics as Senior Lecturers to Assistant Professors, and Principal Lecturers to Associate Professors, is genius. These diminutive job titles confer inferiority – but with the promise that if you keep your nose to the grindstone and keep up the 60+ hour weeks, 50 weeks a year, you might be in with a chance of a decent job title, as a professor. What a fantastic, and completely friction-free, way of turning the performative screw.
The UK university sector is not alone; other public sector organisations have similarly got into a meaning muddle over the naming of their jobs. For example, in the British civil service, a key middle management role is labelled “Grade B2+”, whereas a relatively junior operational role is designated a rather grand-sounding “Executive Officer”. And just last autumn, the NHS acknowledged that names do matter, abandoning the designation “junior” doctor, which was used to encompass all medics in the grades below consultant and which their union described as “misleading and demeaning”; it has been replaced with “resident” doctor.
Meaningful work
A name gives meaning to workers. It gives status, prestige, and identity. While organisations, such as universities, that fail to realise the importance of job titles may be able to turn the screw in the short term, extracting ever more work from their junior-sounding Assistant and Associate Professors, in the longer term they will surely have an ever more demoralised and demotivated workforce for whom the job has little meaning beyond the pay.
And, since pay for university academics in the UK has been so badly eroded in recent decades, job title conventions are a self-inflicted injury – one that risks academics’ engagement and wellbeing and, ultimately, their institutions’ performance.
This audiobook, narrated by Kate Harper, examines how the financial pressures of paying for college affect the lives and well-being of middle-class families. The struggle to pay for college is one of the defining features of middle-class life in America today. At kitchen tables all across the country, parents agonize over whether to burden their children with loans or to sacrifice their own financial security by taking out a second mortgage or draining their retirement savings. Indebted takes readers into the homes of middle-class families throughout the nation to reveal the hidden consequences of student debt and the ways that financing college has transformed family life.
Caitlin Zaloom gained the confidence of numerous parents and their college-age children, who talked candidly with her about stressful and intensely personal financial matters that are usually kept private. In this remarkable book, Zaloom describes the profound moral conflicts for parents as they try to honor what they see as their highest parental duty—providing their children with opportunity—and shows how parents and students alike are forced to take on enormous debts and gamble on an investment that might not pay off.
What emerges is a troubling portrait of an American middle class fettered by the “student finance complex”—the bewildering labyrinth of government-sponsored institutions, profit-seeking firms, and university offices that collect information on household earnings and assets, assess family needs, and decide who is eligible for aid and who is not. Superbly written and unflinchingly honest, Indebted breaks through the culture of silence surrounding the student debt crisis, revealing the unspoken costs of sending our kids to college.
What is the state of free speech on college campuses? More students now support shouting down speakers. Several institutions faced external pressure from government entities to punish constitutionally protected speech. And the number of “red light” institutions — those with policies that significantly restrict free speech — rose for the second year in a row, reversing a 15-year trend of decreasing percentages of red light schools, according to FIRE research.
These are just a few of the concerns shared by FIRE’s Lead Counsel for Government Affairs Tyler Coward, who joined lawmakers, alumni groups, students, and stakeholders last week in a discussion on the importance of improving freedom of expression on campus.
Rep. Greg Murphy led the roundtable, along with Rep. Virginia Foxx, Chairwoman of the House Committee on Education and the Workforce, and Rep. Burgess Owens.
But the picture on campus isn’t all bad news. Tyler highlighted some positive developments, including an increase in “green light” institutions — schools with written policies that do not seriously threaten student expression — commitments to institutional neutrality, and the fact that “more and more institutions are voluntarily abandoning their requirements that faculty and students submit so-called DEI statements for admission, application, promotion, and tenure review.”
Tyler noted the passage of the Respecting the First Amendment on Campus Act in the House. The bill requires public institutions of higher education to “ensure their free speech policies align with Supreme Court precedent that protects students’ rights — regardless of their ideology or viewpoint.” Furthermore, crucial Title IX litigation has resulted in the Biden rules being enjoined in 26 states due to concerns over due process and free speech.
Lastly, Tyler highlighted areas of concern drawn from FIRE’s surveys of students and faculty on campus, including the impact of student encampment protests on free expression on college campuses.
WATCH VIDEO: FIRE Lead Counsel for Government Affairs Tyler Coward delivers remarks at Rep. Greg Murphy’s 4th Annual Campus Free Speech Roundtable on Dec. 11, 2024.
Students across the political spectrum are facing backlash or threats of censorship for voicing their opinions. Jasmyn Jordan, an undergraduate student at University of Iowa and the National Chairwoman of Young Americans for Freedom, shared personal experiences of censorship YAF members have faced on campus due to their political beliefs. Gabby Dankanich, also from YAF, provided additional examples, including the Clovis Community College case. At Clovis, the administration ordered the removal of flyers YAF students posted citing a policy against “inappropriate or offensive language or themes.” (FIRE helped secure a permanent injunction on behalf of the students. Additionally, Clovis’s community college district will have to pay the students a total of $330,000 in damages and attorney’s fees.)
VICTORY: California college that censored conservative students must pay $330,000, adopt new speech-protective policy, and train staff
Federal court orders Clovis and three other community colleges to stop discriminating against student-group speech based on viewpoint.
Conservative students aren’t the only ones facing challenges in expressing their ideas on campus. Kenny Xu, executive director of Davidsonians for Free Speech and Discourse, emphasized that free speech is not a partisan issue. Citing FIRE data, he noted that 70% of students feel at least somewhat uncomfortable publicly disagreeing with a professor in class. “I can assure you that 70% of students are not conservatives,” he remarked. Kyle Beltramini from the American Council of Trustees and Alumni reinforced this point. Sharing findings from ACTA’s own research, he emphasized that “this is not a problem faced by a single group of students but rather an experience shared across the ideological spectrum.”
The roundtable identified faculty as a critical part of the solution, though they acknowledged faculty members often fear speaking up. FIRE’s recent survey of over 6,000 faculty across 55 U.S. colleges and universities supports this claim. According to the results, “35% of faculty say they recently toned down their writing for fear of controversy, compared to 9% who said the same during the McCarthy era.”
While this data underscores the challenges faculty face, it also points to a broader issue within higher education. Institutions, Tyler said, have a dual obligation to “ensure that speech rights are protected” and that “students remain free from harassment based on a protected characteristic.” Institutions did not get this balance right this year. But, ACTA’s Kyle Beltramini noted the positive development that these longstanding issues have finally migrated into the public consciousness: “By and large, policy makers and the public have been unaware of the vast censorial machines that colleges and universities have been building up to police free speech, enforce censorship, and maintain ideological hegemony in the name of protecting and supporting their students,” he stated. This moment presents an opportunity to provide constructive feedback to institutions to hopefully address these shortcomings.
Alumni are also speaking up, and at the roundtable they shared their perspectives on promoting free speech and intellectual diversity in higher education. Among them was Tom Neale, UVA alumnus and president of The Jefferson Council and the Alumni Free Speech Alliance, who highlighted the importance of connecting with alumni from institutions like Cornell, Davidson, and Princeton, since they’re “all united by their common goal to restore true intellectual diversity and civil discourse in American higher-ed.”
Other participants at the roundtable included members of Speech First, and Princetonians for Free Speech.
So what can be done? Participants proposed several solutions, including passing legislation that prohibits the use of political litmus tests in college admissions, hiring, and promotion decisions. They also suggested integrating First Amendment education into student orientation programs to ensure incoming undergraduates understand their rights and responsibilities on campus. Additionally, they emphasized the importance of developing programs that teach students how to engage constructively in disagreements — rather than resorting to censorship — and to promote curiosity, dissent, talking across lines of difference, and an overall culture of free expression on campus.
FIRE thanks Rep. Murphy for the opportunity to contribute to this vital conversation. We remain committed to working with legislators who share our dedication to fostering a society that values free inquiry and expression.
You can watch the roundtable on Rep. Murphy’s YouTube channel.
A full picture of neurodiversity in the workplace includes understanding how gender shapes employees’ experiences of neurodevelopmental disorders. Although they’re diagnosed at roughly the same rates as men, women with ADHD may be overlooked in conversations about attention-deficit/hyperactivity disorder. Until fairly recently, ADHD was seen as primarily affecting children, with the typical view of someone with the disorder as a restless or hyperactive boy.
Awareness about how ADHD can manifest differently in women — and how gender stereotypes play a significant role in diagnosis and treatment — can help foster a culture that uplifts neurodiversity and the skills that neurodiverse employees can offer an organization. Employees with ADHD bring unique strengths and perspective to their work, such as creativity, courage and hyperfocus.
Here’s what HR needs to know about ADHD and how it can be different for women.
Misconceptions About ADHD
Rather than a set of behaviors, ADHD is a neurodevelopmental condition affecting about 2% to 5% of adults, and falls under the same broad umbrella as autism spectrum disorder and dyslexia. A stereotypical picture of someone with ADHD is “a boy who can’t sit still and is disruptive in class,” according to Dr. Deepti Anbarasan, a clinical associate professor of psychiatry at New York University.
Women who receive ADHD diagnoses in adulthood may have struggled with inattention and executive functioning for much of their lives. Because girls and women with ADHD often present as inattentive rather than hyperactive, and because women frequently develop coping skills that mask the condition, they tend to receive late-in-life diagnoses. By the time women reach adulthood, however, the rates of diagnosis are close to those seen in men.
ADHD in women often presents as challenges with executive functioning, which can include difficulties with attention and focus, as well as emotional dysregulation, trouble with finishing tasks or juggling multiple tasks, and absentmindedness. Women with ADHD might also suffer from anxiety and depression, and even suicide attempts and self-harm. Some people with ADHD compensate by working extra hours during their personal time to keep up with their day-to-day work, causing added stress.
A Strengths-Based Approach
Though ADHD can pose real challenges at work, a strengths-based approach highlights the advantages that employees with ADHD bring to their jobs. In a recent study, for example, 50 adults with ADHD identified the positive aspects of living with the condition, including energy and drive, a high degree of creativity, an ability to hyperfocus, and traits such as resilience, curiosity, and empathy. The same study emphasizes that experiencing ADHD as challenging or beneficial depends on the context and sociocultural environment that a person is in.
HR as a Leader in Neurodiversity
Given how much context and sociocultural environment matter, creating a campus climate that supports neurodiversity is critical. HR can champion neurodiversity through awareness and well-being programs. Because ADHD often occurs alongside depression and anxiety, a holistic approach to well-being is recommended. (Learn how the University of Texas Health Science Center at San Antonio gained traction with their mental health awareness campaign.)
HR can also advocate for accommodations to support neurodivergent employees. For example, task separation is a common management strategy to help employees set their work priorities. In emails and written communication this might look like establishing clear parameters, breaking requests down into bulleted lists, and clearly spelling out instructions like “two-minute ask” or “response requested.” (For many more suggestions on how to uplift neurodiversity on campus, including practical tips for accommodations, read Neurodiversity in the Higher Ed Workplace.)
There’s a business case to be made for a robust attention to neurodiversity: increased retention and productivity, reduced absenteeism, and developing employees’ strengths. Supporting neurodiversity also builds an appealing workplace culture, one that signals to employees that their whole person is valued.
Given the number of employees who successfully executed their work remotely at the height of the pandemic, it may come as no surprise that a substantial gap exists between the work arrangements that higher ed employees want and what institutions offer. According to the new CUPA-HR 2023 Higher Education Employee Retention Survey, although two-thirds of employees state that most of their duties could be performed remotely and two-thirds would prefer hybrid or remote work arrangements, two-thirds of employees are working completely or mostly on-site.
Inflexibility in work arrangements could be costly to institutions and contribute to ongoing turnover in higher ed. Flexible work is a significant predictor of employee retention: Employees who have flexible work arrangements that better align with their preferences are less likely to look for other job opportunities.
Flexible Work Benefits: A No-Brainer for Retention
While more than three-fourths of employees are satisfied with traditional benefits such as paid time off and health insurance, survey respondents were the most dissatisfied with the benefits that promote a healthier work-life balance. These include remote work policies and schedule flexibility, as well as childcare benefits and parental leave policies.
Most employees are not looking for drastic changes in their work arrangements. Even small changes in remote policies and more flexible work schedules can make a difference. Allowing one day of working from home per week, implementing half-day Fridays, reducing summer hours and allowing employees some say in their schedules are all examples of flexible work arrangements that provide employees some autonomy in achieving a work-life balance that will improve productivity and retention.
A more flexible work environment could be an effective strategy for institutions looking to retain their top talent, particularly those under the age of 45, who are significantly more likely not only to look for other employment in the coming year, but also more likely to value flexible and remote work as a benefit. Flexible work arrangements could also support efforts to recruit and retain candidates who are often underrepresented: the survey found that women and people of color are more likely to prefer remote or hybrid options.
Explore CUPA-HR Resources. Discover best practices and policy models for navigating the challenges that come with added flexibility, including managing a multi-state workforce.
Remember the Two-Thirds Rule. In reevaluating flexible and remote work policies, remember: Two-thirds of higher ed employees believe most of their duties can be performed remotely and two-thirds would prefer hybrid or remote work arrangements, yet two-thirds are compelled to work mostly or completely on-site.
A friend recently asked me for advice on a problem he was wrestling with involving a 1EdTech interoperability standard. It was the same old problem of a standard not quite delivering true interoperability because people implement it differently. I suggested he try using a generative AI tool to fix his problem. (I’ll explain how shortly.)
I don’t know if my idea will work yet—he promised to let me know once he tries it—but the idea got me thinking. Generative AI probably will change EdTech integration, interoperability, and the impact that interoperability standards can have on learning design. These changes, in turn, impact the roles of developers, standards bodies, and learning designers.
In this post, I’ll provide a series of increasingly ambitious use cases related to the EdTech interoperability work of 1EdTech (formerly known as IMS Global). In each case, I’ll explore how generative AI could impact similar work going forward, how it changes the purpose of interoperability standards-making, and how it impacts the jobs and skills of various people whose work is touched by the standards in one way or another.
Generative AI as duct tape: fixing QTI
1EdTech’s Question and Test Interoperability (QTI) standard is one of its oldest standards that’s still widely used. The earliest version on the 1EdTech website dates back to 2002, while the most recent version was released in 2022. You can guess from the name what it’s supposed to do. If you have a test, or a test question bank, in one LMS, QTI is supposed to let you migrate it into another without copying and pasting. It’s an import/export standard.
It never worked well. Everybody has their own interpretation of the standard, which means that importing somebody else’s QTI export is never seamless. When speaking recently about QTI to a friend at an LMS company, I commented that it only works about 80% of the time. My friend replied, “I think you’re being generous. It probably only works about 40% of the time.” 1EdTech has learned many lessons about achieving consistent interoperability in the decades since QTI was created. But it’s hard to fix a complex legacy standard like this one.
Meanwhile, the friend I mentioned at the top of the post asked me recently about practical advice for dealing with this state of affairs. His organization imports a lot of QTI question banks from multiple sources. So his team spends a lot of time debugging those imports. Is there an easier way?
I thought about it.
“Your developers probably have many examples that they’ve fixed by hand by now. They know the patterns. Take a handful of before-and-after examples. Embed them into a prompt for a generative AI that’s good at software code, like HuggingChat. [As I was drafting this post, OpenAI announced that ChatGPT now has a code interpreter.] Then give the generative AI a novel input and see if it produces the correct output.”
Generative AI models are good at pattern matching. The differences in QTI implementations are likely to have patterns to them that an LLM can detect, even if those differences change over time (because, for example, one vendor’s QTI implementation changed over time).
In fact, pattern matching on this scale could work very well with a smaller generative AI model. We’re used to talking about ChatGPT, Google Bard, and other big-name systems that have hundreds of billions of parameters. Think of parameters as computing Legos. One major reason that ChatGPT is so impressive is that it uses a lot of computing Legos, which also makes it expensive, slow, and computationally intensive. But if your goal is to match patterns against a set of relatively well-structured texts such as QTI files, you could probably train a much smaller model than ChatGPT to reliably translate between implementations for you. The smallest models, like Vicuna, have only about 7 billion parameters. That may sound like a lot, but it’s small enough to run on a personal computer (or possibly even a mobile phone). Think about it this way: the QTI task we’re trying to solve is roughly equivalent in complexity to the spell-checking and one-word type-ahead functions on your phone today. A generative AI model for fixing QTI imports could probably be trained for a few hundred dollars and run for pennies.
This use case has some other desirable characteristics. First, it doesn’t have to work at high volume in real time. It can be a batch process. Throw the dirty dishes in the dishwasher, turn it on, and take out the clean dishes when the machine shuts off. Second, the task has no significant security risks and wouldn’t expose any personally identifiable information. Third, nothing terrible happens if the thing gets a conversion wrong every now and then. Maybe the organization would have to fix 5% of the conversions rather than 100%. And overall, it should be relatively cheap. Maybe not as cheap as running an old-fashioned deterministic program that’s optimized for efficiency. But maybe cheap enough to be worth it. Particularly if the organization has to keep adding new and different QTI implementation imports. It might be easier and faster to adjust the model with fine-tuning or prompting than it would be to revise a set of if/then statements in a traditional program.
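To make this concrete, here is a minimal sketch of what such a batch conversion job might look like, assuming an OpenAI-style chat completions client. The file paths, model name, and prompt wording are illustrative assumptions, not anything 1EdTech or any vendor actually ships.

```python
# Hypothetical sketch: batch-convert QTI exports using few-shot examples of
# imports that developers have already fixed by hand. Assumes the openai
# Python package (v1+) and an OPENAI_API_KEY in the environment; file names
# and the model choice are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Pairs of (vendor export, hand-corrected version) the team has accumulated.
EXAMPLES = [
    (Path("examples/vendor_a_raw.xml").read_text(),
     Path("examples/vendor_a_fixed.xml").read_text()),
    (Path("examples/vendor_b_raw.xml").read_text(),
     Path("examples/vendor_b_fixed.xml").read_text()),
]

SYSTEM_PROMPT = (
    "You convert QTI question-bank XML from assorted vendor dialects into the "
    "dialect our import pipeline accepts. Preserve question text, answer keys, "
    "and feedback exactly; change only structure and attributes."
)

def convert(raw_qti: str) -> str:
    """Translate one QTI file, guided by the before/after examples."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for before, after in EXAMPLES:
        messages.append({"role": "user", "content": before})
        messages.append({"role": "assistant", "content": after})
    messages.append({"role": "user", "content": raw_qti})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

if __name__ == "__main__":
    # Run as an overnight batch job; a human reviews the handful that still fail.
    Path("converted").mkdir(exist_ok=True)
    for path in Path("imports").glob("*.xml"):
        Path("converted", path.name).write_text(convert(path.read_text()))
```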
How would the need for skilled programmers change? Somebody would still need to understand how the QTI mappings work well enough to keep the generative AI humming along. And somebody would have to know how to take care of the AI itself (although that process is getting easier every day, especially for this kind of a use case). The repetitive work they are doing now would be replaced by the software over time, freeing up the human brains for other things that human brains are particularly good at. In other words, you can’t get rid of your programmer but you can have that person engaging in more challenging, high-value work than import bug whack-a-mole.
How does it change the standards-making process? In the short term, I’d argue that 1EdTech should absolutely try to build an open-source generative AI of the type I’m describing rather than trying to fix QTI, a task it has not managed in over 20 years. This strikes me as a far shorter path to achieving the original purpose for which QTI was intended, which is to move question banks from one system to another.
This conclusion, in turn, leads to a larger question: Do we need interoperability standards bodies in the age of AI?
My answer is a resounding “yes.”
Going a step further: software integration
QTI provides data portability but not integration. It’s an import/export format. The fact that Google Docs can open up a document exported from Microsoft Word doesn’t mean that the two programs are integrated in any meaningful way.
So let’s consider Learning Tools Interoperability (LTI). LTI was quietly revolutionary. Before it existed, any company building a specialized educational tool would have to write separate integrations for every LMS.
The nature of education is that it’s filled with what folks in the software industry would disparagingly call “point solutions.” If you’re teaching students how to program in Python, you need a Python programming environment simulator. But that tool won’t help a chemistry professor who really needs virtual labs and molecular modeling tools. And none of these tools are helpful for somebody teaching English composition. There simply isn’t a single generic learning environment that will work well for teaching all subjects. None of these tools will ever sell enough to make anybody rich.
Therefore, the companies that make these necessary niche teaching tools will tend to be small. In the early days of the LMS, they couldn’t afford to write a separate integration for every LMS. Which meant that not many specialized learning tools were created. As small as these companies’ target markets already were, many of them couldn’t afford to limit themselves to the subset of, say, chemistry professors whose universities happened to use Blackboard. It didn’t make economic sense.
LTI changed all that. Any learning tool provider could write one integration and have their product work with every LMS. Today, 1EdTech lists 240 products that are officially certified as supporting the LTI standard. Many more support the standard but are not certified.
Would LTI have been created in a world in which generative AI existed? Maybe not. The most straightforward analogy is Zapier, which connects different software systems via their APIs. ChatGPT and its ilk could act as instant Zapier. A programmer could feed the API documentation of both systems to a generative AI, ask it to write an integration for a particular purpose, and then ask the same AI for help with any debugging.
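As a rough illustration of that “instant Zapier” idea, the sketch below hands two sets of API documentation to a model and asks for first-draft glue code. The documentation file names, the task description, and the model are assumptions made up for this example; the output is a draft for a human engineer to review, not something to deploy as-is.

```python
# Hypothetical sketch of generative AI as "instant Zapier": give the model both
# APIs' documentation and ask for first-draft integration code. Everything here
# (file names, task, model) is illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

lms_api_docs = Path("docs/lms_gradebook_api.md").read_text()
quiz_tool_docs = Path("docs/quiz_tool_api.md").read_text()

prompt = f"""You are writing a one-off integration script.

LMS gradebook API documentation:
{lms_api_docs}

Quiz tool API documentation:
{quiz_tool_docs}

Task: when a student finishes a quiz in the tool, post the score to the matching
column in the LMS gradebook. Include error handling and log every grade posted.
Return a single Python file."""

draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# The draft still goes to a human engineer for review before it touches real data.
print(draft.choices[0].message.content)
```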
Again, notice that one still needs a programmer. Somebody needs to be able to read the APIs, understand the goals, think about the trade-offs, give the AI clear instructions, and check the finished program. The engineering skills are still necessary. But the work of actually writing the code is greatly reduced. Maybe by enough that generative AI would have made LTI unnecessary.
But probably not. LTI connections pass sensitive student identity and grade information back and forth. They have to be secure and reliable. The IT department has legal obligations, not to mention user expectations, that a well-tested standard helps alleviate (though not eliminate). On top of that, it’s just a bad idea to have bits of glue code spread here, there, and everywhere, regardless of whether a human or a machine writes it. Somebody—an architect—needs to look at the big picture. They need to think about maintainability, performance, security, data management, and a host of other concerns. There is value in having a single integration standard that has been widely vetted and follows a pattern of practices that IT managers can handle the same way across a wide range of product integrations.
At some point, if a software integration fails to pass student grades to the registrar or leaks personal data, a human is responsible. We’re not close to the point where we can turn over ethical or even intellectual responsibility for those challenges to a machine. If we’re not careful, generative AI will simply write spaghetti code much faster than in the old days.
The social element of knowledge work
More broadly, there are two major value components to the technical interoperability standards process. The first is obvious: technical interoperability. It’s the software. The second is where the deeper value lies. It’s in the conversation that leads to the software. I’ve participated in a 1EdTech specification working group. When the process went well, we learned from each other. Each person at that table brought a different set of experiences to an unsolved problem. In my case, the specification we were working on sent grade rosters from the SIS to the LMS and final grades back from the LMS to the SIS. It sounds simple. It isn’t. We each brought different experiences and lessons learned regarding many aspects of the problem, from how names are represented in different cultures to how SIS and LMS users think differently in ways that impact interoperability. In the short term, a standard is always a compromise. Each creator of a software system has to make adjustments that accommodate the many ways in which others thought differently when they built their own systems. But if the process works right, everybody goes home thinking a little differently about how their systems could be built better for everybody’s benefit. In the longer term, the systems we continue to build over time reflect the lessons we learn from each other.
Generative AI could make software integration easier. But without the conversation of the standards-making process, we would lose the opportunity to learn from each other. And if AI can reduce the time and cost of the former, then maybe participants in the standards-making effort will spend more time and energy on the latter. The process would have to be rejiggered somewhat. But at least in some cases, participants wouldn’t have to wait until the standard was finalized before they started working on implementing it. When the cost of implementation is low enough and the speed is fast enough, the process can become more of an iterative hackathon. Participants can build working prototypes more quickly. They would still have to go back to their respective organizations and do the hard work of thinking through the implications, finding problems or trade-offs and, eventually, hardening the code. But at least in some cases, parts of the standards-making process could be more fluid and rapidly iterative than they have been. We could learn from each other faster.
This same principle could apply inside any organization or partnership in which different groups are building different software components that need to work together. Actual knowledge of the code will still be important to check and improve the work of the AI in some cases and write code in others. Generative AI is not ready to replace high-quality engineers yet. But even as it improves, humans will still be needed.
John Seely Brown famously traced a drop in Xerox copier repair quality to a change in the lunch schedule for its repair technicians. It turns out that technicians learn a lot from solving real problems in the field and then sharing war stories with each other. When the company changed the schedule so that technicians had less time together, repair effectiveness dropped noticeably. I don’t know if a software program was used to optimize the scheduling, but one could easily imagine that being the case. Algorithms are good at concrete problems like optimizing complex schedules. On the other hand, they have no visibility into what happens at lunch or around the coffee pot. Nobody writes those stories down. They can’t be ingested and processed by a large language model. Nor can they be put together in novel ways by quirky human minds to come up with new insights.
That’s true in the craft of copier repair and definitely true in the craft of software engineering. I can tell you from direct experience that interoperability standards-making is much the same. We couldn’t solve the seemingly simple problem of getting the SIS to talk to the LMS until we realized that registrars and academics think differently about what a “class” or a “course” is. We figured that out by talking with each other and with our customers.
At its heart, standards-making is a social process. It’s a group of people who have been working separately on solving similar problems coming together to develop a common solution. They do this because they’ve decided that the cost/benefit ratio of working together is better than the ratio they’ve achieved when working separately. AI lowers the costs of some work. But it doesn’t yet provide an alternative to that social interaction. If anything, it potentially lowers some of the costs of collaboration by making experimentation and iteration cheaper—if and only if the standards-making participants embrace and deliberately experiment with that change.
That’s especially true the more 1EdTech tries to have a direct role in what it refers to as “learning impact.”
The knowledge that’s not reflected in our words
In 2019, I was invited to give a talk at a 1EdTech summit, which I published a version of under the title “Pedagogical Intent and Designing for Inquiry.” Generative AI was nowhere on the scene at the time. But machine learning was. At the same time, long-running disappointment and disillusionment with learning analytics—analytics that actually measure students’ progress as they are learning—was palpable.
I opened my talk by speculating about how machine learning could have helped with SIS/LMS integration, much as I speculated earlier in the post about how generative AI might help with QTI:
Now, today, we would have a different possible way of solving that particular interoperability problem than the one we came up with over a decade ago. We could take a large data set of roster information exported from the SIS, both before and after the IT professionals massaged it for import into the LMS, and aim a machine learning algorithm at it. We then could use that algorithm as a translator. Could we solve such an interoperability problem this way? I think that we probably could. I would have been a weaker product manager had we done it that way, because I wouldn’t have gone through the learning experience that resulted from the conversations we had to develop the specification. As a general principle, I think we need to be wary of machine learning applications in which the machines are the only ones doing the learning. That said, we could have probably solved such a problem this way and might have been able to do it in a lot less time than it took for the humans to work it out.
I will argue that today’s EdTech interoperability challenges are different. That if we want to design interoperability for the purposes of insight into the teaching and learning process, then we cannot simply use clever algorithms to magically draw insights from the data, like a dehumidifier extracting water from thin air. Because the water isn’t there to be extracted. The insights we seek will not be anywhere in the data unless we make a conscious effort to put them there through design of our applications. In order to get real teaching and learning insights, we need to understand the intent of the students. And in order to understand that, we need insight into the learning design. We need to understand pedagogical intent.
That new need, in turn, will require new approaches in interoperability standards-making. As hard as the challenges of the last decade have been, the challenges of the next one are much harder. They will require different people at the table having different conversations.
The core problem is that the key element for interpreting both student progress and the effectiveness of digital learning experiences—pedagogical intent—is not encoded in most systems. No matter how big your data set is, it doesn’t help you if the data you need aren’t in it. For this reason, I argued, fancy machine learning tricks aren’t going to give us shortcuts.
That problem is the same, and perhaps even worse in some ways, with generative AI. All ChatGPT knows is what it’s read on the internet. And while it has made progress in specific areas at reading between the lines, the fact is that important knowledge, including knowledge about applied learning design, is simply extremely scarce in the data it can access, and even in the data living in our learning systems that it can’t access.
The point of my talk was that interoperability standards could help by supplying critical metadata—context—if only the standards makers set that as their purpose, rather than simply making sure that quiz questions end up in the right place when migrating from one LMS to another.
I chose to open the talk by highlighting the ambiguity of language that enables us to make art. I chose this passage from Shakespeare’s final masterpiece, The Tempest:
O wonder!
How many goodly creatures are there here!
How beauteous mankind is! O brave new world
That has such people in’t!
William Shakespeare, The Tempest
It’s only four lines. And yet it is packed with double entendres and the ambiguity that gives actors room to make art:
Here’s the scene: Miranda, the speaker, is a young woman who has lived her entire life on an island with nobody but her father and a strange creature who she may think of as a brother, a friend, or a pet. One day, a ship becomes grounded on the shore of the island. And out of it comes, literally, a handsome prince, followed by a collection of strange (and presumably virile) sailors. It is this sight that prompts Miranda’s exclamation.
As with much of Shakespeare, there are multiple possible interpretations of her words, at least one of which is off-color. Miranda could be commenting on the hunka hunka manhood walking toward her.
“How beauteous mankind is!”
Or. She could be commenting on how her entire world has just shifted on its axis. Until that moment, she knew of only two other people in all of existence, each of whom she had known her entire life and with each of whom she had a relationship that she understood so well that she took it for granted. Suddenly, there was literally a whole world of possible people and possible relationships that she had never considered before that moment.
“O brave new world / That has such people in’t”
So what is on Miranda’s mind when she speaks these lines? Is it lust? Wonder? Some combination of the two? Something else?
The text alone cannot tell us. The meaning is underdetermined by the data. Only with the metadata supplied by the actor (or the reader) can we arrive at a useful interpretation. That generative ambiguity is one of the aspects of Shakespeare’s work that makes it art.
But Miranda is a fictional character. There is no fact of the matter about what she is thinking. When we are trying to understand the mental state of a real-life human learner, then making up our own answer because the data are not dispositive is not OK. As educators, we have a moral responsibility to understand a real-life Miranda having a real-life learning experience so that we can support her on her journey.
Generative AI like ChatGPT can answer questions about different ways to interpret Miranda’s lines in the play because humans have written about this question and made their answers available on the internet. If you give the chatbot an unpublished piece of poetry and ask it for an interpretation, its answers are not likely to be reliably sophisticated. While larger models are getting better at reading between the lines—a topic for a future blog post—they are not remotely as good as humans are at this yet.
Making the implicit explicit
This limitation of language interpretation is central to the challenge of applying generative AI to learning design. ChatGPT has reignited fantasies about robot tutors in the sky. Unfortunately, we’re not giving the AI the critical information it needs to design effective learning experiences:
The challenge that we face as educators is that learning, which happens completely inside the heads of the learners, is invisible. We cannot observe it directly. Accordingly, there are no direct constructs that represent it in the data. This isn’t a data science problem. It’s an education problem. The learning that is or isn’t happening in the students’ heads is invisible even in a face-to-face classroom. And the indirect traces we see of it are often highly ambiguous. Did the student correctly solve the physics problem because she understands the forces involved? Because she memorized a formula and recognized a situation in which it should be applied? Because she guessed right? The instructor can’t know the answer to this question unless she has designed a series of assessments that can disambiguate the student’s internal mental state.
In turn, if we want to find traces of the student’s learning (or lack thereof) in the data, we must understand the instructor’s pedagogical intent that motivates her learning design. What competency is the assessment question that the student answered incorrectly intended to assess? Is the question intended to be a formative assessment? Or summative? If it’s formative, is it a pre-test, where the instructor is trying to discover what the student knows before the lesson begins? Is it a check for understanding? A learn-by-doing exercise? Or maybe something that’s a little more complex to define because it’s embedded in a simulation? The answers to these questions can radically change the meaning we assign to a student’s incorrect answer to the assessment question. We can’t fully and confidently interpret what her answer means in terms of her learning progress without understanding the pedagogical intent of the assessment design.
But it’s very easy to pretend that we understand what the students’ answers mean. I could have chosen any one of many Shakespeare quotes to open this section, but the one I picked happens to be the very one from which Aldous Huxley derived the title of his dystopian novel Brave New World. In that story, intent was flattened through drugs, peer pressure, and conditioning. It was reduced to a small set of possible reactions that were useful in running the machine of society. Miranda’s words appear in the book in a bitterly ironic fashion from the mouth of the character John, a “savage” who has grown up outside of societal conditioning.
We can easily develop “analytics” that tell us whether students consistently answer assessment questions correctly. And we can pretend that “correct answer analytics” are equivalent to “learning analytics.” But they are not. If our educational technology is going to enable a rich and authentic vision of learning rather than a dystopian, reductivist parody of it, then our learning analytics must capture the nuances of pedagogical intent rather than flattening it.
A professor knows that her students tend to develop a common misconception that causes them to make practical mistakes when applying their knowledge. She very carefully crafts her course to address this misconception. She writes the content to address it. In her tests, she provides wrong answer choices—a.k.a. “distractors”—that students would choose if they had the misconception. She can tell, both individually and collectively, whether her students are getting stuck on the misconception by how often they pick the particular distractor that fits with their mistaken understanding. Then she writes feedback that the students see when they choose that particular wrong answer. She crafts it so that it doesn’t give away the correct answer but does encourage students to rethink their mistakes.
Imagine if all this information were encoded in the software. The hierarchy would look something like this:
Here is learning objective (or competency) 1
    Here is content about learning objective 1
    Here is assessment question A about learning objective 1
        Here is distractor c in assessment question A. Distractor c addresses misconception alpha.
            Here is feedback to distractor c. It is written specifically to help students rethink misconception alpha without giving away the answer to question A. This is critical because if we simply tell the student the answer to question A then we can’t get good data about the likelihood that the student has mastered learning objective 1.
All of that information is in the learning designer’s head and, somehow, implicitly embedded in the content in subtle details of the writing. But good luck teasing it out by just reading the textbook if you aren’t an experienced teacher of the subject yourself.
What if these relationships were explicit in the digital text? For individual students, we could tell which ones were getting stuck on a specific misconception. For whole courses, we could identify the spots that are causing significant numbers of students to get stuck on a learning objective or competency. And if that particular sticking point causes students to be more likely to fail either that course or a later course that relies on a correct understanding of a concept, then we could help more students persist, pass, stay in school, and graduate.
That’s how learning analytics can work if learning designers (or learning engineers) have tools that explicitly encode pedagogical intent into a machine-readable format. They can use machine learning to help them identify and smooth over tough spots where students tend to get stuck and fall behind. They can find the clues that help them identify hidden sticking points and adjust the learning experience to help students navigate those rough spots. We know this can work because, as I wrote about in 2012, Carnegie Mellon University (among others) has been refining this science and craft for decades.
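As a thought experiment, here is one way the hierarchy above might be captured in machine-readable form, along with the kind of simple analytic it would unlock. The field names and the function are invented for illustration; they are not an existing 1EdTech schema.

```python
# Hypothetical sketch of pedagogical-intent metadata for the example above.
# The schema is illustrative, not a real 1EdTech specification.
lesson_metadata = {
    "learning_objective": {
        "id": "LO-1",
        "description": "Learning objective (or competency) 1",
        "content": ["content-about-LO-1.html"],
        "assessments": [
            {
                "id": "Q-A",
                "purpose": "formative",  # vs. pre-test, check for understanding, summative...
                "distractors": [
                    {
                        "id": "c",
                        "misconception": "alpha",  # the misconception this wrong answer reveals
                        "feedback": (
                            "Prompts the student to rethink misconception alpha "
                            "without giving away the answer to question A."
                        ),
                    }
                ],
            }
        ],
    }
}

def misconception_signal(responses: list[dict], question_id: str, distractor_id: str) -> float:
    """Share of responses to a question that picked the distractor tied to a misconception."""
    relevant = [r for r in responses if r["question_id"] == question_id]
    if not relevant:
        return 0.0
    hits = sum(1 for r in relevant if r["choice_id"] == distractor_id)
    return hits / len(relevant)

# With intent encoded, "correct answer analytics" can become something closer to
# learning analytics: how often is this section getting stuck on misconception alpha?
example_responses = [
    {"question_id": "Q-A", "choice_id": "c"},
    {"question_id": "Q-A", "choice_id": "b"},
    {"question_id": "Q-A", "choice_id": "c"},
]
print(misconception_signal(example_responses, "Q-A", "c"))  # 0.666...
```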
Generative AI adds an interesting twist. The challenge with all this encoding of pedagogical intent is that it’s labor-intensive. Learning designers often don’t have time to focus on the work required to identify and improve small but high-value changes because they’re too busy getting the basics done. But generative AI that creates learning experiences modeled after the pedagogical metadata in the educational content it is trained on could provide a leg up. It could substantially speed up the work of writing the first-draft content so that designers can focus on the high-value improvements that humans are still better at than machines.
Realistically, for example, generative AI is not likely to know the particular common misconceptions that block students from mastering a competency, or how to probe for and remediate those misconceptions. But if it were trained on the right models, it could generate good first-draft content in a standards-based metadata format that could be imported into a learning platform. The format would have explicit placeholders for those critical probes and hints. Human experts, supported by machine learning, could focus their time on finding and remediating these sticking points in the learning process. Their improvements would be encoded with metadata, providing the AI with better examples of what effective educational content looks like. Which would enable the AI to generate better first-draft content.
1EdTech could help bring about such a world through standards-making. But they’d have to think about the purpose of interoperability differently, bring different people to the table, and run a different kind of process.
O brave new world that has such skilled people in’t
I spoke recently to the head of product development for an AI-related infrastructure company. His product could enable me to eliminate hallucinations while maintaining references and links to original source materials, both of which would be important in generating educational content. I explained a more elaborate version of the basic idea in the previous section of this post.
“That’s a great idea,” he said. “I can think of a huge number of applications. My last job was at Google. The training was terrible.”
Google. The company that’s promoting the heck out of their free AI classes. The one that’s going to “disrupt the college degree” with their certificate programs. The one that everybody holds up as leading the way past traditional education and toward skills-based education.
Their training is “terrible.”
Yes. Of course it is. Because everybody’s training is terrible. Their learning designers have the same problem I described academic learning designers as having in the previous section. Too much to develop, too little time. Only much, much worse. Because they have far fewer course design experts (if you count faculty as course design experts). Those people are the first to get cut. And EdTech in the corporate space is generally even worse than academic EdTech. Worst of all? Nobody knows what anybody knows or what anybody needs to know.
Academia, along with 1EdTech and several other standards bodies funded by corporate foundations, is pouring incredible amounts of time, energy, and money into building a data pipeline for tracking skills. Skill taxonomies move from repositories to learning environments, where evidence of student mastery is attached to those skills in the form of badges or comprehensive learner records. Which are then sent off to repositories and wallets.
The problem is, pipelines are supposed to connect to endpoints. They move something valuable from the place where it is found to the place where it is needed. Many valuable skills are not well documented if they are documented at all. They appear quickly and change all the time. The field of knowledge management has largely failed to capture this information in a timely and useful way after decades of trying. And “knowledge” management has tended to focus on facts, which are easier to track than skills.
In other words, the biggest challenge that folks interested in job skills face is not an ocean of well-understood skill information that needs to be organized but rather a problem of non-consumption. There isn’t enough real-world, real-time skill information flowing into the pipeline, and there are few people with real uses for it on the other side. Almost nobody in any company turns to their L&D departments to solve the kinds of skills problems that help people become more productive and advance in their careers. Certainly not at scale.
But the raw materials for solving this problem exist. As an HP CEO once famously noted, the company knows a lot. It just doesn’t know what it knows.
From developing supervisor competencies to transforming HR operations, human resources teams and HR practitioners across the country are doing great work every day.
CUPA-HR’s regional Higher Education HR Awards program recognizes some of the best and brightest in higher ed HR and honors HR professionals who have given their time and talents to the association.
Here are this year’s regional award recipients:
HR Excellence Award
This award honors transformative HR work in higher education and recognizes a team that has provided HR leadership resulting in significant and ongoing organizational change within its institution.
Office of Human Resources, Towson University (Eastern Region)
Towson University has had a partnership with Humanim, a nonprofit community workforce-development program, for many years. However, the pandemic created challenges that threatened to derail the partnership and the program. TU’s office of human resources, along with other anchor institutions, worked with Humanim to move parts of the program online, including virtual mock interviews, information sessions and panel discussions. Despite turnover created by the pandemic, the TU HR team was determined to maintain its relationship with Humanim and continue to provide employment opportunities to Baltimore residents. As TU’s top provider of quality temporary candidates for the university’s administrative functions, Humanim was also essential to the university during the pandemic. For their outstanding work, CUPA-HR has contributed $1,000 to Towson University.
Human Resources, Grand Valley State University (Midwest Region)
In February 2022, Grand Valley State University’s HR team began implementing a total transformation of their operations, shifting from a 60-year-old compliance-driven approach to HR to an HR business partner approach. This change resulted in the creation of a “one-stop shop,” where HR services could be delivered more efficiently and consistently across all campus departments. The team also moved to improve efficiency by merging payroll, HR administration and technology, and benefits into a total rewards unit. And in the fall of 2022, HR established a formal talent management unit to organize and advance talent efforts. With these changes, HR is well positioned to unify and transform the university’s organizational culture. For their outstanding work, CUPA-HR has contributed $1,000 to Grand Valley State University.
Culture Team, Utah Valley University (Western Region)
Recognizing a need for a better leadership experience for supervisors on their campus, Utah Valley University’s culture team set out to create a set of standardized leadership competencies that would help ensure that they were hiring the right people, communicating clear expectations during onboarding, providing leadership resources through training, and allowing supervisors to receive feedback. The Leadership Competency Experience, based on six leadership competencies and the university’s core values, established a standardized method of hiring, onboarding, training and feedback processing intended to cultivate effective leadership at all levels. Two years in, the program has made a significant impact on the quality of supervisors being hired and the training and support they receive, and the number of employee relations cases and volume of turnover due to bad supervision have decreased sharply. In fact, it has been so successful that in July 2022 the team released the Staff Competency Experience. For this impressive achievement, CUPA-HR has contributed $1,000 to Utah Valley University.
Higher Ed HR Rock Star Award
This award recognizes an individual in the first five years of a higher education HR career who has already made a significant impact.
Miranda Arjona, Rollins College (Southern Region)
From day one, Miranda Arjona, assistant director of human resources at Rollins College, has impressed colleagues with her positive outlook, creativity, willingness to learn and helpful attitude. Whether she’s building relationships within the HR team or leading a service excellence subcommittee, Miranda is focused on strengthening connections and making a difference. When she was asked to temporarily assist in student affairs to help manage contact tracing and consulting during the pandemic, she did so with her typical positivity and commitment to the task. Just as seamlessly, she transitioned back to her talent management role with the same mindset and tenacity. Her commitment to being a relationship-builder has not only served Rollins but also the higher ed HR community. She has been a speaker at two local HR events, and she is currently serving as president-elect of the CUPA-HR Florida Chapter.
Lyndon Huling, University of California-Davis (Western Region)
Lyndon Huling, manager of leadership recruitment, temporary staffing and diversity services at UC Davis, routinely taps his broad intergenerational and cross-cultural campus connections in his work, making him an exceptionally effective leader. His commitment to reimagining HR and recruitment best practices through a DEI lens shows in the strategies he’s developed and the innovative programs he’s been instrumental in establishing. Among other projects, he has co-sponsored and delivered transformative Race Matters workshops that create a safe space to learn and discuss race at work, which he has shared through presentations at regional and national CUPA-HR conferences. Through his work to create and share resources, Lyndon has demonstrated himself to be a passionate, progressive leader in higher ed HR.
Chapter Excellence Awards
These regional awards recognize chapters that are making a significant impact through their commitment to CUPA-HR and to the higher ed HR community. They work to achieve this through financial responsibility, commitment to CUPA-HR chapter guiding principles, cultivation of strong leadership, and development of creative networking and professional development opportunities.
This year’s Chapter Excellence regional recipients are:
The CUPA-HR Michigan Chapter (Midwest Region)
The CUPA-HR Kentucky Chapter (Southern Region)
The CUPA-HR Northern and Central California Chapter (Western Region)
One thing I have recently become very interested in is the “stay interview”.
Stay interviews are valuable because they reveal which factors keep a current employee engaged and which do not.
Think about it. Why do you decide to remain at your current job? What would entice you to leave? Perhaps a better offer?
This information is invaluable for employers who wish to attract millennials to their workplace.
A typical stay interview approach covers the following:
Stay interviews are informal conversations
What to ask in a stay interview
Ask what would make your employee leave
How managers can stay accountable
Question 1: What do you look forward to each day when you commute to work?
Question 2: What are you learning here, and what do you want to learn?
Question 3: Why do you stay here?
Question 4: When is the last time you thought about leaving us, and what prompted it?
Question 5: What can I do to make your job better for you?
Stay interviews are especially important for rural workplaces, which often struggle to attract and retain employees. Let’s support those employers in any way we can.
Now, I do not have any direct reports at this time, but I have had a wealth of organizational leadership experience throughout my 20 years in higher education. As an employee, I would not want to answer these questions. I would suggest that leaders determine which questions are most appropriate for their teams.
We do not want these “stay interviews” to be the first interview on a short journey to an “exit interview”.
In the comment box, let us know which questions you would add and which questions you would delete.
While new technological advancements grace humankind every day, it is remarkable that some long-gone scientists produced research that is still very much relevant. As new findings about the universe come to light, we see earlier work being proven right. One of the brilliant scientists whose ideas have stood the test of time is Albert Einstein. The studies and theories born from his exceptional mind are still helping today’s scientists understand the world better.
His work therefore holds enduring value and deserves to be studied even today. Maybe that is why it was a great idea for Jeffrey O’Callaghan to write a book about it.
Ever since I picked up “Einstein’s Explanation of the Unexplainable,” I’ve been completely engrossed. While I have always had great admiration for Einstein and his work, I never really understood it on a fundamental level. This book helped me do just that. I must give credit where it’s due, of course: the author does a great job of explaining theories that would otherwise go over my head.
The book is filled with the theories and works of Einstein’s life. But… they’re explained in a way that makes it easy for just about anyone to understand. That means you don’t need to be an expert in the field to grasp the concepts. You can just be curious or want to know more.
For example, let’s look at one of the topics in the book that captured my attention. In the very first article, titled “Do the Laws of Physics Break Down in a Black Hole?”, he discusses one of the most important theories in physics, one that has challenged many a scientist: the theory of general relativity.
Black holes are difficult to study because they are enormously massive and incredibly remote. Our view of their far side is obstructed, and the signals coming from that side are weak. This makes the swirling, extremely hot material pouring into them (the accretion disc) challenging to explain and nearly impossible to observe.
When Einstein first presented his theory of general relativity, it was considered extremely outlandish. One of its predictions is gravitational lensing, a phenomenon in which light is amplified and bent onto a different trajectory than it would otherwise follow, both effects arising from the distortion of space and time produced by massive objects such as black holes.
The English astronomer Arthur Eddington and colleagues made the first recorded observations of this phenomenon during a total solar eclipse in 1919, which propelled Einstein and his untested hypothesis to fame. Normally, stars stay in one spot in the night sky, but during the eclipse, those that lay behind the Sun appeared to have shifted because the Sun’s gravity altered the path their light took to reach Earth.
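To give a sense of the scale involved (a worked figure of my own, not taken from the book), general relativity predicts that light passing a mass $M$ at a closest distance $b$ is deflected by an angle of roughly

$$\alpha \approx \frac{4GM}{c^{2}b}.$$

For light grazing the Sun ($M \approx 1.99 \times 10^{30}$ kg, $b \approx 6.96 \times 10^{8}$ m), this works out to about 1.75 arcseconds, twice the purely Newtonian estimate and precisely the tiny shift Eddington’s team set out to measure.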
In this chapter, he also answers the question posed in the chapter title itself, but I’ll leave that for you to discover.
Effective May 4, U.S. Citizenship and Immigration Services (USCIS) announced a Temporary Final Rule (TFR) to increase the automatic extension period of expiring employment authorization documents (EADs) for certain renewal applicants from 180 days to 540 days.
Specifically, the TFR applies to three groups of applicants in EAD categories currently eligible for the previous 180-day automatic extension of employment authorization and EAD validity. They are as follows:
Renewal applicants whose renewal Form I-765 application remains pending as of May 4, 2022, and whose EAD has not expired or whose current 180-day auto-extension has not yet lapsed.
New renewal applicants who file Form I-765 during the 18-month period following the rule’s publication to avoid a future gap in employment authorization and/or documentation.
Renewal applicants with a pending EAD renewal application whose 180-day automatic extension has lapsed and whose EAD has expired will be granted an additional period of employment authorization and EAD validity beginning on May 4, 2022, and lasting up to 540 days from the expiration date of their EAD.
Categories that are eligible for the lengthened automatic extension can be found here and include refugees and asylees (a3 and a5), spouses of certain H-1B principal non-immigrants with an unexpired I-94 showing H-4 non-immigrant status (c26), and adjustment of status applicants (c9), among others.
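For illustration only, here is a minimal sketch of the date arithmetic behind the rule, assuming the automatic extension is simply counted from the card’s printed expiration date; the function and variable names are mine and nothing here reflects official USCIS guidance.

```python
from datetime import date, timedelta

# Illustrative sketch only: compare the previous 180-day automatic extension
# with the 540-day extension under the Temporary Final Rule, counting from
# the EAD's printed expiration date (an assumption made for this example).
OLD_EXTENSION_DAYS = 180
NEW_EXTENSION_DAYS = 540

def auto_extension_end(ead_expiration: date, extension_days: int) -> date:
    """Return the last day of the automatic extension window for a pending renewal."""
    return ead_expiration + timedelta(days=extension_days)

ead_expires = date(2022, 3, 1)  # hypothetical expiration date
print("Old 180-day window ends:", auto_extension_end(ead_expires, OLD_EXTENSION_DAYS))
print("New 540-day window ends:", auto_extension_end(ead_expires, NEW_EXTENSION_DAYS))
```

Applicants in the covered categories would, under this reading, remain work-authorized through the later date while their renewal is pending.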
The TFR is part of a trio of efforts USCIS announced on March 29, 2022, to address the agency’s major backlogs and crisis-level processing delays. According to USCIS Director Ur M. Jaddou, “as USCIS works to address pending EAD caseloads, the agency has determined that the current 180-day automatic extension for employment authorization is currently insufficient,” and this temporary rule is necessary to “provide those non-citizens otherwise eligible for the automatic extension an opportunity to maintain employment and provide critical support for their families, while avoiding further disruption for U.S. employers.”
CUPA-HR will continue to monitor the implementation of the new auto-extension period and keep members apprised of further developments.