A superior court judge in California ruled last week that adjunct faculty in the Long Beach Community College District should be paid for work they do outside the classroom, including lesson prep, grading and holding office hours, EdSource reported.
The ruling came in response to a lawsuit filed in April 2022 by two part-time professors who argued that they are only paid for time spent teaching in the classroom, and that “failing to compensate adjuncts for out-of-classroom work is a minimum wage violation,” according to the decision by Judge Stuart Rice.
Rice concurred, noting “a myriad of problems” with the district’s argument that minimum wage rules don’t apply, EdSource reported.
Still, Rice stayed the decision pending further proceedings, so it doesn’t go into effect immediately. A similar lawsuit is under way in Sacramento County, brought by adjuncts against 22 community college districts, as well as the state community college system and its Board of Governors.
Adjunct professor John Martin, who chairs the California Part-time Faculty Association and is a plaintiff in the Sacramento case, celebrated the Long Beach ruling.
“It’s spot-on with what we have been saying,” he told EdSource. “We’re not getting paid for outside [the classroom] work. This has been a long time coming.”
Since the public release of ChatGPT in late 2022, artificial intelligence has rocketed from relative obscurity to near ubiquity. The rate of adoption for generative AI tools has outpaced that of personal computers and the internet. There is widespread optimism that, on one hand, AI will generate economic growth, spur innovation and elevate the role of quintessential “human work.” On the other hand, there’s palpable anxiety that AI will disrupt the economy through workforce automation and exacerbate pre-existing inequities.
History shows that education and training are key factors for weathering economic volatility. Yet, it is not entirely clear how postsecondary education providers can equip learners with the resources they need to thrive in an increasingly AI-driven workforce.
Here at the University of Tennessee, Knoxville’s Education Research and Opportunity Center, we are leading a three-year study in partnership with the Tennessee Board of Regents, Advance CTE and the Association for Career and Technical Education to explore this very subject. So far, we have interviewed more than 20 experts in AI, labor economics, career and technical education (CTE), and workforce development. Here are three things you should know.
Generative AI is the present, not the future.
First, AI is not new. ChatGPT continues to captivate attention because of its striking ability to reason, write and speak like a human. Yet, the science of developing machines and systems to mimic human functions has existed for decades. Many people are hearing about machine learning for the first time, but it has powered their Netflix recommendations for years. That said, generative AI does represent a leap forward—a big one. Simple machine learning cannot compose a concerto, write and debug computer code, or generate a grocery list for your family. Generative AI can do all of these things and infinitely more. It certainly feels futuristic, but it is not; AI is the present. And the generative AI of the present is not the AI of tomorrow.
Our interviews with experts have made clear that no one knows where AI will be in 15, 10 or even five years, but the consensus predicts the pace of change will be dramatic. How can students, education providers and employers keep up?
First, we cannot get hung up on specific tools, applications or use cases. The solution is not simply to incorporate ChatGPT in the classroom, though this is a fine starting point. We are in a speeding vehicle; our focus out the window needs to be on the surrounding landscape, not the passing objects. We need education policies that promote organizational efficiency, incentivize innovation and strengthen public-private partnerships. We need educational leadership focused on the processes, infrastructure and resources required to rapidly deploy technologies, break down disciplinary silos and guarantee learner safeguards. We need systemic and sustained professional development and training for incumbent faculty, and we need to reimagine how we prepare and hire new faculty. In short, we need to focus on building more agile, more adaptable, less siloed and less reactive institutions and classrooms because generative AI as we know it is not the future; AI is a harbinger of what is to come.
Focus on skills, not jobs.
It is exceedingly difficult to predict which individual occupations will be impacted—positively or negatively—by AI. We simply cannot know for certain whether surgeons or meat slaughterers are at greatest risk of AI-driven automation. Not only is it guesswork, but it is also flawed thinking, rooted in a misunderstanding of how technology impacts work. Tasks constitute jobs, jobs constitute occupations and occupations constitute industries. Lessons from prior technological innovations tell us that technologies act on tasks directly, and occupations only indirectly. If, for example, the human skill required to complete a number of job-related tasks can be substituted by smart machines, the skill composition of the occupation will change. An entire occupation can be eliminated if a sufficiently high share of the skills can be automated by machines. That said, it is equally true (and likely) that new technologies can shift the skill composition of an occupation in a way that actually enhances the demand for human workers. Shifts in demands for skills within the labor market can even generate entirely new jobs. The point is that the traditional approach to thinking of education in terms of majors, courses and degrees does learners a disservice.
By contrast, our focus needs to be on the skills learners acquire, regardless of discipline or degree pathway. A predictable response to the rise of AI is to funnel more learners into STEM and other supposed AI-ready majors. But our conversations, along with existing research, suggest learners can benefit equally from majoring in liberal studies or art history so long as they are equipped with in-demand skills that cannot (yet) be substituted by smart machines.
We can no longer allow disciplines to “own” certain skills. Every student, across every area of study, must be equipped with both technical and transferable skills. Technical skills allow learners to perform occupation-specific tasks. Transferable skills—such as critical thinking, adaptability and creativity—transcend occupations and technologies and position learners for the “work of the future.” To nurture this transition, we need innovative approaches to packaging and delivering education and training. Institutional leaders can help by equipping faculty with professional development resources and incentives to break out of disciplinary silos. We also need to reconsider current approaches to institutional- and course-level assessment. Accreditors can help by pushing institutions to think beyond traditional metrics of institutional effectiveness.
AI itself is a skill, and one you need to have.
From our conversations with experts, one realization is apparent: There are few corners of the workforce that will be left untouched by AI. Sure, AI is not (yet) able to unclog a drain, take wedding photos, install or repair jet engines, trim trees, or create a nurturing kindergarten classroom environment. But AI will, if it has not already, change the ways in which these jobs are performed. For example, AI-powered software can analyze plumbing system data to predict problems, such as water leaks, before they happen. AI tools can similarly analyze aircraft systems, sensors and maintenance records to predict aircraft maintenance needs before they become hazardous, minimizing aircraft downtime. There is a viable AI use case for every industry now. The key factor for thriving in the AI economy is, therefore, the ability to use AI effectively and critically regardless of one’s occupation or industry.
AI is good, but it is not yet perfect. Jobs still require human oversight. Discerning the quality of sources or synthesizing contradictory viewpoints to make meaningful judgments remain uniquely human skills that cut across all occupations and industries. To thrive in the present and future of work, we must embrace and nurture this skill set while effectively collaborating with AI technology. This effective collaboration itself is a skill.
To usher in this paradigm shift, we need federal- and state-level policymakers to prioritize AI user privacy and safety so tools can be trusted and deployed rapidly to classrooms across the country. It is also imperative that we make a generational investment in applied research in human-AI interaction so we can identify and scale best practices. In the classroom, students need comprehensive exposure to and experience with AI at the beginnings and ends of their programs. It is a valuable skill to work well with others, and in a modern era, it is equally necessary to work well with machines. Paraphrasing Jensen Huang, the CEO of Nvidia: Students are not going to lose their jobs to AI; they will lose their jobs to someone who uses AI.
Cameron Sublett is associate professor and director of the Education Research and Opportunity Center at the University of Tennessee, Knoxville. Lauren Mason is a senior research associate within the Education Research and Opportunity Center.
The Higher Education Inquirer (HEI) champions the rights of academic workers and critically examines the changing landscape of work in higher education, connecting it to broader economic trends.
Focus on Adjunct Faculty and Labor Conditions:
HEI frequently highlights the precarious working conditions of adjunct faculty (grad assistants, contingent instructors, and researchers) who make up a significant portion of the teaching workforce in higher education, especially in online programs. It draws attention to issues such as low pay, lack of job security, limited benefits, and the increasing reliance on contingent labor in academia. This coverage exposes the exploitation of academic workers and its impact on educational quality.
Connection Between Education and Employment:
The Higher Education Inquirer explores the link between higher education and the job market, questioning whether certain programs adequately prepare students for gainful employment. It raises concerns about “hypercredentialism,” where degrees become mere “tickets to be punched” without necessarily leading to meaningful work or sufficient income to repay student loans. HEI investigates the job placement rates of graduates from different types of institutions, particularly for-profit colleges and online programs, and highlights instances where these rates may be misleading or inflated.
Impact of Technology on Work:
The Higher Education Inquirer examines how technology is changing the nature of work, both within and outside of higher education. It discusses the rise of the “gig economy” and the increasing prevalence of precarious employment in the tech sector and related industries. The publication explores the potential for automation and artificial intelligence to displace human workers, raising concerns about job security and the future of work. This technological shift is often driven by corporate interests, which HEI critically examines.
Critique of Corporate Influence and Profit-Driven Models:
HEI is critical of the increasing influence of corporations and profit-driven models in higher education and the broader economy. It argues that the pursuit of profit often comes at the expense of workers’ rights, job quality, and the overall well-being of individuals. This critique extends to “tech bro” culture and its emphasis on maximizing profits and technological advancement, often without regard for the social and economic consequences.
Advocacy for Workers and a More Equitable Economy:
The Higher Education Inquirer advocates for fair labor practices, decent wages, and greater economic equality. It supports efforts to organize workers and challenge exploitative practices in various industries, including higher education. The publication promotes a more human-centered approach to work, emphasizing the importance of meaningful employment, job security, and a balance between work and life.
The Higher Education Inquirer provides significant coverage of labor strikes, particularly those within the higher education sector. HEI offers detailed accounts of specific labor strikes, providing context, timelines, and analysis of the issues at stake.
Focus on the Underlying Issues:
The Higher Education Inquirer goes beyond simply reporting on the events of a strike. It delves into the root causes, such as: low wages and inadequate benefits for academic workers (including graduate students, adjuncts, and other staff); job insecurity and the increasing reliance on contingent labor; issues related to fair contracts, bargaining in good faith, and protection of union activity; and the impact of university policies and management decisions on workers’ rights and well-being.
Highlighting the Voices of Workers:
HEI often includes the perspectives and experiences of the striking workers themselves, giving them a platform to share their stories and explain their reasons for striking. This humanizes the issues and provides a more personal understanding of the impact of labor disputes.
Connecting Strikes to Broader Trends:
The Higher Education Inquirer connects individual strikes to larger trends in higher education and the economy, such as: the increasing corporatization of universities; the rise of precarious employment and the gig economy; the growing gap between executive compensation and worker wages; and the impact of austerity measures and budget cuts on public institutions.
Advocacy for Workers’ Rights and Collective Action:
HEI supports the right of workers to organize and strike for better working conditions. It frames labor strikes as a legitimate and necessary tool for workers to exercise their power and demand fair treatment.
The Higher Education Inquirer views the nature of work as an integral part of the larger discussion about higher education. It recognizes that education is often linked to employment outcomes and that the quality of work available to graduates is a crucial factor in determining the value of a degree. By examining the working conditions of academic staff, the connection between education and employment, and the broader impact of technology and corporate influence on the labor market, the Higher Education Inquirer provides a comprehensive and critical perspective on the nature of work in the 21st century.
“Censorship” built into rapidly growing generative artificial intelligence tool DeepSeek could lead to misinformation seeping into students’ work, scholars fear.
But with students likely to start using the tool for research and help with assignments, concerns have been raised that it is censoring details about topics that are sensitive in China and pushing Communist Party propaganda.
When asked questions centering on the 1989 Tiananmen Square massacre, reports claim that the chat bot replies that it is “not sure how to approach this type of question yet,” before adding, “Let’s chat about math, coding and logic problems instead!”
When asked about the status of Taiwan, it replies, “The Chinese government adheres to the One China principle, and any attempts to split the country are doomed to fail.”
Shushma Patel, pro vice chancellor for artificial intelligence at De Montfort University—said to be the first role of its kind in the U.K.—described DeepSeek as a “black box” that could “significantly” complicate universities’ efforts to tackle misinformation spread by AI.
“DeepSeek is probably very good at some facts—science, mathematics, etc.—but it’s that other element, the human judgment element and the tacit aspect, where it isn’t. And that’s where the key difference is,” she said.
Patel said that students need to have “access to factual information, rather than the politicized, censored propaganda information that may exist with DeepSeek versus other tools,” and said that the development heightens the need for universities to ensure AI literacy among their students.
Thomas Lancaster, principal teaching fellow of computing at Imperial College London, said, “From the universities’ side of things, I think we will be very concerned if potentially biased viewpoints were coming through to students and being treated as facts without any alternative sources or critique or knowledge being there to help the student understand why this is presented in this way.
“It may be that instructors start seeing these controversial ideas—from a U.K. or Western viewpoint—appearing in student essays and student work. And in that situation, I think they have to settle this directly with the student to try and find out what’s going on.”
However, Lancaster said, “All AI chat bots are censored in some way,” which can be for “quite legitimate reasons.” This can include censoring material relating to criminal activity, terrorism or self-harm, or even avoiding offensive language.
He agreed that “the bigger concern” highlighted by DeepSeek was “helping students understand how to use these tools productively and in a way that isn’t considered unfair or academic misconduct.”
This has potential wider ramifications outside of higher education, he added. “It doesn’t only mean that students could hand in work that is incorrect, but it also has a knock-on effect on society if biased information gets out there. It’s similar to the concerns we have about things like fake news or deepfake videos,” he said.
Questions have also been raised over the use of data relating to the tool, since China’s national intelligence laws require enterprises to “support, assist and cooperate with national intelligence efforts.” The chat bot is not available on some app stores in Italy due to data-related concerns.
While Patel conceded there were concerns over DeepSeek and “how that data may be manipulated,” she added, “We don’t know how ChatGPT manipulates that data, either.”
‘Father of Environmental Justice’ Robert Bullard on the Work Behind a Movement (Time)
“This isn’t happenstance,” remarked Gloria Walton, former TIME Earth Award honoree, on the environmental justice movement being recognized as a powerful force.
“It is a reality created by the energy and love of frontline communities and grassroots organizations who have worked for decades,” Walton said, as she presented an Earth Award to the man known as the “Father of Environmental Justice,” Robert Bullard.
Bullard, who was appointed to the White House Environmental Justice Advisory Council in 2021, spoke of the long fight he’s waged for environmental justice in his acceptance speech. He discussed the challenges that he faced in 1979, when he conducted a study in support of the landmark case Bean v. Southwestern Waste Management Corp., the first lawsuit to challenge environmental racism in the United States.
“I am a sociologist and my sociology has taught me that it is not enough to gather the data, do the science and write the books,” he said. “In order for us to solve this kind of crisis, we must do our science, we must gather our data, we must collect our facts, and we must marry those facts with action.”
Job titles, and the names given to organisational roles, are important for the meaning that individuals derive from their work and their engagement with their work.
Yet within many UK universities, and especially the post-92s, the trend is towards new job titles with potentially negative connotations for the job holders in terms of the meaning of their work and their commitment to it and to their institution.
Such universities have been moving away from the conventional “lecturer” titles, adopting the US system of titles. US institutions typically designate their junior (un-tenured) academics as Assistant Professors, with an intermediate grade of Associate Professor and then a full Professor grade. Within the US system, most long serving and effective staff can expect to progress to full Professor by mid-career.
Yet, in this new UK system, only around 15-20 per cent of academics are, or are ever likely to be, full Professors, and many academics will spend their entire careers as Assistant Professors or Associate Professors, retiring with one of these diminutive job titles.
The previous, additive, job titles of Lecturer to Senior Lecturer and then to Principal Lecturer or Reader had meaning outside the university and, crucially, had meaning for the post-holders, giving a sense of achievement and pride as they progressed. Retiring as a Senior or Principal Lecturer was deemed more than acceptable.
Status and self-esteem
It is not hard to imagine the impact that the changes in job titles are having upon mid- and late-career academics who may have little chance of gaining promotion to full professor, perhaps because, quite simply, they draw the line at working “just” 60 hours a week, 50 weeks a year. The impact on status and self-esteem is immense. Imagine explaining to your grandkids that you are, in essence, an assistant to a professor. As an Associate Professor, and particularly in a vocational discipline, one of the authors is often asked, “I can understand you wanting to work part-time for a university, but what’s your main job?” Associate, affiliate, adjunct – these names are pretty much the same thing to outsiders.
Managerially, though, the change from designating academics as Senior Lecturers to Assistant Professors and from Principal Lecturers to Associate Professors is genius. These diminutive job titles confer inferiority – but with the promise that if you keep your nose to the grindstone and keep up the 60+ hour weeks, 50 weeks a year, you might be in with a chance of a decent job title as a Professor. What a fantastic, and completely friction-free, way of turning the performative screw.
The UK university sector is not alone and other public sector organisations have similarly got into a meaning muddle from the naming of their jobs. For example, in the British civil service, a key middle management role is labelled “Grade B2+”, whereas a relatively junior operational role is designated a rather grand sounding “Executive Officer”. And just last autumn, the NHS acknowledged that names do matter, abandoning the designation of “junior” doctor which was used to encompass all medics that sit within the grades below what is known as “consultant”, and which their union described as “misleading and demeaning” – it’s been replaced with “resident” doctor.
Meaningful work
A name gives meaning to workers. It gives status, prestige, and identity. While organisations such as universities that fail to realise the importance of job titles may be able to turn the screw in the short term, extracting ever more work from their junior-sounding Assistant and Associate Professors, they will, in the longer term, surely have an ever more demoralised and demotivated workforce for whom the job has little meaning other than the pay.
And, since pay for university academics in the UK has been so badly eroded in recent decades, job title conventions are a self-inflicted injury – one that risks academics’ engagement and wellbeing and, ultimately, their institutions’ performance.
This audiobook narrated by Kate Harper examines how the financial pressures of paying for college affect the lives and well-being of middle-class families.
The struggle to pay for college is one of the defining features of middle-class life in America today. At kitchen tables all across the country, parents agonize over whether to burden their children with loans or to sacrifice their own financial security by taking out a second mortgage or draining their retirement savings. Indebted takes readers into the homes of middle-class families throughout the nation to reveal the hidden consequences of student debt and the ways that financing college has transformed family life.
Caitlin Zaloom gained the confidence of numerous parents and their college-age children, who talked candidly with her about stressful and intensely personal financial matters that are usually kept private. In this remarkable book, Zaloom describes the profound moral conflicts for parents as they try to honor what they see as their highest parental duty—providing their children with opportunity—and shows how parents and students alike are forced to take on enormous debts and gamble on an investment that might not pay off.
What emerges is a troubling portrait of an American middle class fettered by the “student finance complex”—the bewildering labyrinth of government-sponsored institutions, profit-seeking firms, and university offices that collect information on household earnings and assets, assess family needs, and decide who is eligible for aid and who is not. Superbly written and unflinchingly honest, Indebted breaks through the culture of silence surrounding the student debt crisis, revealing the unspoken costs of sending our kids to college.
What is the state of free speech on college campuses? More students now support shouting down speakers. Several institutions faced external pressure from government entities to punish constitutionally protected speech. And the number of “red light” institutions — those with policies that significantly restrict free speech — rose for the second year in a row, reversing a 15-year trend of decreasing percentages of red light schools, according to FIRE research.
These are just a few of the concerns shared by FIRE’s Lead Counsel for Government Affairs Tyler Coward, who joined lawmakers, alumni groups, students, and stakeholders last week in a discussion on the importance of improving freedom of expression on campus.
Rep. Greg Murphy led the roundtable, along with Rep. Virginia Foxx, Chairwoman of the House Committee on Education and the Workforce, and Rep. Burgess Owens.
But the picture on campus isn’t all bad news. Tyler highlighted some positive developments, including: an increase in “green light” institutions — schools with written policies that do not seriously threaten student expression — along with commitments to institutional neutrality, and “more and more institutions are voluntarily abandoning their requirements that faculty and students submit so-called DEI statements for admission, application, promotion, and tenure review.”
Tyler noted the passage of the Respecting the First Amendment on Campus Act in the House. The bill requires public institutions of higher education to “ensure their free speech policies align with Supreme Court precedent that protects students’ rights — regardless of their ideology or viewpoint.” Furthermore, crucial Title IX litigation has resulted in the Biden rules being enjoined in 26 states due to concerns over due process and free speech.
Lastly, Tyler highlighted areas of concern drawn from FIRE’s surveys of students and faculty on campus, including the impact of student encampment protests on free expression on college campuses.
WATCH VIDEO: FIRE Lead Counsel for Government Affairs Tyler Coward delivers remarks at Rep. Greg Murphy’s 4th Annual Campus Free Speech Roundtable on Dec. 11, 2024.
Students across the political spectrum are facing backlash or threats of censorship for voicing their opinions. Jasmyn Jordan, an undergraduate student at University of Iowa and the National Chairwoman of Young Americans for Freedom, shared personal experiences of censorship YAF members have faced on campus due to their political beliefs. Gabby Dankanich, also from YAF, provided additional examples, including the Clovis Community College case. At Clovis, the administration ordered the removal of flyers YAF students posted citing a policy against “inappropriate or offensive language or themes.” (FIRE helped secure a permanent injunction on behalf of the students. Additionally, Clovis’s community college district will have to pay the students a total of $330,000 in damages and attorney’s fees.)
Conservative students aren’t the only ones facing challenges in expressing their ideas on campus. Kenny Xu, executive director of Davidsonians for Free Speech and Discourse, emphasized that free speech is not a partisan issue. Citing FIRE data, he noted that 70% of students feel at least somewhat uncomfortable publicly disagreeing with a professor in class. “I can assure you that 70% of students are not conservatives,” he remarked. Kyle Beltramini, from the American Council of Trustees and Alumni, reinforced this point. Sharing findings from ACTA’s own research, he emphasized that “this is not a problem faced by a single group of students but rather an experience shared across the ideological spectrum.”
The roundtable identified faculty as a critical part of the solution, though they acknowledged faculty members often fear speaking up. FIRE’s recent survey of over 6,000 faculty across 55 U.S. colleges and universities supports this claim. According to the results, “35% of faculty say they recently toned down their writing for fear of controversy, compared to 9% who said the same during the McCarthy era.”
While this data underscores the challenges faculty face, it also points to a broader issue within higher education. Institutions, Tyler said, have a dual obligation to “ensure that speech rights are protected” and that “students remain free from harassment based on a protected characteristic.” Institutions did not get this balance right this year. But, ACTA’s Kyle Beltramini noted the positive development that these longstanding issues have finally migrated into the public consciousness: “By and large, policy makers and the public have been unaware of the vast censorial machines that colleges and universities have been building up to police free speech, enforce censorship, and maintain ideological hegemony in the name of protecting and supporting their students,” he stated. This moment presents an opportunity to provide constructive feedback to institutions to hopefully address these shortcomings.
Alumni are also speaking up, and at the roundtable they shared their perspectives on promoting free speech and intellectual diversity in higher education. Among them was Tom Neale, UVA alumnus and president of The Jefferson Council and the Alumni Free Speech Alliance, who highlighted the importance of connecting with alumni from institutions like Cornell, Davidson, and Princeton, since they’re “all united by their common goal to restore true intellectual diversity and civil discourse in American higher-ed.”
Other participants at the roundtable included members of Speech First, and Princetonians for Free Speech.
So what can be done? Participants proposed several solutions, including passing legislation that prohibits the use of political litmus tests in college admissions, hiring, and promotion decisions. They also suggested integrating First Amendment education into student orientation programs to ensure incoming undergraduates understand their rights and responsibilities on campus. Additionally, they emphasized the importance of developing programs that teach students how to engage constructively in disagreements — rather than resorting to censorship — and to promote curiosity, dissent, talking across lines of difference, and an overall culture of free expression on campus.
FIRE thanks Rep. Murphy for the opportunity to contribute to this vital conversation. We remain committed to working with legislators who share our dedication to fostering a society that values free inquiry and expression.
You can watch the roundtable on Rep. Murphy’s YouTube channel.
A friend recently asked me for advice on a problem he was wrestling with related to an issue he was having with a 1EdTech interoperability standard. It was the same old problem of a standard not quite getting true interoperability because people implement it differently. I suggested he try using a generative AI tool to fix his problem. (I’ll explain how shortly.)
I don’t know if my idea will work yet—he promised to let me know once he tries it—but the idea got me thinking. Generative AI probably will change EdTech integration, interoperability, and the impact that interoperability standards can have on learning design. These changes, in turn, impact the roles of developers, standards bodies, and learning designers.
In this post, I’ll provide a series of increasingly ambitious use cases related to the EdTech interoperability work of 1EdTech (formerly known as IMS Global). In each case, I’ll explore how generative AI could impact similar work going forward, how it changes the purpose of interoperability standards-making, and how it impacts the jobs and skills of various people whose work is touched by the standards in one way or another.
Generative AI as duct tape: fixing QTI
1EdTech’s Question and Test Interoperability (QTI) standard is one of its oldest standards that’s still widely used. The earliest version on the 1EdTech website dates back to 2002, while the most recent version was released in 2022. You can guess from the name what it’s supposed to do. If you have a test, or a test question bank, in one LMS, QTI is supposed to let you migrate it into another without copying and pasting. It’s an import/export standard.
It never worked well. Everybody has their own interpretation of the standard, which means that importing somebody else’s QTI export is never seamless. When speaking recently about QTI to a friend at an LMS company, I commented that it only works about 80% of the time. My friend replied, “I think you’re being generous. It probably only works about 40% of the time.” 1EdTech has learned many lessons about achieving consistent interoperability in the decades since QTI was created. But it’s hard to fix a complex legacy standard like this one.
Meanwhile, the friend I mentioned at the top of the post asked me recently about practical advice for dealing with this state of affairs. His organization imports a lot of QTI question banks from multiple sources. So his team spends a lot of time debugging those imports. Is there an easier way?
I thought about it.
“Your developers probably have many examples that they’ve fixed by hand by now. They know the patterns. Take a handful of before-and-after examples. Embed them into a prompt in a generative AI that’s good at software code, like HuggingChat.” [As I was drafting this post, OpenAI announced that ChatGPT now has a code interpreter.] “Then give the generative AI a novel input and see if it produces the correct output.”
Generative AI models are good at pattern matching. The differences in QTI implementations are likely to have patterns to them that an LLM can detect, even if those differences change over time (because, for example, one vendor’s QTI implementation changed over time).
In fact, pattern matching on this scale could work very well with a smaller generative AI model. We’re used to talking about ChatGPT, Google Bard, and other big-name systems whose underlying models have hundreds of billions of parameters. Think of parameters as computing legos. One major reason that ChatGPT is so impressive is that it uses a lot of computing legos, which makes it expensive, slow, and computationally intensive. But if your goal is to match patterns against a set of relatively well-structured texts such as QTI files, you could probably train a much smaller model than ChatGPT to reliably translate between implementations for you. Smaller models, like Vicuna, have only about 7 billion parameters. That may sound like a lot, but it’s small enough to run on a personal computer (or possibly even a mobile phone). Think about it this way: the QTI task we’re trying to solve for is roughly equivalent in complexity to the spell-checking and one-word type-ahead functions that you have on your phone today. A generative AI model for fixing QTI imports could probably be trained for a few hundred dollars and run for pennies.
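To make the idea concrete, here is a minimal sketch of the few-shot approach described above, written against the OpenAI Python client. The example pairs, model name, and prompt wording are all placeholders, not a tested pipeline; any code-capable model, including a small self-hosted one, could stand in.

```python
# Minimal sketch: few-shot prompting to translate a vendor-specific QTI dialect
# into the canonical form an importer expects. The example pairs and model name
# are placeholders.
from openai import OpenAI

client = OpenAI()

# Before/after pairs the developers have already fixed by hand.
EXAMPLES = [
    ("<assessmentItem>…vendor A's quirky markup…</assessmentItem>",
     "<assessmentItem>…cleaned-up, importable markup…</assessmentItem>"),
    # …more pairs, ideally one per known quirk…
]

def build_messages(new_item_xml: str) -> list[dict]:
    """Assemble a few-shot chat prompt from the hand-fixed examples."""
    messages = [{
        "role": "system",
        "content": ("You convert QTI items from a vendor-specific dialect into our "
                    "canonical QTI format. Preserve all question content; change only "
                    "structure and attributes, following the examples."),
    }]
    for before, after in EXAMPLES:
        messages.append({"role": "user", "content": before})
        messages.append({"role": "assistant", "content": after})
    messages.append({"role": "user", "content": new_item_xml})
    return messages

def convert(new_item_xml: str) -> str:
    """Ask the model to translate one novel QTI item."""
    response = client.chat.completions.create(
        model="gpt-4o",   # placeholder; a small fine-tuned model could serve here too
        temperature=0,    # translation, not creativity
        messages=build_messages(new_item_xml),
    )
    return response.choices[0].message.content
```

In practice you would run something like this as the kind of batch job described next, validate each converted item against a QTI schema, and route anything that fails validation to a human for the same manual fix the team does today.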
This use case has some other desirable characteristics. First, it doesn’t have to work at high volume in real time. It can be a batch process. Throw the dirty dishes in the dishwasher, turn it on, and take out the clean dishes when the machine shuts off. Second, the task has no significant security risks and wouldn’t expose any personally identifiable information. Third, nothing terrible happens if the thing gets a conversion wrong every now and then. Maybe the organization would have to fix 5% of the conversions rather than 100%. And overall, it should be relatively cheap. Maybe not as cheap as running an old-fashioned deterministic program that’s optimized for efficiency. But maybe cheap enough to be worth it. Particularly if the organization has to keep adding new and different QTI implementation imports. It might be easier and faster to adjust the model with fine-tuning or prompting than it would be to revise a set of if/then statements in a traditional program.
How would the need for skilled programmers change? Somebody would still need to understand how the QTI mappings work well enough to keep the generative AI humming along. And somebody would have to know how to take care of the AI itself (although that process is getting easier every day, especially for this kind of a use case). The repetitive work they are doing now would be replaced by the software over time, freeing up the human brains for other things that human brains are particularly good at. In other words, you can’t get rid of your programmer but you can have that person engaging in more challenging, high-value work than import bug whack-a-mole.
How does it change the standards-making process? In the short term, I’d argue that 1EdTech should absolutely try to build an open-source generative AI of the type I’m describing rather than trying to fix QTI, something it has not managed to do in over 20 years. This strikes me as a far shorter path to QTI’s original purpose, which is to move question banks from one system to another.
This conclusion, in turn, leads to a larger question: Do we need interoperability standards bodies in the age of AI?
My answer is a resounding “yes.”
Going a step further: software integration
QTI provides data portability but not integration. It’s an import/export format. The fact that Google Docs can open up a document exported from Microsoft Word doesn’t mean that the two programs are integrated in any meaningful way.
So let’s consider Learning Tools Interoperability (LTI). LTI was quietly revolutionary. Before it existed, any company building a specialized educational tool would have to write separate integrations for every LMS.
The nature of education is that it’s filled with what folks in the software industry would disparagingly call “point solutions.” If you’re teaching students how to program in Python, you need a Python programming environment. But that tool won’t help a chemistry professor who really needs virtual labs and molecular modeling tools. And none of these tools are helpful for somebody teaching English composition. There simply isn’t a single generic learning environment that will work well for teaching all subjects. None of these tools will ever sell enough to make anybody rich.
Therefore, the companies that make these necessary niche teaching tools will tend to be small. In the early days of the LMS, they couldn’t afford to write a separate integration for every LMS. Which meant that not many specialized learning tools were created. As small as these companies’ target markets already were, many of them couldn’t afford to limit themselves to the subset of, say, chemistry professors whose universities happened to use Blackboard. It didn’t make economic sense.
LTI changed all that. Any learning tool provider could write one integration and have its product work with every LMS. Today, 1EdTech lists 240 products that are officially certified as supporting the LTI standard. Many more support the standard but are not certified.
Would LTI have been created in a world in which generative AI existed? Maybe not. The most straightforward analogy is Zapier, which connects different software systems via their APIs. ChatGPT and its ilk could act as an instant Zapier. A programmer using generative AI could take the API documentation of both systems, ask the generative AI to write an integration for a particular purpose, and then ask the same AI for help with any debugging.
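As a rough illustration of that “instant Zapier” workflow, the sketch below feeds two API documents and a goal statement to a code-capable model and prints a first draft of the glue code. The file names, the integration goal, and the model name are hypothetical assumptions, not part of any 1EdTech specification, and the output is only a draft for a programmer to review.

```python
# Sketch: asking a code-capable model to draft LMS/tool glue code from API docs.
# The documentation files and integration goal are hypothetical; the result is a
# first draft that a human programmer still has to review, test, and harden.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

lms_api_doc = Path("lms_gradebook_api.md").read_text()    # hypothetical LMS API doc
tool_api_doc = Path("quiz_tool_api.md").read_text()       # hypothetical tool API doc

prompt = (
    "You are writing server-side integration code.\n\n"
    f"LMS gradebook API documentation:\n{lms_api_doc}\n\n"
    f"Quiz tool API documentation:\n{tool_api_doc}\n\n"
    "Write a small Python service that receives a completed-quiz event from the quiz "
    "tool and posts the score to the right student and gradebook column in the LMS. "
    "Include error handling and log every grade write."
)

draft = client.chat.completions.create(
    model="gpt-4o",   # placeholder
    messages=[{"role": "user", "content": prompt}],
)
print(draft.choices[0].message.content)   # draft glue code for human review
```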
Again, notice that one still needs a programmer. Somebody needs to be able to read the APIs, understand the goals, think about the trade-offs, give the AI clear instructions, and check the finished program. The engineering skills are still necessary. But the work of actually writing the code is greatly reduced. Maybe by enough that generative AI would have made LTI unnecessary.
But probably not. LTI connections pass sensitive student identity and grade information back and forth. Those connections have to be secure and reliable. The IT department has legal obligations, not to mention user expectations, that a well-tested standard helps alleviate (though not eliminate). On top of that, it’s just a bad idea to have bits of glue code spread here, there, and everywhere, regardless of whether a human or a machine writes it. Somebody—an architect—needs to look at the big picture. They need to think about maintainability, performance, security, data management, and a host of other concerns. There is value in having a single integration standard that has been widely vetted and follows a pattern of practices that IT managers can handle the same way across a wide range of product integrations.
At some point, if a software integration fails to pass student grades to the registrar or leaks personal data, a human is responsible. We’re not close to the point where we can turn over ethical or even intellectual responsibility for those challenges to a machine. If we’re not careful, generative AI will simply write spaghetti code much faster than in the old days.
The social element of knowledge work
More broadly, there are two major value components to the technical interoperability standards process. The first is obvious: technical interoperability. It’s the software. The second is where the deeper value lies. It’s in the conversation that leads to the software. I’ve participated in a 1EdTech specification working group. When the process went well, we learned from each other. Each person at that table brought a different set of experiences to an unsolved problem. In my case, the specification we were working on sent grade rosters from the SIS to the LMS and final grades back from the LMS to the SIS. It sounds simple. It isn’t. We each brought different experiences and lessons learned regarding many aspects of the problem, from how names are represented in different cultures to how SIS and LMS users think differently in ways that impact interoperability. In the short term, a standard is always a compromise. Each creator of a software system has to make adjustments that accommodate the many ways in which others thought differently when they built their own systems. But if the process works right, everybody goes home thinking a little differently about how their systems could be built better for everybody’s benefit. In the longer term, the systems we continue to build over time reflect the lessons we learn from each other.
Generative AI could make software integration easier. But without the conversation of the standards-making process, we would lose the opportunity to learn from each other. And if AI can reduce the time and cost of the former, then maybe participants in the standards-making effort will spend more time and energy on the latter. The process would have to be rejiggered somewhat. But at least in some cases, participants wouldn’t have to wait until the standard was finalized before they started working on implementing it. When the cost of implementation is low enough and the speed is fast enough, the process can become more of an iterative hackathon. Participants can build working prototypes more quickly. They would still have to go back to their respective organizations and do the hard work of thinking through the implications, finding problems or trade-offs and, eventually, hardening the code. But at least in some cases, parts of the standards-making process could be more fluid and rapidly iterative than they have been. We could learn from each other faster.
This same principle could apply inside any organization or partnership in which different groups are building different software components that need to work together. Actual knowledge of the code will still be important to check and improve the work of the AI in some cases and write code in others. Generative AI is not ready to replace high-quality engineers yet. But even as it improves, humans will still be needed.
John Seely Brown famously traced the drop in Xerox copier repair quality to a change in the lunch schedule for its repair technicians. It turns out that technicians learn a lot from solving real problems in the field and then sharing war stories with each other. When the company changed the schedule so that technicians had less time together, repair effectiveness dropped noticeably. I don’t know if a software program was used to optimize the scheduling but one could easily imagine that being the case. Algorithms are good at concrete problems like optimizing complex schedules. On the other hand, they have no visibility into what happens at lunch or around the coffee pot. Nobody writes those stories down. They can’t be ingested and processed by a large language model. Nor can they be put together in novel ways by quirky human minds to come up with new insights.
That’s true in the craft of copier repair and definitely true in the craft of software engineering. I can tell you from direct experience that interoperability standards-making is much the same. We couldn’t solve the seemingly simple problem of getting the SIS to talk to the LMS until we realized that registrars and academics think differently about what a “class” or a “course” is. We figured that out by talking with each other and with our customers.
At its heart, standards-making is a social process. It’s a group of people who have been working separately on solving similar problems coming together to develop a common solution. They do this because they’ve decided that the cost/benefit ratio of working together is better than the ratio they’ve achieved when working separately. AI lowers the costs of some work. But it doesn’t yet provide an alternative to that social interaction. If anything, it potentially lowers some of the costs of collaboration by making experimentation and iteration cheaper—if and only if the standards-making participants embrace and deliberately experiment with that change.
That’s especially true the more 1EdTech tries to have a direct role in what it refers to as “learning impact.”
The knowledge that’s not reflected in our words
In 2019, I was invited to give a talk at a 1EdTech summit, which I published a version of under the title “Pedagogical Intent and Designing for Inquiry.” Generative AI was nowhere on the scene at the time. But machine learning was. At the same time, long-running disappointment and disillusionment with learning analytics—analytics that actually measure students’ progress as they are learning—was palpable.
I opened my talk by speculating about how machine learning could have helped with SIS/LMS integration, much as I speculated earlier in the post about how generative AI might help with QTI:
Now, today, we would have a different possible way of solving that particular interoperability problem than the one we came up with over a decade ago. We could take a large data set of roster information exported from the SIS, both before and after the IT professionals massaged it for import into the LMS, and aim a machine learning algorithm at it. We then could use that algorithm as a translator. Could we solve such an interoperability problem this way? I think that we probably could. I would have been a weaker product manager had we done it that way, because I wouldn’t have gone through the learning experience that resulted from the conversations we had to develop the specification. As a general principle, I think we need to be wary of machine learning applications in which the machines are the only ones doing the learning. That said, we could have probably solved such a problem this way and might have been able to do it in a lot less time than it took for the humans to work it out.
I will argue that today’s EdTech interoperability challenges are different. That if we want to design interoperability for the purposes of insight into the teaching and learning process, then we cannot simply use clever algorithms to magically draw insights from the data, like a dehumidifier extracting water from thin air. Because the water isn’t there to be extracted. The insights we seek will not be anywhere in the data unless we make a conscious effort to put them there through design of our applications. In order to get real teaching and learning insights, we need to understand the intent of the students. And in order to understand that, we need insight into the learning design. We need to understand pedagogical intent.
That new need, in turn, will require new approaches in interoperability standards-making. As hard as the challenges of the last decade have been, the challenges of the next one are much harder. They will require different people at the table having different conversations.
The core problem is that the key element for interpreting both student progress and the effectiveness of digital learning experiences—pedagogical intent—is not encoded in most systems. No matter how big your data set is, it doesn’t help you if the data you need aren’t in it. For this reason, I argued, fancy machine learning tricks aren’t going to give us shortcuts.
That problem is the same, and perhaps even worse in some ways, with generative AI. All ChatGPT knows is what it’s read on the internet. And while it’s made progress in specific areas at reading between the lines, the fact is that important knowledge, including knowledge about applied learning design, simply is extremely scarce in the data it can access and even in the data living in our learning systems that it can’t access.
The point of my talk was that interoperability standards could help by supplying critical metadata—context—if only the standards makers set that as their purpose, rather than simply making sure that quiz questions end up in the right place when migrating from one LMS to another.
I chose to open the talk by highlighting the ambiguity of language that enables us to make art. I chose this passage from Shakespeare’s final masterpiece, The Tempest:
O wonder!
How many goodly creatures are there here!
How beauteous mankind is! O brave new world
That has such people in’t!
William Shakespeare, The Tempest
It’s only four lines. And yet it is packed with double entendres and the ambiguity that gives actors room to make art:
Here’s the scene: Miranda, the speaker, is a young woman who has lived her entire life on an island with nobody but her father and a strange creature who she may think of as a brother, a friend, or a pet. One day, a ship becomes grounded on the shore of the island. And out of it comes, literally, a handsome prince, followed by a collection of strange (and presumably virile) sailors. It is this sight that prompts Miranda’s exclamation.
As with much of Shakespeare, there are multiple possible interpretations of her words, at least one of which is off-color. Miranda could be commenting on the hunka hunka manhood walking toward her.
“How beauteous mankind is!”
Or. She could be commenting on how her entire world has just shifted on its axis. Until that moment, she knew of only two other people in all of existence, each of whom she had known her entire life and with each of whom she had a relationship that she understood so well that she took it for granted. Suddenly, there was literally a whole world of possible people and possible relationships that she had never considered before that moment.
“O brave new world / That has such people in’t”
So what is on Miranda’s mind when she speaks these lines? Is it lust? Wonder? Some combination of the two? Something else?
The text alone cannot tell us. The meaning is underdetermined by the data. Only with the metadata supplied by the actor (or the reader) can we arrive at a useful interpretation. That generative ambiguity is one of the aspects of Shakespeare’s work that makes it art.
But Miranda is a fictional character. There is no fact of the matter about what she is thinking. When we are trying to understand the mental state of a real-life human learner, then making up our own answer because the data are not dispositive is not OK. As educators, we have a moral responsibility to understand a real-life Miranda having a real-life learning experience so that we can support her on her journey.
Generative AI like ChatGPT can answer questions about different ways to interpret Miranda’s lines in the play because humans have written about this question and made their answers available on the internet. If you give the chatbot an unpublished piece of poetry and ask it for an interpretation, its answers are not likely to be reliably sophisticated. While larger models are getting better at reading between the lines—a topic for a future blog post—they are not remotely as good as humans are at this yet.
Making the implicit explicit
This limitation of language interpretation is central to the challenge of applying generative AI to learning design. ChatGPT has reignited fantasies about robot tutors in the sky. Unfortunately, we’re not giving the AI the critical information it needs to design effective learning experiences:
The challenge that we face as educators is that learning, which happens completely inside the heads of the learners, is invisible. We can not observe it directly. Accordingly, there are no direct constructs that represent it in the data. This isn’t a data science problem. It’s an education problem. The learning that is or isn’t happening in the students’ heads is invisible even in a face-to-face classroom. And the indirect traces we see of it are often highly ambiguous. Did the student correctly solve the physics problem because she understands the forces involved? Because she memorized a formula and recognized a situation in which it should be applied? Because she guessed right? The instructor can’t know the answer to this question unless she has designed a series of assessments that can disambiguate the student’s internal mental state.
In turn, if we want to find traces of the student’s learning (or lack thereof) in the data, we must understand the instructor’s pedagogical intent that motivates her learning design. What competency is the assessment question that the student answered incorrectly intended to assess? Is the question intended to be a formative assessment? Or summative? If it’s formative, is it a pre-test, where the instructor is trying to discover what the student knows before the lesson begins? Is it a check for understanding? A learn-by-doing exercise? Or maybe something that’s a little more complex to define because it’s embedded in a simulation? The answers to these questions can radically change the meaning we assign to a student’s incorrect answer to the assessment question. We can’t fully and confidently interpret what her answer means in terms of her learning progress without understanding the pedagogical intent of the assessment design.
But it’s very easy to pretend that we understand what the students’ answers mean. I could have chosen any one of many Shakespeare quotes to open this section, but the one I picked happens to be the very one from which Aldous Huxley derived the title of his dystopian novel Brave New World. In that story, intent was flattened through drugs, peer pressure, and conditioning. It was reduced to a small set of possible reactions that were useful in running the machine of society. Miranda’s words appear in the book in a bitterly ironic fashion from the mouth of the character John, a “savage” who has grown up outside of societal conditioning.
We can easily develop “analytics” that tell us whether students consistently answer assessment questions correctly. And we can pretend that “correct answer analytics” are equivalent to “learning analytics.” But they are not. If our educational technology is going to enable a rich and authentic vision of learning rather than a dystopian reductivist parody of it, then our learning analytics must capture the nuances of pedagogical intent rather than flattening it.
A professor knows that her students tend to develop a common misconception that causes them to make practical mistakes when applying their knowledge. She very carefully crafts her course to address this misconception. She writes the content to address it. In her tests, she provides wrong answer choices—a.k.a. “distractors”—that students would choose if they had the misconception. She can tell, both individually and collectively, whether her students are getting stuck on the misconception by how often they pick the particular distractor that fits with their mistaken understanding. Then she writes feedback that the students see when they choose that particular wrong answer. She crafts it so that it doesn’t give away the correct answer but does encourage students to rethink their mistakes.
Imagine if all this information were encoded in the software. The hierarchy would look something like this:
Here is learning objective (or competency) 1
    Here is content about learning objective 1
    Here is assessment question A about learning objective 1.
        Here is distractor c in assessment question A. Distractor c addresses misconception alpha.
            Here is feedback to distractor c. It is written specifically to help students rethink misconception alpha without giving away the answer to question A. This is critical because if we simply tell the student the answer to question A then we can’t get good data about the likelihood that the student has mastered learning objective 1.
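To make that concrete, here is a minimal sketch of what such an encoding might look like as plain data. The structure and field names are my own invention for illustration; they do not correspond to any existing 1EdTech specification.

```python
# A minimal, illustrative encoding of pedagogical intent. Field names are
# invented for this example; they are not drawn from any 1EdTech standard.
course_design = {
    "learning_objectives": [
        {
            "id": "LO-1",
            "statement": "Learning objective (or competency) 1",
            "content": ["content-unit-1"],  # content written to teach LO-1
            "assessments": [
                {
                    "id": "Q-A",
                    "purpose": "formative-check-for-understanding",
                    "distractors": [
                        {
                            "choice": "c",
                            "misconception": "alpha",  # why a student would pick it
                            "feedback": (
                                "A hint that prompts the student to rethink "
                                "misconception alpha without giving away the "
                                "answer to question A"
                            ),
                        }
                    ],
                }
            ],
        }
    ]
}
```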
All of that information is in the learning designer’s head and, somehow, implicitly embedded in the content in subtle details of the writing. But good luck teasing it out by just reading the textbook if you aren’t an experienced teacher of the subject yourself.
What if these relationships were explicit in the digital text? For individual students, we could tell which ones were getting stuck on a specific misconception. For whole courses, we could identify the spots that are causing significant numbers of students to get stuck on a learning objective or competency. And if that particular sticking point causes students to be more likely to fail either that course or a later course that relies on a correct understanding of a concept, then we could help more students persist, pass, stay in school, and graduate.
That’s how learning analytics can work if learning designers (or learning engineers) have tools that explicitly encode pedagogical intent into a machine-readable format. They can use machine learning to help them identify and smooth over tough spots where students tend to get stuck and fall behind. They can find the clues that help them identify hidden sticking points and adjust the learning experience to help students navigate those rough spots. We know this can work because, as I wrote about in 2012, Carnegie Mellon University (among others) has been refining this science and craft for decades.
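As a small illustration of what that kind of tooling could enable, here is the sort of simple tally that becomes possible once distractors are tagged with the misconceptions they represent. The response log and the mapping below are made up for the example.

```python
from collections import Counter

# Hypothetical response log: (student_id, question_id, answer_choice).
responses = [
    ("s1", "Q-A", "c"),
    ("s2", "Q-A", "b"),
    ("s3", "Q-A", "c"),
    ("s4", "Q-A", "a"),
]

# Design metadata from the encoding above: which wrong answers signal which misconception.
distractor_misconceptions = {("Q-A", "c"): "alpha"}

# Tally how often each misconception shows up in the response stream.
misconception_counts = Counter(
    distractor_misconceptions[(question, answer)]
    for _student, question, answer in responses
    if (question, answer) in distractor_misconceptions
)

attempts = sum(1 for _s, question, _a in responses if question == "Q-A")
for misconception, count in misconception_counts.items():
    print(f"Misconception {misconception}: {count}/{attempts} attempts ({count / attempts:.0%})")
```

The arithmetic is trivial; the point is that the tally is only meaningful because the designer’s intent behind distractor c was captured in the data.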
Generative AI adds an interesting twist. The challenge with all this encoding of pedagogical intent is that it’s labor-intensive. Learning designers often don’t have time to focus on the work required to identify and make small but high-value improvements because they’re too busy getting the basics done. But generative AI that creates learning experiences modeled after the pedagogical metadata in the educational content it is trained on could provide a leg up. It could substantially speed up the work of writing the first-draft content so that designers can focus on the high-value improvements that humans are still better at than machines.
Realistically, for example, generative AI is not likely to know the particular common misconceptions that block students from mastering a competency. Or how to probe for and remediate those misconceptions. But if it were trained on the right models, it could generate good first-draft content in a standards-based metadata format that could be imported into a learning platform. The format would have explicit placeholders for those critical probes and hints. Human experts, supported by machine learning, could focus their time on finding and remediating these sticking points in the learning process. Their improvements would be encoded with metadata, providing the AI with better examples of what effective educational content looks like. Which would enable the AI to generate better first-draft content.
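Here is a rough sketch of what that division of labor could look like in code. The prompt, the placeholder convention, and `call_llm` are all stand-ins I’ve invented for illustration; they are not an existing standard or API.

```python
# Sketch: ask a model for first-draft content in a structured format that leaves
# explicit placeholders for the parts human experts still do best.
PROMPT_TEMPLATE = """
Write first-draft instructional content and one assessment question for the
learning objective below. Return JSON with these fields:
  - content: explanatory text for the objective
  - question: an assessment item aligned to the objective
  - distractors: plausible wrong answers, each with a "misconception" field set
    to "TO BE FILLED BY HUMAN EXPERT"
  - feedback: per-distractor feedback, also left for the human expert
Learning objective: {objective}
"""

def draft_lesson(objective: str, call_llm) -> str:
    """Return machine-readable first-draft content for a human designer to refine."""
    return call_llm(PROMPT_TEMPLATE.format(objective=objective))
```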
1EdTech could help bring about such a world through standards-making. But they’d have to think about the purpose of interoperability differently, bring different people to the table, and run a different kind of process.
O brave new world that has such skilled people in’t
I spoke recently to the head of product development for an AI-related infrastructure company. His product could enable me to eliminate hallucinations while maintaining references and links to original source materials, both of which would be important in generating educational content. I explained a more elaborate version of the basic idea in the previous section of this post.
“That’s a great idea,” he said. “I can think of a huge number of applications. My last job was at Google. The training was terrible.”
Google. The company that’s promoting the heck out of their free AI classes. The one that’s going to “disrupt the college degree” with their certificate programs. The one that everybody holds up as leading the way past traditional education and toward skills-based education.
Their training is “terrible.”
Yes. Of course it is. Because everybody’s training is terrible. Their learning designers have the same problem I described academic learning designers as having in the previous section. Too much to develop, too little time. Only much, much worse. Because they have far fewer course design experts (if you count faculty as course design experts). Those people are the first to get cut. And EdTech in the corporate space is generally even worse than academic EdTech. Worst of all? Nobody knows what anybody knows or what anybody needs to know.
Academia and standards bodies such as 1EdTech, funded by corporate foundations, are pouring incredible amounts of time, energy, and money into building a data pipeline for tracking skills. Skill taxonomies move from repositories to learning environments, where evidence of student mastery is attached to those skills in the form of badges or comprehensive learner records. Which are then sent off to repositories and wallets.
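To picture what is supposed to flow through that pipeline, here is an invented example of a skill-evidence record. The field names are not from the actual Open Badges or Comprehensive Learner Record specifications; they just show the general shape: a skill drawn from a taxonomy, evidence of mastery, and a destination.

```python
# Illustrative only; not an actual 1EdTech schema.
skill_evidence_record = {
    "skill": {
        "taxonomy": "example-skill-framework",
        "id": "SKILL-1234",
        "label": "Troubleshoot a failed deployment",
    },
    "learner_id": "learner-42",
    "evidence": {
        "type": "badge",
        "issuer": "Example Learning Environment",
        "basis": "capstone-project",
        "issued_on": "2024-01-15",
    },
    "destination": "learner-wallet",
}
```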
The problem is, pipelines are supposed to connect to endpoints. They move something valuable from the place where it is found to the place where it is needed. Many valuable skills are not well documented if they are documented at all. They appear quickly and change all the time. The field of knowledge management has largely failed to capture this information in a timely and useful way after decades of trying. And “knowledge” management has tended to focus on facts, which are easier to track than skills.
In other words, the biggest challenge that folks interested in job skills face is not an ocean of well-understood skill information that needs to be organized but rather a problem of non-consumption. There isn’t enough real-world, real-time skill information flowing into the pipeline, and there are few people with real uses for it on the other side. Almost nobody in any company turns to their L&D departments to solve the kinds of skills problems that help people become more productive and advance in their careers. Certainly not at scale.
But the raw materials for solving this problem exist. A CEO of HP once famously noted that the company knows a lot. It just doesn’t know what it knows.
Knowledge workers do record new and important work-related information, even if it’s in the form of notes and rough documents. Increasingly, we have meeting transcripts thanks to videoconferencing and AI speech-to-text capabilities. These artifacts could be used to train a large language model on skills as they are emerging and needed. If we could dramatically lower the cost and time required to create just-in-time, just-enough skills training then the pipeline of skills taxonomies and skill tracking would become a lot more useful. And we’d learn a lot about how it needs to be designed because we’d have many more real-world applications.
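A deliberately naive sketch of that first step might look like the following: mining transcripts for mentions of skills to build up training examples. The taxonomy terms and the string matching are placeholders; a real pipeline would need entity linking, deduplication, and human review.

```python
# Naive illustration of turning work artifacts into skill-tagged training examples.
skill_taxonomy = {"kubernetes", "prompt engineering", "vendor negotiation"}

transcripts = [
    "We spent most of the standup debugging the Kubernetes rollout again.",
    "Marketing wants help with prompt engineering for the new campaign copy.",
]

training_examples = []
for text in transcripts:
    lowered = text.lower()
    mentioned = [skill for skill in skill_taxonomy if skill in lowered]
    if mentioned:
        training_examples.append({"text": text, "skills": mentioned})

for example in training_examples:
    print(example)
```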
The first pipeline we need is from skill discovery to learning content production. It’s a huge one, we’ve known about it for many decades, and we’ve made very little progress on it. Groups like 1EdTech could help us to finally make progress. But they’d have to rethink the role of interoperability standards in terms of the purpose and value of data, particularly in an AI-fueled world. This, in turn, would not only help match worker skills with labor market needs more quickly and efficiently but also create a huge industry of AI-aided learning engineers.
Summing it up
So where does this leave us? I see a few lessons:
In general, lowering the cost of coding through generative AI doesn’t eliminate the need for technical interoperability standards groups like 1EdTech. But it could narrow the value proposition for their work as currently applied in the market.
Software engineers, learning designers, and other skilled humans have important skills and tacit knowledge that don’t show up in text and can’t be hoovered up by a generative AI that swallows the internet. Therefore, these skilled individuals will still be needed for some time to come.
We often gain access to tacit knowledge and valuable skills when skilled individuals talk to each other. The value of collaborative work, including standards work, is still high in a world of generative AI.
We can capture some of that tacit knowledge and those skills in machine-readable format if we set that as a goal. While doing so is not likely to lead to machines replacing humans in the near future (at least in the areas I’ve described in this post), it could lead to software that helps humans get more work done and spend more of their time working on hard problems that quirky, social human brains are good at solving.
1EdTech and its constituents have more to gain than to lose by embracing generative AI thoughtfully. While I won’t draw any grand generalizations from this, I invite you to apply the thought process of this blog post to your own worlds and see what you discover.
One thing that I have recently become very interested in is “stay interviews.”
These interviews are beneficial because they identify which factors keep a current employee engaged and which ones do not.
Think about it. Why do you decide to remain at your current job? What would entice you to leave? Perhaps a better offer?
This information is perfect for employers who wish to attract millennials to their workplace.
Stay interviews are informal conversations
What to ask in a stay interview
Ask what would make your employee leave
How managers can stay accountable
Question 1 – What do you look forward to each day when you commute to work?
Question 2 – What are you learning here, and what do you want to learn?
Question 3 – Why do you stay here?
Question 4 – When is the last time you thought about leaving us, and what prompted it?
Question 5 – What can I do to make your job better for you?
This is especially important for rural workplaces, which often struggle to attract and retain employees. Let’s support them in any way we can.
Now, I do not have any direct reports at this time, but I have had a wealth of organizational leadership experience throughout my 20 years in higher education. As an employee, I would not like to answer these questions, so I would suggest that leaders determine which ones are most appropriate for their teams.
We do not want these “stay interviews” to be the first interview on a short journey to an “exit interview”.
In the comment box, let us know which questions you would add and which questions you would delete.