This post was kindly written by Vincent Everett, who is head of languages in a comprehensive school and sixth form in Norfolk. He blogs as The Nice Man Who Teaches Languages.
In Part 1, I looked at how the low grades given in GCSE languages – up to a grade lower than in pupils’ other subjects – are a manufactured situation, easily solved at the stroke of a pen. The narrative around languages being harder has nothing to do with the content of the course or the difficulty of the exam. It is simply a historical anomaly in how the grades are allocated. There is also a false narrative that this unfair grading is due to pupils’ individual ability, the nation’s ability, or the quality of teaching. And I made a subtle plea for commentators to avoid reinforcing this narrative to push their own diagnoses or solutions.
In Part 2, I will consider what happens in post-16 language learning. This has also been the subject of reporting in the wake of A-Level results and the recent HEPI report. I am not going to deny that A-Level languages are in crisis. But the crisis in A-Level and the crisis of language learning post-16 are not one and the same.
There are specific problems with the current A-Level specification for languages. The amount of content to be studied, comprising recondite details of every aspect of the Spanish / French / German speaking world, is unmanageable. Worse, as this post explains, the content is out of kilter with the exam. All the encyclopaedic knowledge of politics, history, popular culture and high culture that takes up the bulk of the course is ultimately required for only one question in just one part of the Speaking Exam. The difficulty of the course is compounded by the extremely high standards required, especially for students who have learned their language in the school context. I personally know of language teachers and college leaders who have discouraged their own children from taking A-Level languages in order not to jeopardise their grades for university application. It is getting to the point where I can no longer, in good conscience, let ambitious students embark on the course without warning them of the overwhelming workload and doubtful outcomes.
So A-Level could be improved. But as an academic course, it will always remain the domain of a tiny few. Similarly, specialist Philology degrees at university – the academic study of the language through the intersection of literary and textual criticism, linguistics and the history of the language – only attract a very small minority. Neither university language degrees, nor A-Level, are a mainstream language learning pathway.
It is a particularly British mentality to only value language learning if its intellectual heft is boosted by the inclusion of essays, abstruse grammar, linguistics, literature, politics, history, and a study of culture. In other words, philology. Philology is not the same as language learning.
Universities do offer language learning opportunities for students of other disciplines. However, in sixth form, because of the funding requirement to offer Level 3 courses, there are no mainstream language learning options available to the vast majority of students who do not study A-Level languages. We have a gap in 16-19 provision where colleges do not offer a mainstream language learning pathway.
This gap is fatal to language study. It means GCSE is seen as a dead-end. It means that universities have a tiny pool of students ready and able to take up language degrees or degrees with languages as a component.
The crisis is not one of how to channel more people into studying A-Level languages. It is a question of finding radical new ways of offering mainstream language learning post-16, and how to make this the norm. We know from the HEPI report that young people in the UK are among the most avid users of the online language learning app Duolingo. Young people are choosing to engage with language learning, but in terms of formal education, we are leaving a two-year gap between GCSE and the opportunities offered by universities.
If this hiatus in language learning is the problem, is there a solution? I have two suggestions, one of which is relatively easy, if we agree that action is needed. If universities genuinely believe that a language is an asset, then they could send a powerful message to potential applicants.
Going to university means joining an international organisation, including the possibility of studying abroad, using languages for research, engaging with other students from across the globe, and quite possibly taking a language course while at university. The British Academy reports that universities are calling for language skills across research disciplines, so I hope that they would be able to send a strong message to students in schools and colleges.
The message around applications and admissions could be that evidence of studying a language or languages post-16 is something that universities look for. At the very least, they could signal that an interest in self-directed language learning is something they would value.
I understand that most universities would stop short of making a qualification in a language a formal entry requirement, because they fear it could exclude many applicants, especially those from disadvantaged groups. But a strong message could help reverse the situation where language learning opportunities are currently denied to many under-privileged school pupils, who aren’t getting the message around the value of pursuing a language.
And my second, more difficult suggestion? Would it be possible to plug the two-year gap with provision at sixth form or college? An app such as Duolingo has attractions: the flexibility and independence of study, as well as the focus on motivation by level of learning, hours of study or points scored. It is very difficult to imagine how a sixth form or college could otherwise provide language classes for its varied intake from different schools, with different language learning experiences in different languages.
Is there scope here for a new Oak Academy to step in and create resources? Or for the government to commission resources from an educational technology provider? Is there a role for universities here? The inspiring Languages for All project shows what can happen when a university engages with local schools to identify and tackle obstacles to language learning. The pilot saw Royal Holloway University working with schools across Hounslow, to increase participation at A-Level in a mutually beneficial partnership. Many of the strategies could equally apply to more mainstream (non A-Level) language learning partnerships. These included strong messaging, co-ordinated collaboration between colleges, face-to-face sessions and events at the university, and deployment of university students as mentors.
The aim would be to transform the landscape. Currently we have a dead-end GCSE where unfair grading serves as a deterrent, and where there is no mainstream option to make continuing with language learning the norm. A strong message from universities, along with an end to unfair grading, could make a big difference to uptake at GCSE. A realisation that A-Level and specialist philology degrees are not sufficient for the language learning needs of the country could lead to alternative, imaginative and joined-up options post-16. It could also boost the provision or recognition of self-study of a language and may even lead to the reinvigoration of adult education or university outreach language classes. And it could even see a larger pool of candidates for philology degrees at university.
“Colleges are now closing at a pace of one per week. What happens to the students?” Jon Marcus asked in a recent Hechinger Report piece. It’s not a rhetorical question — and it doesn’t have an easy answer. As educators, we’ve read the headlines, seen the numbers, and felt the pressure. Undergraduate enrollment is down. Student confidence is eroding. The enrollment cliff looms.
But instead of asking when higher education will fail, we might ask: What if this is a market correction — not a collapse? What if the problem isn’t higher education itself, but how we’ve framed its value and how we’ve taught? What if this moment is less an ending and more a beginning?
In the face of uncertainty, it’s tempting to focus on control: measurable learning outcomes, career-ready skills, standardized assessments. But today’s students are entering a competitive job market — and a world defined by accelerating change, emerging technologies, and challenges we haven’t yet named. That means our teaching needs to prepare them for what’s likely and for what’s possible — and even for what’s unknowable.
Ronald Beghetto’s framework for “educating for unknowable futures” offers a helpful lens. He proposes three levels of preparation:
Educating for likely futures: equipping students with foundational skills and durable knowledge.
Educating for possible futures: helping students build agency, creativity, and adaptability.
Educating for unknowable futures: inviting students to grapple with uncertainty through reflection and imagination.
Each level requires a shift in how we think about learning and a new set of pedagogical commitments.
1. Educating for Likely Futures: Redesigning Assignments Around Students’ Real Lives
Career readiness remains a core concern. But often, our tools for building it are misaligned with students’ actual experiences. Take the classic business case method, for example: many cases center Fortune 500 CEOs or global crises, which can feel abstract or inaccessible to undergraduates, especially first-generation students.
That’s why I now write my own cases: short, specific, and grounded in contexts my students know. In one recent case, I explored a conflict between student-athletes and faculty at a nearby Division III college. For my mostly student-athlete class, this was familiar and therefore grounding. Their analysis shifted. So did their engagement.
Designing assignments that reflect students’ likely futures — their majors, their industries, their regions — signals that their lives are valid sites of learning. It builds relevance. And it reminds them that professional decision-making doesn’t start “out there.” It starts here.
2. Educating for Possible Futures: Using EdTech with Purpose
Students also need to develop adaptive skills: how to think critically, navigate ambiguity, and evaluate tools in evolving environments. EdTech is a perfect place to practice this.
Today’s education market is flooded with tools — more than 370 vendors across more than 40 market segments, according to Encoura. But quantity isn’t quality. Too often, we adopt tools based on novelty or institutional trends rather than instructional value.
To support students in building discernment, we must model it ourselves. That means asking: Does this tool solve a real problem in my class? Does it deliver on its promises? Does it support learning equitably and sustainably?
In other words, we must shift from passive adopters to intentional evaluators and invite students into that evaluative process. Helping them think about how technology shapes learning (and their own agency within it) equips them for any environment, not just the one we’ve built.
3. Educating for Unknowable Futures: Making Space for Reflection
Preparing students for the truly unknown requires something more radical: making space for performance, yes, but also for reflection.
In a recent MBA course on negotiation and conflict, I made a bold move: I assigned weekly reflection journals — raw, stream-of-consciousness entries that linked course themes to students’ lived experiences. Some students resisted at first. But by the end, many said it changed the way they approached class and life.
Reflection is often treated as an add-on, something optional or “soft.” But it’s essential. It helps students surface assumptions, interrogate choices, and practice metacognition. And in a world where knowledge and skills are constantly evolving, the ability to learn how to learn may be the most durable skill of all.
Possibility Thinking, in Practice
If our current moment is a reckoning, then our response must be one of responsibility. We cannot guarantee our students a particular future. But we can offer them the tools to shape one.
Beghetto calls this “agentic awareness” — a belief in one’s ability to influence outcomes. It’s a curriculum and a posture. And it’s something we can model by how we teach: with creativity, clarity, and curiosity.
So the next time you see another headline about higher ed’s collapse, ask yourself: What if we treated this as a moment to reimagine rather than as a crisis to survive?
That’s resilience, and it’s possibility thinking in action.
Three Small Shifts You Can Make This Semester
Now is the perfect time to start leaning into the possibility of our problems. To do so, try:
Redesigning one assignment to reflect your students’ actual career goals or lived experiences. Meeting students where they are will help them better envision where they’re headed.
Asking your classroom technology better questions. Push beyond features to real learning outcomes when you choose to invite EdTech into your classroom.
Making reflection part of the grade. Don’t treat it as busywork but as weighted, important meaning-making.
Higher education may be facing unprecedented disruption, but disruption doesn’t have to mean decline. In fact, the classroom may be one of the last places where we still have real influence over what comes next. Each lesson we design, each conversation we facilitate, each moment we create for reflection — these are acts of future-building.
Educating for unknowable futures doesn’t mean we need to predict what’s next. It means we help students learn to ask better questions, adapt with confidence, and recognize their own capacity to shape change. And it means we embrace that same mindset ourselves.
The future of higher education won’t be saved by sweeping reforms or silver-bullet technologies. It will be co-created — one thoughtful assignment, one intentional choice, one student at a time. And that work starts not in distant policy meetings, but right here, in our classrooms.
Laura Nicole Miller, DET, is an assistant professor in the Grenon School of Business at Assumption University, where she teaches organizational communication, marketing, and management. A first-generation college graduate and former EdTech executive, she studies how communication practices shape equity, trust, and student success in high-stakes environments.
Craft, A. (2015). Possibility thinking: From what is to what might be. In R. Wegerif, L. Li, & J. C. Kaufman (Eds.), The Routledge international handbook of research on teaching thinking (pp. 15–26). Routledge. https://doi.org/10.4324/9781315797021
Miller, L. N. (2025). “D-III students deserve better”: Strategic communication with college stakeholders. The CASE Journal, 21(3), 493–516. https://doi.org/10.1108/TCJ-06-2024-0184
This post was kindly written by Vincent Everett, who is head of languages in a comprehensive school and sixth form in Norfolk. He blogs as The Nice Man Who Teaches Languages at https://whoteacheslanguages.blogspot.com.
We have to bring an end to the Culture Wars in “Modern Foreign Languages” in England. Since 2019 we have been convulsed in an internecine political fight over whether our subject is about Communication or Intellectual Conceptualisation. Of course, it’s both. The same goes for Literature, Linguistics, Content and Language Integrated Learning (CLIL), and Culture. Likewise, we can encompass transactional travel language, personal expression, professional proficiency, creative or academic language. Teachers have all of these on their radar, and make decisions on how to select and integrate them on a daily basis.
Our subject benefits from the richness of all these ingredients, and to privilege one or to exclude others is to make us all the poorer. Teachers work in the rich and messy overlap between Grammar and Communication, engaging with pupils at every stage through their encounters with and progression through another language.
The narrative that it is harder to succeed in languages is accurate – not because of the difficulty of the course content or the exams, but because of the way grades are allocated. It is not accurate to say that this reflects pupils’ progress or the quality of teaching compared to other subjects. That calibration has not been made. In fact, grades are not calibrated from one subject to another. The only calibration that is made is to perpetuate grading within the subject year on year.
This was most famously set up in advance when we moved to a new GCSE in 2018. The unfair grading of the old GCSE was carefully and deliberately transferred across to the new GCSE. So pupils taking the new course and the new exam, even though it was proposed to be a better course and a better exam, had no chance of showing they could get better grades. Furthermore, where under the old A*-G grading system the difference between languages and other subjects had been around half a grade, the new 9-1 grading stretched the difference in the key area of grades 4 and above to a whole grade, because of the way the old grades were mapped onto the new ones.
The lower grades given out in languages are a strong disincentive for take-up at GCSE. There is the accurate narrative that pupils will score a lower grade if they pick languages, which acts as a deterrent not only for pupils but also for schools. One way to score higher in league tables is to have fewer pupils taking MFL. There is also the inaccurate narrative that this is a reflection of pupils’ own ability, the nation’s ability, or the quality of teaching. The allocation of grades is a historical anomaly perpetuated year on year, not a reflection of actual achievement.
This is the biggest issue facing modern languages. It would also be the easiest to fix. In other subjects, grade boundaries are used to bring standards into line: if an exam is too easy or too hard, and many pupils score a high or a low mark, the grade boundaries are adjusted to make sure the correct number of pupils get each grade. Except, that is, in modern languages, where the thresholds are used to make sure that grades are out of line with other subjects. Imagine if languages grades were allocated in line with other subjects: would there be a clamour of voices insisting they should be made more difficult?
There is a very real danger of misinterpreting this manufactured narrative of “failure” in languages. It features in every report or proposal, but often instead of identifying it as an artificial anomaly, it is used to diagnose a deficit and prescribe a solution. Often this is a solution taken from the culture wars, ignoring the fact that schools and teachers are already expertly blending and balancing the elements of our subject.
Unfair grading at GCSE is the greatest of our problems, and the easiest to sort out. In Part 2, I shall look at the trickier question of what happens post-16.
The higher education sector had high hopes of a new government last July. Early messaging from ministers suggested that they were justified. The Guardian quoted Peter Kyle, the Science Secretary, declaring an ‘end to the war on universities’. Speaking to the Commons in September 2024, the Education Secretary Bridget Phillipson said that ‘the last Government … use[d] our world-leading sector as a political football, talking down institutions and watching on as the situation became … desperate. I [want to] … return universities to being the engines of growth and opportunity’. In November, she announced a rise – albeit for just one year in the first instance – in the undergraduate tuition fee, with the prospect of alleviating pressure on higher education budgets.
Ten months on, those hopes look tarnished as financial, political and policy challenges mount. The higher education funding challenge is deepening, it seems, by the week. The OfS has reported that four in ten universities expect to be in deficit this year. Restructuring programmes are underway in scores of universities, with some institutions on their second, third or even fourth round of savings. The post-study graduate visa, an important lifeline for international student recruitment, appears to be under threat.
Policy direction appears to be unclear. The English higher education sector is still largely shaped by the coalition government’s policy decisions between 2010 and 2015. Its key design principles include: uncapped student demand, since number controls were abolished in 2013; assumed cross-subsidies across and between activity streams, allowing for institutional flexibility; access to private capital markets, since HEFCE capital funding was removed in 2011; diverse missions but largely homogeneous delivery models, based around traditional terms and full-time, three-year undergraduate provision; and jealously protected institutional autonomy. Familiar though these principles are in higher education policy, some are in truth relatively recent, and they are creating tensions between what the nation wants from its university system, what universities can offer and what the government and others are willing to pay for.
Moreover, the sector we have in 2025 is not the sector which the Higher Education and Research Act 2017 (HERA) envisaged: HERA was expected to significantly re-shape the sector. The government’s impact assessment of HERA suggested that there would be in the order of 800 HE providers by the mid-2020s. This did not happen, though the impact of private capital, often channelled through established institutions and now rapidly growing for-profit providers, should not be underestimated as a longer-term transformative force in the sector.
We are expecting both a three-year comprehensive spending review and a post-16 White Paper in a couple of months’ time. In my 2024 HEPI paper, ‘Four Futures’, I sketched out possible scenarios for a sector facing intense challenges: the near-frozen undergraduate fee was reducing the unit of resource for undergraduate teaching as costs rose; undergraduate demand seemed to be softening, especially amongst disadvantaged eighteen-year-olds; international student demand remained volatile and subject to political changes in visa regulations; and the structural deficit on research funding was deepening. ‘Four Futures’ outlined four scenarios, summarised in Table 1.
Of course, we all want a mixture of cost control, thriving universities, regional growth and research excellence, but it is difficult to have all of them. Governments and universities set priorities based on limited resources, so there are choices to be made and trade-offs to be confronted for both policymakers and institutional leaders.
Government needs to make decisions about universities in the context of competing and changing policy imperatives. It needs to balance restoring government finances, allocating resources to other needy sectors, securing economic growth and, more obviously important than a year ago, protecting sovereign intellectual property assets and growing defence-related R&D. The Secretary of State’s letter to Vice-Chancellors in the autumn identified growth, engagement with place, teaching excellence, widening participation and securing efficiencies, but did not unpick the tensions between them. Unpicking them depends on articulating a stronger vision for higher education, given the Government’s priorities and resources and the economic challenges facing institutions, and that is a task for the forthcoming White Paper.
But there are urgent choices too for institutions, and those need to be made quickly in many universities. Institutional and sector efficiencies are vital, and a key theme of the UUK Carrington Review, but they need to be considered in the light of sustainable operating models for both academic delivery and professional services. Institutions need a clearly articulated value proposition, communicated strongly and effectively and capable of driving the operating model. In the past, too many universities have tried to do too many things – and with resources scarce, the choices cannot be ducked. That means there is a consideration which links the choices facing government and those facing individual institutions. If a core strength of the English system lies in its diversity and its distributed excellence, individual institutions need to think about their place in, and responsibilities to, the wider HE system. For a sector characterised by intense competition, that is a profound cultural shift, notwithstanding the economic and legal challenges of collaboration.
The higher education sector now is not the sector we have always had, and it won’t be the sector we will always have. How the sector collectively, and institutions individually, confront these choices is a test for policymakers and institutional leaders alike.
I write this post to e-Literate readers, Empirical Educator Project (EEP) participants, and 1EdTech members. You should know each other. But you don’t. We should all be working on solving problems together. But we aren’t.
Not yet, anyway. Now that EEP is part of 1EdTech, I’m writing to ask you to come together at our Learning Impact conference in Indianapolis, the first week in June, to take on this work together.
1EdTech has the potential to enable a massive learning impact because we have proven that we can change the way the entire EdTech ecosystem works together. (I recently posted a dialogue with Anthropic Claude about this topic.) I highlight the word “potential” because, as a community-driven organization, we only take on the challenges that the community decides to take on together. And the 1EdTech community has not had many e-Literate readers and EEP participants who can help us identify the most impactful challenges we could take on together.
On the morning of Monday, June 2nd, we’ll have an EEP mini-conference. For those of you who have been to EEP before, the general idea will be familiar but the emphasis will be different. EEP didn’t have a strong engine to drive change. 1EdTech does. So the EEP mini-conference will be a series of talks in which the speakers propose ideas about what 1EdTech should be working on, based on their potential learning impact. If you want to come just for the day, you can register for the mini-conference for $350 and participate in the opening events as well. But I invite you to register for the full conference. If you scan the agenda, you’ll see sessions throughout the conference that will interest e-Literate readers and EEP participants.
EEP will become Learning Impact Labs
We’re building something bigger. Nesting EEP inside Learning Impact is just a start. Our larger goal is to create an umbrella of educational impact-focused proposals for work that 1EdTech can take on now and a series of exploratory projects for us to understand work that we may want to take on soon. You may recall my AI Learning Design Assistant (ALDA) project, for example. That experiment now lives inside 1EdTech. As a community, we will be working to become more proactive, anticipating needs and opportunities that are directly driven by our collective understanding of what works, what is needed, and what is coming. We will have ideas. But we need yours.
Come. Join us. If you’ve been a fellow traveler with me but haven’t seen a place for you at 1EdTech, I want you to know we have a seat with your name on it. If you’re a 1EdTech member who has colleagues more focused on the education (or the EdTech product design) side, let them know they can have a voice in 1EdTech.
In my classroom, I frequently hear students ask questions like “How is this relevant to the real world?” or “Why should I care? I will never use this.” This highlights the need for educators to emphasize real-world applications across all subjects.
As an educator, I consistently strive to illustrate the practical applications of geography beyond the classroom walls. By incorporating real-world experiences and addressing problems, I aim to engage students and encourage them to devise solutions to these challenges. For instance, when discussing natural resources in geography, I pose a thought-provoking question: “What is something you cannot live without?” As students investigate everyday items, I emphasize that most of these products originate from nature at some point, prompting a discussion on the “true cost” of these goods.
Throughout the unit, I invite a guest speaker who shares insights about their job duties and provides information related to environmental issues. This interaction helps students connect the dots, understanding that the products they use have origins in distant places, such as the Amazon rainforest. Despite it being thousands of miles away, I challenge students to consider why they should care.
As students engage in a simulation of the rainforest, they begin to comprehend the alarming reality of its destruction, driven by the increasing demand for precious resources such as medicines, fruits, and beef. By the conclusion of the unit, students will participate in a debate, utilizing their research skills to argue for or against deforestation, exploring its implications for resources and products in relation to their daily lives. This approach not only enhances their understanding of geography but also creates a real-world connection that fosters a sense of responsibility toward the environment.
Creating a foundation to build upon
Engaging in academic discussions and navigating through academic content is essential for fostering a critical thinking mentality among students. However, it is often observed that this learning does not progress to deeper levels of thought. Establishing a solid foundation is crucial before advancing toward more meaningful and complex ideas.
For instance, in our geography unit on urban sprawl, we start by understanding the various components related to urban sprawl. As we delve into the topic, I emphasize the importance of connecting our lessons to the local community. I pose the question: How can we identify an issue within the town of Lexington and address it while ensuring we do not contribute to urban sprawl? Without a comprehensive foundation, students struggle to elevate their thinking to more sophisticated levels. Therefore, it is imperative to build this groundwork to enable students to engage in higher-order thinking effectively.
Interdisciplinary approaches
Incorporating an interdisciplinary approach can significantly enrich the learning process for students. When students recognize the connections between different subjects, they gain a deeper appreciation for the relevance of their education. According to Moser et al. (2019), “Integrative teaching benefits middle-level learners as it potentially increases student engagement, motivation, and achievement. It provides learners with the opportunity to synthesize knowledge by exploring topics and ideas through multiple lenses.” This method emphasizes the importance of making meaningful connections that deepen students’ comprehension. As they engage with the content from different perspectives, students will apply their learning in real-world contexts.
For instance, principles from science can be linked to literature they are studying in English class. Similarly, concepts from physics can be applied to understand advancements in medical studies. By fostering these connections, students are encouraged to think critically and appreciate the interrelated nature of knowledge.
Incorporating technology within classrooms
In today’s digital world, where technology is readily accessible, it is crucial for classroom learning to align with current technological trends and innovations. Educators who do not incorporate technology into their teaching practices are missing an opportunity to enhance student learning experiences. In my class, I have students use Google Earth to explore a designated area that we previously outlined. Each student selects a specific region to concentrate on during their analysis. This process involves identifying areas that require improvement and discussing how improving them can benefit the community. Additionally, we examine how these changes can help limit urban sprawl and reduce traffic congestion.
We have moved beyond the era of relying solely on paper copies and worksheets; the focus now is on adapting to change and providing the best opportunities for students to express themselves and expand their knowledge. As Levin & Wadmany (2006) observe, “some teachers find that technology encourages greater student-centeredness, greater openness toward multiple perspectives on problems, and greater willingness to experiment in their teaching.” This highlights the necessity for teachers to evolve into facilitators of learning, acting as guides who support students taking ownership of their learning.
Strategies for implementation
1. Start with the “why”: Teachers should critically consider the significance of their instructional approaches: Why is this method or content essential for students’ learning? Having a clear vision of the desired learning outcomes enables educators to plan effectively and choose the right instructional strategies. This intentionality is crucial.
2. Use authentic materials: Incorporating meaningful texts that involve real-world concepts can significantly enhance student engagement. For instance, in a social studies class, discussing renewable energy can lead to academic discussions or projects in which students research local initiatives in their community.
3. Promote critical thinking: Encourage students to engage in critical thinking by asking open-ended questions, creating opportunities for debates to challenge their ideas, and urging them to articulate and defend their viewpoints.
4. Encourage collaboration: Students excel in collaborative learning environments, such as group projects and peer reviews, where they can engage with their classmates. These activities allow them to learn from each other and consider different perspectives.
5. Provide ongoing feedback: Providing constructive feedback is essential for helping students identify their strengths and areas for improvement. By scheduling regular check-ins, teachers can tailor their instruction to meet the academic needs of individual students.
References
Levin, T., & Wadmany, R. (2006). Teachers’ Beliefs and Practices in Technology-based Classrooms: A Developmental View. Journal of Research on Technology in Education, 39(2), 157–181. https://doi.org/10.1080/15391523.2006.10782478
Moser, K. M., Ivy, J., & Hopper, P. F. (2019). Rethinking content teaching at the middle level: An interdisciplinary approach. Middle School Journal, 50(2), 17–27. https://doi.org/10.1080/00940771.2019.1576579
Skyler Stoll, Middle School Teacher, South Carolina
Skyler Stoll is a graduate student at the University of South Carolina and is a 7th grade social studies teacher in South Carolina.
AI is enhancing our ability to communicate, much like the printing press and the internet did in the past. And lawmakers nationwide are rushing to regulate its use, introducing hundreds of bills in states across the country. Unfortunately, many AI bills we’ve reviewed would violate the First Amendment — just as FIRE warned against last month. It’s worth repeating that First Amendment doctrine does not reset itself after each technological advance. It protects speech created or modified with artificial intelligence software just as it does speech created without it.
On the flip side, AI’s involvement doesn’t change the illegality of acts already forbidden by existing law. There are some narrow, well-defined categories of speech not protected by the First Amendment — such as fraud, defamation, and speech integral to criminal conduct — that states can and do already restrict. In that sense, the use of AI is already regulated, and policymakers should first look to enforcement of those existing laws to address their concerns with AI. Further restrictions on speech are both unnecessary and likely to face serious First Amendment problems, which I detail below.
Constitutional background: Watermarking and other compelled disclosure of AI use
We’re seeing a lot of AI legislation that would require a speaker to disclose their use of AI to generate or modify text, images, audio, or video. Generally, this includes requiring watermarks on images created with AI, mandating disclaimers in audio and video generated with AI, and forcing developers to add metadata to images created with their software.
Many of these bills violate the First Amendment by compelling speech. Government-compelled speech—whether that speech is an opinion, or fact, or even just metadata—is generally anathema to the First Amendment. That’s for good reason: Compelled speech undermines everyone’s right to conscience and fundamental autonomy to control their own expression.
To illustrate: Last year, in X Corp. v. Bonta, the U.S. Court of Appeals for the Ninth Circuit reviewed a California law that required social media companies to post and report information about their content moderation practices. FIRE filed an amicus curiae — “friend of the court” — brief in that case, arguing the posting and reporting requirements unconstitutionally compel social media companies to speak about topics on which they’d like to remain silent. The Ninth Circuit agreed, holding the law was likely unconstitutional. While acknowledging the state had an interest in providing transparency, the court reaffirmed that “even ‘undeniably admirable goals’ ‘must yield’ when they ‘collide with the . . . Constitution.’”
There are (limited) exceptions to the principle that the state cannot compel speech. In some narrow circumstances, the government may compel the disclosure of information. For example, for speech that proposes a commercial transaction, the government may require disclosure of uncontroversial, purely factual information to prevent consumer deception. (For example, under this principle, the D.C. Circuit allowed federal regulators to require disclosure of country-of-origin information about meat products.)
But none of those recognized exceptions would permit the government to mandate blanket disclosure of AI-generated or modified speech. States seeking to require such disclosures will face heightened scrutiny beyond what is required for commercial speech.
AI disclosure and watermarking bills
This year, we’re also seeing lawmakers introduce many bills that require certain disclosures whenever speakers use AI to create or modify content, regardless of the nature of the content. These bills include Washington’s HB 1170, Massachusetts’s HD 1861, New York’s SB 934, and Texas’s SB 668.
At a minimum, the First Amendment requires these kinds of regulations to be tailored to address a particular state interest. But these bills are not aimed at any specific problem at all, much less being tailored to it; instead, they require nearly all AI-generated media to bear a digital disclaimer.
For example, FIRE recently testified against Washington’s HB 1170, which requires covered providers of AI to include in any AI-generated images, videos, or audio a latent disclosure detectable by an AI detection tool that the bill also requires developers to offer.
Of course, developers and users can choose to disclose their use of AI voluntarily. But bills like HB 1170 force disclosure in constitutionally suspect ways because they aren’t aimed at furthering any particular governmental interest and they burden a wide range of speech.
In fact, if the government’s goal is addressing fraud or other unlawful deception, there are ways these disclosures could make things worse. First, the disclosure requirement will taint the speech of non-malicious AI users by fostering the false impression that their speech is deceptive, even if it isn’t. Second, bad actors can and will find ways around the disclosure mandate — including using AI tools in other states or countries, or just creating photorealistic content through other means. False content produced by bad actors will then have a much greater imprimatur of legitimacy than it would in a world without the disclosures required by this bill, because people will assume that content lacking the mandated disclosure was not created with AI.
A handful of bills introduced this year seek to categorically ban “deepfakes.” In other words, these bills would make it unlawful to create or share AI-generated content depicting someone saying or doing something that the person did not in reality say or do.
Categorical exceptions to the First Amendment exist, but these exceptions are few, narrow, and carefully defined. Take, for example, false or misleading speech. There is no general First Amendment exception for misinformation or disinformation or other false speech. Such an exception would be easily abused to suppress dissent and criticism.
There are, however, narrow exceptions for deceptive speech that constitutes fraud, defamation, or appropriation. In the case of fraud, the government can impose liability on speakers who knowingly make factual misrepresentations to obtain money or some other material benefit. For defamation, the government can impose liability for false, derogatory speech made with the requisite intent to harm another’s reputation. For appropriation, the government can impose liability for using another person’s name or likeness without permission, for commercial purposes.
Like an email message or social media post, AI-generated content can fall under one of these categories of unprotected speech, but the Supreme Court has never recognized a categorical exception for creating photorealistic images or video of another person. Context always matters.
Although some people will use AI tools to produce unlawful or unprotected speech, the Court has never permitted the government to institute a broad technological ban that would stifle protected speech on the grounds that the technology has a potential for misuse. Instead, the government must tailor its regulation to the problem it’s trying to solve — and even then, the regulation will still fail judicial scrutiny if it burdens too much protected speech.
AI-generated content has a wide array of potential applications, spanning from political commentary and parody to art, entertainment, education, and outreach. Users have deployed AI technology to create political commentary, like the viral deepfake of Mark Zuckerberg discussing his control over user data — and for parody, as seen in the Donald Trump pizza commercial and the TikTok account dedicated to satirizing Tom Cruise. In the realm of art and entertainment, the Dalí Museum used deepfake technology to bring the artist back to life, and the TV series “The Mandalorian” recreated a young Luke Skywalker. Deepfakes have even been used for education and outreach, with a deepfake of David Beckham raising awareness about malaria.
These examples should not be taken to suggest that AI is always a positive force for shaping public discourse. It’s not. But not only will categorical bans on deepfakes restrict protected expression such as the examples above, they’ll face — and are highly unlikely to survive — the strictest judicial scrutiny under the First Amendment.
Categorical deepfake prohibition bills
Bills with categorical deepfake prohibitions include North Dakota’s HB 1320 and Kentucky’s HB 21.
North Dakota’s HB 1320, a failed bill that FIRE opposed, is a clear example of what would have been an unconstitutional categorical ban on deepfakes. The bill would have made it a misdemeanor to “intentionally produce, possess, distribute, promote, advertise, sell, exhibit, broadcast, or transmit” a deepfake without the consent of the person depicted. It defined a deepfake as any digitally-altered or AI-created “video or audio recording, motion picture film, electronic image, or photograph” that deceptively depicts something that did not occur in reality and includes the digitally-altered or AI-created voice or image of a person.
This bill was overly broad and would criminalize vast amounts of protected speech. It was so broad that it would be like making it illegal to paint a realistic image of a busy public park without obtaining everyone’s consent. Why make it illegal for that same painter to take their realistic painting and bring it to life with AI technology?
HB 1320 would have prohibited the creation and distribution of deepfakes regardless of whether they cause actual harm. But, as noted, there isn’t a categorical exception to the First Amendment for false speech, and deceptive speech that causes specific, targeted harm to individuals is already punishable under narrowly defined First Amendment exceptions. If, for example, someone creates and distributes to other people a deepfake showing someone doing something they didn’t in reality do, thus effectively serving as a false statement of fact, the depicted individual could sue for defamation if they suffered reputational harm. But this doesn’t require a new law.
Even if HB 1320 were limited to defamatory speech, enacting new, technology-specific laws where existing, generally applicable laws already suffice risks sowing confusion that will ultimately chill protected speech. Such technology-specific laws are also easily rendered obsolete and ineffective by rapidly advancing technology.
HB 1320’s overreach clashed with clear First Amendment protections. Fortunately, the bill failed to pass.
Constitutional background: Election-related AI regulations
Another large bucket of bills that we’re seeing would criminalize or create civil liability for the use of AI-generated content in election-related communications, without regard to whether the content is actually defamatory.
Like categorical bans on AI, regulations of political speech have serious difficulty passing constitutional muster. Political speech receives strong First Amendment protection and the Supreme Court has recognized it as essential for our system of government: “Discussion of public issues and debate on the qualifications of candidates are integral to the operation of the system of government established by our Constitution.”
As noted above, the First Amendment protects a great deal of false speech, so these regulations will be subject to strict scrutiny when challenged in court. This means the government must prove the law is necessary to serve a compelling state interest and is narrowly tailored to achieving that interest. Narrow tailoring in strict scrutiny requires that the state meet its interest using the least speech-restrictive means.
This high bar protects the American people from poorly tailored regulations of political speech that chill vital forms of political discourse, including satire and parody. Vigorously protecting free expression ensures robust democratic debate, which can counter deceptive speech more effectively than any legislation.
Under strict scrutiny, prohibitions or restrictions on AI-modified or generated media relating to elections will face an uphill battle. No elections in the United States have been decided, or even materially impacted, by any AI-generated media, so the threat — and the government’s interest in addressing it — remains hypothetical. Even if that connection were established, many of the current bills are not narrowly tailored; they would burden all kinds of AI-generated political speech that poses no threat to elections. Meanwhile, laws against defamation already provide an alternative means for candidates to address deliberate lies that harm them through reputational damage.
Already, a court has blocked one of these laws on First Amendment grounds. In a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, a federal court recently applied strict scrutiny and blocked a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content.
Election-related AI bills
Unfortunately, many states have jumped on the bandwagon to regulate AI-generated media relating to elections. In December, I wrote about two bills in Texas — HB 556 and HB 228 — that would criminalize AI-generated content related to elections. Other bills now include Alaska’s SB 2, Arkansas’s HB 1041, Illinois’s SB 150, Maryland’s HB 525, Massachusetts’s HD 3373, Mississippi’s SB 2642, Missouri’s HB 673, Montana’s SB 25, Nebraska’s LB 615, New York’s A 235, South Carolina’s H 3517, Vermont’s S 23, and Virginia’s SB 775.
For example, S 23, a Vermont bill, bans a person from seeking to “publish, communicate, or otherwise distribute a synthetic media message that the person knows or should have known is a deceptive and fraudulent synthetic media of a candidate on the ballot.” According to the bill, synthetic media means content that creates “a realistic but false representation” of a candidate created or manipulated with “the use of digital technology, including artificial intelligence.”
Under this bill (and many others like it), if someone merely reposted a viral AI-generated meme of a presidential candidate that portrayed that candidate “saying or doing something that did not occur,” the candidate could sue the reposter to block them from sharing it further, and the reposter could face a substantial fine should the state pursue the case further. This would greatly burden private citizens’ political speech, and would burden candidates’ speech by giving political opponents a weapon to wield against each other during campaign season.
Because no reliable technology exists to detect whether media has been produced by AI, candidates can easily weaponize these laws to challenge all campaign-related media that they simply do not like. To cast a serious chill over electoral discourse, a motivated candidate need only file a bevy of lawsuits or complaints that raise the cost of speaking out to an unaffordable level.
Instead of voter outreach, political campaigning would turn into lawfare.
Concluding Thoughts
That’s a quick round-up of the AI-related legislation I’m seeing at the moment and how it impacts speech. We’ll keep you posted!
I’ve whined about bad infographics and I try to avoid complaining about their continuing proliferation. But I can’t bite my tongue about this ACPA infographic purporting to show information about technology usage by undergraduate students. It’s bad not just because it’s misrepresenting information but because it’s doing so in the specific context of making a call for quality research and leadership in higher education.
There are some serious problems with the layout and structure of the infographic but let’s focus on the larger issues of data quality and (mis)representation. I’ve labeled the three major sections of this infographic in the image to the right and I’ll use those numbers below to discuss each section.
Before I dive into the specific sections, however, I have to ask: Why aren’t the sources cited on the infographic? They’re listed on the ACPA president’s blog post (and perhaps other places) but it’s perplexing that the authors of this document didn’t think it important to credit their sources in their image.
Section 1: Student use of technology in social interactions and on mobile devices
The primary problem with this section is that it uses this Noel-Levitz report as its sole source of information and generalizes well beyond the bounds of that source. The report is based on a phone survey of “2,018 college-bound high school juniors and seniors (p. 2)” but that limitation is completely lost in this infographic. If this infographic is supposed to be about all U.S. undergraduate students, it’s inappropriate to generalize from a survey of high school students and misleading to project their behaviors and desires directly onto undergraduate students. For example, just over half (51.1%) of all undergraduate students are 21 years old or younger (source), so it’s problematic to assume that the nearly half of college students who are over 21 exhibit the same behaviors and desires as high school students.
I can’t help but also note just how bad the visual display of information is in the “social interactions” part of this infographic. The three proportionally-sized rectangles placed immediately next to one another make the entire thing appear to be one horizontal stacked bar when in fact they are three independent values unrelated to one another. This is very misleading!
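For what it’s worth, the fix is simple in any plotting library: independent percentages belong in separate bars on a shared 0–100% scale, not stacked end to end. Here is a minimal matplotlib sketch; the labels and values are placeholders I have invented for illustration, not the infographic’s actual figures:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Three independent percentages (placeholder values, not the
# infographic's actual figures) -- each is its own 0-100% quantity
labels = ["Texting", "Social media", "Email"]
values = [62, 48, 35]

fig, ax = plt.subplots()
ax.barh(labels, values)       # separate bars, one per value
ax.set_xlim(0, 100)           # shared scale keeps comparison honest
ax.set_xlabel("Percent of respondents")
fig.savefig("independent_bars.png")
```

Because each bar starts at zero on the same axis, readers can compare the three values without mistaking them for parts of a single whole.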
Section 2: Cyberbullying
It’s laudable to include information about a specific use of technology that harms many students, but like the first section, this one inappropriately and irresponsibly generalizes from a small survey to a large population. In this instance, 276 responses to a survey of students at one university are presented as representative of all students. Further, the one journal article cited as the source for these data doesn’t provide much information about the survey used to gather them, so we don’t even have many reassurances about the quality of these 276 responses. And although response rate isn’t the only indicator of data quality we should use to evaluate survey data, this particular survey had just a 1.6% response rate, which is quite worrying and makes me wonder if the data are even representative of the students at that one university.
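To put those numbers in perspective, here is a quick back-of-the-envelope calculation in Python. The 95% margin of error assumes simple random sampling, which this survey almost certainly doesn’t satisfy, so treat it as a best case:

```python
import math

respondents = 276
response_rate = 0.016  # 1.6%

# Implied number of students invited to take the survey
invited = respondents / response_rate

# Worst-case (p = 0.5) 95% margin of error for a proportion,
# under the generous assumption of simple random sampling
p = 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / respondents)

print(f"Students invited: ~{invited:,.0f}")  # ~17,250
print(f"Margin of error: ±{moe:.1%}")        # ±5.9%
```

Even that ±6% figure is optimistic: with more than 98% of invited students declining to respond, nonresponse bias can swamp sampling error entirely, which is the deeper reason to doubt that these 276 responses represent even that one campus.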
Section 3: Information-seeking
The third section of this infographic is well-labeled and uses a high quality source. I’m not sure how useful it is to present information about high school students in AP classes if we’re interested in the broader undergraduate population, but at least the infographic correctly labels the data so we can make that judgement ourselves. In fact, the impeccable source and labels used in this section make the problems in the other two sections even more perplexing.
This is all very frustrating given the context of the image in the ACPA president’s blog post that explicitly calls for ACPA to “advance the application of digital technology in student affairs scholarship and practice and to further enhance ACPA’s digital stamp and its role as a leader in higher education in the information age.” Given that context, I don’t know what to make of the problems with this infographic. Is this just a sloppy image hurriedly put together by one or two people who made some embarrassing errors in judgement? Or does this reveal some larger problems with how some student affairs professionals locate, apply, and reference research?*
* I bet that one problem is that many U.S. college and university administrators, including those in student affairs, automatically think of “college student” as meaning “young undergraduate student at a 4-year non-profit college or university.” It’s completely natural that we all tend to focus on the students on our campuses, but when discussing the larger context – such as when working on a task force in an international professional organization that includes members from all sectors of higher education – those assumptions need to at least be made clear if not completely set aside. In other words, it’s somewhat understandable if the authors of this image only work with younger students at 4-year institutions, because then some of their generalizations make some sense. They’re still inappropriate and indefensible generalizations, but they’re at least understandable.