To help you overcome this hurdle, we asked our Faculty Partners across disciplines, including English, psychology, and finance, to share some of the creative assignments they’ve come up with to keep class both educational and fun for students. Here are three innovative ideas worth taking note of.
1. Social media profiles of characters
Do you ever wonder what your favorite literary character’s social media profile would look like if they had one? Kerry L. Frabizio, Associate Professor of English at Warren County Community College, told us about a unique group project she introduced to her English Composition students. It lets them develop social media personalities based on characters from some of the most iconic plays in American history.
Project instructions: Each group will create a fictional but realistic social media presence for each assigned character. Be creative, but stay true to the character’s attitude, motivations and story arc.
Required components: Students must produce the following for each of their assigned characters:
Social Media Platform
Handle/Username
Profile Picture
Occupation or Life Role
Favorite Hashtags
Followers/Following (Optional)
One main post
This entertaining assignment reframes course content in a way that’s relatable and engaging for today’s tech-driven students.
2. Music video discussion
You might know the lyrics to your favorite song by heart, but have you considered the deeper meaning or significance behind them? Marc Wilson, Ph.D., Professor and Director of Graduate Psychology Programs at Fisher College, has transformed that concept into a media-based assignment for his psychology students.
Each week, students watch a music video relevant to one or more of their course topics. They’re asked to respond to the video, discussing how they think the lyrics or video relate to one of the disorders they’re studying that week.
Additionally, students are asked to find and share media that they think reflects that week’s course material. This assignment not only helps students develop critical thinking and analysis skills, but also gives them an opportunity to connect with topics in a meaningful way.
3. Discussion forum
If you’re looking for a creative assignment in finance, consider this discussion forum activity, courtesy of Ann Snell, Instructor, Business Administration at Alamance Community College. This activity gets students thinking critically about the role finance plays in their everyday lives, and the world around them. Each week, students receive engaging prompts based on their course content. They’re asked to reflect on questions like:
What’s one financial goal you care about? This could be saving for a trip, paying off student loans, starting a business, or investing in your future. Why is it important to you?
What’s something that represents your “money personality”? Maybe it’s your favorite budgeting app, a piggy bank, a vintage coin or even your go-to coffee splurge — anything that reflects your style with money.
If you could work anywhere in the world, where would it be — and why?
Students are allowed to record their answers on their phones or webcams. They can even use props to liven up their presentations.
These types of assignments get students actively engaged with their course content. By tapping into their sense of creativity and imagination, students can grow into confident learners built for future success.
Inclusivity and feelings of psychological safety in the classroom should not be reserved solely for K-12 learning environments. Students in higher education also deserve nurturing spaces that focus on utilizing their personalized strengths and needs to foster increased academic and social development. A costly misconception is that pedagogical approaches in higher education must be lecture-based and teacher-led. Instead, it is an educator’s responsibility to design cooperative learning structures that facilitate comfort and collaboration regardless of grade level. These practical techniques will develop critical thinking and problem-solving abilities by creating intentional opportunities to enhance social and communication skills. The fundamental question becomes: How can educators plan for and deliver instructional content that is relatable and meaningful?
According to a recent survey by Salesforce (2022), just 12% of college students felt a strong sense of belonging at their institutions. Feelings of disconnection from their peers and institutions persisted after students returned to campus from remote learning during the COVID-19 pandemic. To address this disconnection, we must promote inclusive and respectful learning environments that foster engagement, motivation, and active discourse.
Purposeful Application to Teamwork
A practical and successful strategy is to deliberately design performance tasks around cooperative learning structures. Beyond promoting positive interdependence and individual accountability, these structures ensure that students participate equally through simultaneous interaction. Such actionable, collaborative strategies help educators navigate their role as facilitators, guiding and responding rather than lecturing and directing.
In an undergraduate education course, I establish a positive tone by holding morning meetings at the start of every class. The intention is to incorporate short, structured routines that foster community, address social-emotional needs, and transition students to the day’s lesson. Purposefully, I provide opportunities for students to connect and practice the social skills necessary to participate in rigorous critical thinking and problem-solving tasks. The goal is to elicit positive peer interactions and foster a sense of belonging and trust where students consider multiple perspectives and actively engage in an inclusive and receptive learning environment.
To enrich their classroom experience, I align cooperative learning structures with the lesson outcomes, ensuring that 21st-century, real-world content tasks are purposeful and authentic. Motivated by intentional opportunities for collaboration, students actively process information in favorable conditions that are both supportive and stimulating.
Unlike group work, cooperative learning activities are highly structured, with defined roles, steps, and time limits. With transparent directions, every student knows exactly what to do. Another key difference is that these structures promote built-in accountability and equal participation, requiring everyone to contribute through regular interaction and processing.
When considering a teacher’s pedagogical content knowledge, eliciting student feedback is paramount. In end-of-course surveys, student reflections highlight the importance of the affective domain through several pivotal emerging themes. Students emphasized the significance of engaging, inclusive, and supportive learning environments. This aligns with findings from the literature suggesting that allowing students to apply course content in innovative and engaging ways fosters deeper understanding and a heightened sense of ownership in learning (Singha & Singha, 2024). Given these findings, creating a classroom community through collaboration increases motivation and encourages students to develop their critical thinking skills and analyze problems more effectively.
Some students conveyed appreciation for interactive, relational, and consistently structured learning experiences. One student communicated, “I felt heard, valued, and important in this class… everyone truly has the opportunity to openly share without judgment.” These emotional and interpersonal experiences support long-term learning and boost goal-directed behaviors.
Another key theme was the acknowledgment of active, collaborative, and applied learning activities. One student articulated the need for relatable, real-world, authentic tasks, stating, “My favorite activity was creating videos to connect with the assignments in class.” This reflection highlights the notion that students in higher education value opportunities that enable them to transition from passive consumers of information to active participants in the learning process (Ribeiro-Silva et al., 2022). Moreover, recent research supports the implementation of faculty professional development programs focused on active learning instruction and engaging students in college classrooms (Park & Xu, 2024).
Implications for Faculty
To design assignments that work and adopt a reflective framework in your own instruction, consider implementing the following practices:
Set Clear Learning Objectives: Ensure students understand what they are expected to accomplish during collaboration. Cooperative learning is most effective when aligned with specific outcomes, especially those that require higher-order thinking (analysis, synthesis, and application). Identify what students should know, do, or be able to explain by the end of the activity.
Design Purposeful and Flexible Groups: Keep it interesting and diversify student groups. Avoid simple tasks that can be done individually—design work that fosters interdependence and leverages diverse perspectives. Create tasks that are complex enough to require collaboration (e.g., solving a case study, evaluating evidence, or designing a project).
Structure Group Roles and Expectations: Assign roles based on instructional content to ensure equal participation and engagement among group members. Structured roles promote engagement and accountability. Rotate roles to ensure individual accountability and contribution.
Monitor and Support Groups in Real Time: Circulate during group work, check for understanding, and intervene as needed to guide discussion or clarify misconceptions. Guide communication, active listening, respectful disagreement, and group decision-making. Active monitoring shows students their work matters and allows faculty to model and scaffold skills often.
Encourage Metacognition: Provide opportunities for students to reflect on their group work experiences and the processes involved. Reflection reinforces academic content and collaboration skills.
By implementing collaborative techniques, faculty can motivate students to apply content knowledge in relatable, real-world contexts that prepare them for future success, and encourage the development and maintenance of the interpersonal skills necessary for lifelong learning. Furthermore, by refining our pedagogical practices, we can foster a more robust society with confident, culturally competent, and self-aware leaders of tomorrow.
Final Reflections
Former U.S. Secretary of Education Miguel Cardona quoted the African proverb, “If you want to go fast, go alone. If you want to go far, go together.” University settings often overlook effective pedagogical approaches that incorporate collaboration, limiting opportunities for meaningful learning experiences that enhance understanding, sustain motivation, and spark intellectual curiosity. By designing, facilitating, monitoring, and evaluating inclusive learning spaces, we can contribute to a sense of belonging and bring joy back into the classroom.
Dr. Ana Figueroa is an assistant professor of education at the University of Tampa and the Lead Instructor of the Education Foundations and Human Exceptionalities courses in the undergraduate program. Her research interests include differentiated instruction, progress monitoring, teacher mindset, and highly effective instructional strategies. She champions equitable instruction for all learners.
References
Kagan, Spencer, and Miguel Kagan. Kagan Cooperative Learning. San Clemente, CA: Kagan Publishing, 2021.
Park, Elizabeth S., and Di Xu. “The Effect of Active Learning Professional Development Training on College Students’ Academic Outcomes.” Journal of Research on Educational Effectiveness 17, no. 1 (December 20, 2024): 43–64. https://doi.org/10.1080/19345747.2022.2151954.
Ribeiro-Silva, Elsa, Catarina Amorim, José Luis Aparicio-Herguedas, and Paula Batista. “Trends of Active Learning in Higher Education and Students’ Well-Being: A Literature Review.” Frontiers in Psychology 13 (April 18, 2022). https://doi.org/10.3389/fpsyg.2022.844236.
Singha, Ranjit, and Surjit Singha. “Educational Innovation Transforming Higher Education for Workforce Readiness.” Advances in Higher Education and Professional Development, January 29, 2024, 37–55. https://doi.org/10.4018/979-8-3693-0517-1.ch003.
I recently attended a student panel on the use of AI in college classes. The three panelists shared their perspectives, borne out in an April 2025 Gallup survey: they want to learn to use AI tools in appropriate ways, they’re anxious about the potential negative impact on their critical thinking skills, and they want explicit guidance from their faculty members—which they’re not consistently getting.
Even, or especially, in an AI age, we know our students need to think critically and learn authentically to be productive and engaged citizens and future employees. We recognize we need to update assignments to promote academic integrity in an AI age, yet many of us feel unprepared to do so. The crux of the problem is this: how do we foster critical thinking and authentic learning when it’s so easy for students to outsource their cognitive work?
One promising solution to the triple challenge of fostering critical thinking, meaningful learning, and academic integrity is to double down on transparency. We can provide the guidance students want, embed analysis and evaluation into our assignments to get at that all-important critical thinking, and nudge students toward integrity. How? By embracing transparency.
Let’s detail our expectations for responsible AI use. Let’s emphasize the why: why we want students to use their own cognitive abilities for some tasks, why using AI could be helpful at times, and why we’ve crafted our AI-integrated assignments in the ways we have. Finally, let’s ask students to disclose their use in full transparency (and reflect on their learning, as Marc Watkins argues here).
In short, we can promote integrity when assessments are meaningful, purposeful, and include transparent instruction on appropriate AI use. We can teach students to critically analyze AI-assisted work and promote authentic learning as we do. To help us achieve these aims, here are five steps to update assignments in ways that help students learn the skills they need while fostering a culture of academic honesty.
Step 1: Take a critical look at your current syllabus.
If you’re like me, you have tried-and-true activities and assignments that you’ve relied on for many years. Some of these may be updateable, but others may just need to go. If AI can easily complete a task (try running your instructions through a tool like ChatGPT to find out), maybe it’s no longer a relevant measure of authentic learning. Also, following the remaining steps below will likely mean that you add new instructional practices (like modelling AI use) and new components of the assignments you update (like a reflection on how AI helped or hindered learning). Make time for these important new course elements by getting rid of outdated tasks.
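If you have a long list of activities to audit, that quick check can also be scripted. Below is a minimal sketch, assuming the openai Python package (v1.x) and an API key in your environment; the model name and assignment text are placeholders rather than recommendations.

```python
# Minimal sketch: paste an assignment's instructions into a chat model and see
# how well it can complete the task unaided. Assumes the openai package and an
# OPENAI_API_KEY environment variable; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assignment_instructions = """
Write a 500-word reflection connecting this week's reading on cooperative
learning to an experience from your own education.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use any model you have access to
    messages=[{"role": "user", "content": assignment_instructions}],
)

# If this output would earn a passing grade, the task may need rethinking.
print(response.choices[0].message.content)
```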
Step 2: Consider whether and how students should use AI on the assignments you keep.
Remember, students want to know exactly what is appropriate for AI use in your class. A helpful tool for this process is the 5-level AI Assessment Scale (AIAS). The levels are No AI, AI Planning, AI Collaboration, Full AI, and AI Exploration. Each one identifies and sanctions different ways students can use AI in appropriate and meaningful ways to support their learning.
Leon Furze, one of the creators of the AIAS, presents it as a tool that helps update our assignments and start conversations with our students. He recommends using this framework to guide your thinking: break down each assignment, project, or discussion board activity into discrete steps, then decide for which parts of the process AI use is appropriate. Providing transparent guidance, guidance that may vary for different parts of the project, is what students need right now.
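To make that breakdown concrete, here is a small planning sketch in Python. The assignment stages, chosen levels, and rationales are invented examples for one hypothetical essay assignment, not part of the AIAS itself.

```python
# Sketch: recording an AIAS level for each stage of one hypothetical assignment.
# The stages, levels, and rationales below are illustrative placeholders.

AIAS_LEVELS = {
    1: "No AI",
    2: "AI Planning",
    3: "AI Collaboration",
    4: "Full AI",
    5: "AI Exploration",
}

research_essay_plan = [
    {"stage": "Brainstorming and outlining", "aias_level": 2,
     "rationale": "AI may suggest angles; students choose and justify one."},
    {"stage": "Drafting body paragraphs", "aias_level": 1,
     "rationale": "Students practice constructing their own arguments."},
    {"stage": "Revising for clarity", "aias_level": 3,
     "rationale": "AI feedback is allowed, but every change must be disclosed."},
]

for step in research_essay_plan:
    print(f"{step['stage']}: Level {step['aias_level']} "
          f"({AIAS_LEVELS[step['aias_level']]}) - {step['rationale']}")
```

A one-page handout generated from a structure like this can double as the transparent guidance students are asking for.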
Step 3: Carve out time to discuss and model your expectations.
Many students are not sure what is acceptable in this current moment. What better way to help them feel confident while developing the AI skills they need than modelling what you’re looking for? Take class time or record a video for your online class to teach your students what you expect them to do with AI for each assignment, what not to do, and what you’ll be looking for in their finished product.
As you approach this aspect of instruction, take heart. I don’t think every instructor needs to be an AI expert, but I do think we’re doing our students a disservice if we’re not teaching them how to be AI Whisperers, a concept I gleaned from Ethan Mollick’s 2024 book, Co-Intelligence. A new kind of expertise is emerging, he argues: the skilled and effective use of AI in every field. As disciplinary experts, you know what kind of AI use will help students learn your material and what will disempower them as learners. Let’s allocate time to help students develop AI literacy while maintaining their agency.
Step 4: Ask students to disclose their AI use, as noted above.
The student panelists showed palpable relief when they described instructors who simply had them cite their use of AI. “Encourage your students to be transparent about their AI use,” one student, a finance major, said. As she talked about how simple it was to add a line at the end of the assignment such as, “ChatGPT generated the outline for this essay,” the anxiety that had previously clouded her expression disappeared.
One approach is to use the AI Disclosure (AID) framework to document how students used AI; alternatively, students can add an appendix to each assignment, or comments or footnotes, to make transparent what they wrote and what AI wrote. Get students to declare their AI use and, even better, ask them to reflect on whether that use helped or hindered their learning.
Step 5: If you suspect inappropriate use of AI, don’t accuse students of cheating.
Instead, have a conversation with them. Recall that a primary goal of the AIAS is to facilitate discussions about AI use. Imagine that as a class, you agreed that Level 2 (AI Planning) was appropriate for a particular project. If a student submits work that seems to have been completed at Level 4 (Full AI), refer back to the framework to facilitate a conversation about your perception of their work and their process. As AIAS co-creator Leon Furze points out, now you can get away from wondering whether a student cheated with AI and instead ask, “can I make a valid judgement of this student’s capabilities [whether they have used AI or not]?”
And as argued by the authors of a new book titled The Opposite of Cheating: Teaching for Integrity in the Age of AI, this takes moral accusation out of the conversation, reduces the adversarial tone, and enables students to reflect on and hopefully learn from the experience. You can also decide together what actions should take place next: whether the student resubmits the assignment in accordance with your AI-usage guidance, loses the opportunity to earn a grade for that assignment, or whatever else you and your student decide is appropriate in that situation.
Students want to know how to use AI well. “If we don’t teach students how to use AI responsibly, they’re going to be poorly equipped for the workforce,” said the third student panelist, a computer science major. Let’s update our assignments to facilitate meaningful learning of our content, the skills students need in an AI age, and ethical AI use grounded in integrity, so students are equipped for the workplace they’ll enter. In doing so, we’re helping to develop engaged citizens who think critically about the information they encounter. I can’t think of more important work to be engaged in at this moment.
Flower Darby is an associate director of the Teaching for Learning Center at the University of Missouri, a co-author of The Norton Guide to Equity-Minded Teaching (2023), and the lead author, with James M. Lang, of Small Teaching Online: Applying Learning Science in Online Classes (2019).
References
Bertram Gallant, Tricia, and David A. Rettinger. 2025. The Opposite of Cheating: Teaching for Integrity in the Age of AI. University of Oklahoma Press.
Joe Rogan is no fan of my work, obviously. The chart-topping conservative influencer famously insists that universities are “cult camps” where professors like me indoctrinate students with leftist ideas. Typically, I do not worry about my haters, but increasingly it seems that if I want to create a meaningful learning experience, I need to.
I teach first-year undergraduate humanities electives. Like most universities, ours offers large-format 200-student lectures for training in academic writing and critical theory. This would be the “indoctrination” in question, as I introduce students to canonical thinkers from Karl Marx to Sylvia Wynter. These electives are degree requirements, snaring students who might intentionally avoid liberal arts in an otherwise professional degree.
In the current political climate, many of my students come to the classroom with their minds made up based on authorities who directly undermine my scholarship and profession. Rogan is just one of many conservative anti-intellectuals who regularly attack liberal, feminist, social justice–oriented biases in university education. The result is a polarized atmosphere antithetical to learning: a tangibly mistrustful, sometimes even resentful classroom.
Although only a small handful of students typically adhere to anti-intellectual doctrine, this group undermines my authority with risky jokes in the classroom and intense criticism in student back channels (as reported by concerned classmates). This causes undecided students to falter in their trust of my authority, while students who do not share those views nervously censor their contributions.
Ironically, my dissenting students often do not recognize that I am interested in their views. I am convinced that the way out of this explosive historic moment is through rigorous discussion in educational forums. This is why I teach: like any academic, I love sincere inquiry, debate and critical engagement, and I was a rabble-rouser myself as a student. But the current classroom mood is less debate and more deadlock.
So I spent this year brainstorming with my students to build creative assignments that spin resentment into passion, even passion opposed to my own views, while welcoming self-directed research and encouraging deeply engaged reading. I offer these assignments with the goal of bringing disaffected, anxious students back to a love of learning and democratized engagement. This is a work in progress, and I welcome suggestions.
Turn Tensions Into Data: This introductory exercise eases students into an atmosphere of open collegial discussion. Surveys or anonymous polls quantify disagreements, and then we analyze the results as a class.
Example: Class Belief Inventory—anonymously poll students on hot-button questions (e.g., “Is systemic racism a major problem?”). The objective here would be to compare the class’s responses to national survey data. Potential discussion topics: Why might differences exist? What shapes our perceptions?
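For instructors who want to tabulate that class-versus-nation comparison quickly, a short script like the following can lay out the gaps for discussion; every number here is invented and would be replaced by your own anonymous poll export and a published national survey.

```python
# Sketch: comparing anonymous class-poll results to national survey figures.
# All percentages below are made up for illustration.

class_poll = {"Agree": 62, "Disagree": 25, "Unsure": 13}        # percent of class
national_survey = {"Agree": 49, "Disagree": 41, "Unsure": 10}   # percent nationally

print(f"{'Response':<10}{'Class %':>10}{'National %':>12}{'Gap':>8}")
for answer in class_poll:
    gap = class_poll[answer] - national_survey[answer]
    print(f"{answer:<10}{class_poll[answer]:>10}{national_survey[answer]:>12}{gap:>+8}")
```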
Hostile Influencers as Primary Sources: This in-class activity treats figures like Rogan or Jordan Peterson not as adversaries but as authors of texts to analyze, to disarm defensiveness and position students as critical investigators.
Example: “Compare/contrast an episode of [X podcast] with a peer-reviewed article on the same topic. How do their arguments differ in structure, evidence and rhetoric? Whom do you find more persuasive, and why?”
Gamifying Ideological Tensions: This class activity turns assigned readings into structured, rule-bound games where students must defend positions they don’t personally hold.
Example: Role-Play a Summit—Students are assigned roles (e.g., Jordan Peterson, bell hooks, climate scientist, TikTok influencer) and must collaborate to solve a fictional problem (e.g., redesigning a curriculum). They must cite course readings to justify their choices.
Therapy for Arguments: This fun early activity teaches students to diagnose weak arguments—whether from Rogan, a feminist theorist or you—using principles of logic.
Example: Argument Autopsy—Students dissect a viral social media post, podcast clip or course reading. Identify logical fallacies, cherry-picked evidence or unstated assumptions. Reward students for critiquing all sides.
Intellectual Sleuthing: This is a scaffolded midterm writing assignment building up to a final essay. Ask students to trace the origins of their favorite influencers’ ideas. Many anti-establishment figures borrow from (or distort) academic theories—show students how to connect the dots.
Example: Genealogy of an Idea—Pick a claim from a podcast (e.g., “universities indoctrinate students”). Research its history: When was this idea popular in mainstream news or on social media? Are there any institutes, think tanks, influencers or politicians associated with this idea? What are the stated missions and goals of those sources? Where do they get their funding? Which academics agree or disagree, and why?
Leverage “Forbidden Topics” as Case Studies: If students resent “liberal bias,” lean into it: make bias itself the subject of analysis. This might work as a discussion prompt for tutorials or think-pair-share group work.
Example: “Is This Reading Biased?”—Assign a short text students might call “woke” (e.g., feminist theory) and a countertext (e.g., Peterson’s critique of postmodernism). Have students evaluate both using a rubric: What counts as bias? Is objectivity possible? How do they define “truth”?
Choose-Your-Own-Adventure Assignments: The final essay assignment gives students agency to explore topics they care about, even if they critique my field. Clear guardrails are important here to ensure rigor.
Example: Passion Project: Students design a research question related to the course—even if it challenges the course’s assumptions. They must engage with three or more course texts and two or more outside sources, such as favorite influencers or authorities, even those who oppose course themes.
Red Team vs. Blue Team: For essays, students submit two versions: one arguing their personal view and one arguing the counterpoint. Grading is based on how well they engage evidence, not their stance.
Elisha Lim is an assistant professor of the technological humanities at York University in Toronto. They used generative AI tools to assist with the editing of this piece.
While many of our conversations have focused on what generative AI means for student assignments and learning outcomes, there’s another question faculty are asking—often individually and quietly: How can we leverage AI in our own academic and administrative work? And more importantly, should we?
The answer, I believe, lies in using AI to help clear space for the work only we can do—the collaboration, connection, and critical guidance that makes education transformative.
That doesn’t mean that we simply use AI as a crutch for answering emails or summarizing meetings. In fact, I believe the true promise of AI comes from using it, in Ethan Mollick’s words, as a “genuine intellectual partner,” one that can enhance classroom discussions, assist with creating engaging instructional materials, and even help develop sophisticated problem sets or simulations that previously required extensive preparation time. As Mollick says, “the focus needs to move from task automation to capability augmentation.”
AI offers many potential applications for faculty work. While faculty should continue to prioritize the importance of maintaining human connection, empathy, and support in our teaching practice, we need to consider other ways AI can augment our work. Perhaps one way is in the design of our courses, the assignments and activities that chart student progress across content and outcomes. But rather than asking AI to develop prompts or notes for us, we can use AI as a tool to help develop our work in surprising ways.
Works in Theory, Wobbles in Practice
We’ve all fallen in love with that one key discussion question or written assignment prompt that just fizzles in the classroom. Despite our best intentions, we may not provide enough information, or we fail to anticipate a blind spot that leads students down fruitless paths. One of the challenges of course design is that all our work can seem perfectly clear and effective when we are knee-deep in the design process, but everything somehow falls apart when deployed in the wild. From simple misunderstandings to complex misconceptions, these issues typically don’t reveal themselves until we see actual student work—often when it’s too late to prevent frustration.
Bridging this gap requires iterative refinement—recognizing that what works in theory or in controlled conditions needs real-world testing, adaptation, and continuous improvement. It’s not just about designing something that works in the lab but ensuring our designs are resilient, adaptable, and responsive enough to thrive in the wild.
While there’s no substitute for real-world testing, I began wondering if AI could help with this iterative refinement. I didn’t want AI to refine or tweak my prompts. I wanted to see if I could task AI with modelling hundreds of student responses to my prompts in the hope that this process might yield the kind of insight I was too close to see.
The Process: AI-Assisted Assignment Stress Testing
After experimenting with systems like Claude and ChatGPT, I’ve discovered they can effectively analyze and refine writing prompts through the creation of simulated student responses. The basic approach works like this: first, provide the AI with information about your course and key characteristics of your student population; then share the assignment prompt. The AI internally generates multiple simulated student responses across different skill levels and then provides a comprehensive analysis identifying potential issues and opportunities.
You might specify that the analysis include common misinterpretations students might make or any structural or organizational challenges in the prompt. But the AI can also identify content development patterns and potential issues as well as population-specific concerns based on your student demographics. Finally, the AI can even suggest refinements to the prompt.
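As a rough illustration of how this workflow can be scripted, here is a minimal sketch assuming the openai Python package (v1.x) and an API key in the environment; the course description, assignment prompt, and model name are placeholders, not my actual course materials.

```python
# Sketch of the stress-testing workflow: share course context and an assignment
# prompt with a model, ask it to simulate a range of student responses
# internally, and report only the analysis. Assumes the openai package and an
# OPENAI_API_KEY environment variable; all text and the model name are placeholders.
from openai import OpenAI

client = OpenAI()

course_context = (
    "First-year writing course at an online university; many students are "
    "working adults and career-changers returning to school."
)

assignment_prompt = (
    "Write a personal narrative connecting a significant life experience "
    "to your choice of major and your academic goals."
)

analysis_request = f"""
You are helping an instructor stress-test an assignment before students see it.

Course context: {course_context}

Assignment prompt: {assignment_prompt}

Internally simulate 20 student responses across a range of skill levels and
backgrounds, then report (without reproducing the simulated essays):
1. Likely misinterpretations of the instructions.
2. Structural or organizational challenges students may hit.
3. Population-specific concerns given the course context.
4. Suggested refinements to the prompt.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use any model you have access to
    messages=[{"role": "user", "content": analysis_request}],
)
print(response.choices[0].message.content)
```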
Seeing What You’re Not Seeing
To test this approach, I uploaded a personal narrative prompt that asks students to connect their life experiences to their academic goals—a common assignment in first-year writing courses.
The AI analysis revealed several blind spots in my prompt design. For instance, I hadn’t considered how non-traditional students might struggle with “choice of major” language, since many are career-changers. The AI modeled responses also revealed that students might have difficulty transitioning between personal narrative and academic analysis sections. Most valuable was seeing how different student populations might interpret the same instructions. Career-changers might focus too heavily on work experiences, while others might struggle with how much personal information to share. These insights allowed me to add clarifying language and support materials before any real students encountered these challenges.
The entire process took about 30 minutes but potentially saved hours of student confusion and faculty clarification emails. Of course, AI responses aren’t identical to human student responses, and we should be cautious about viewing AI as an infallible expert or source of absolute truth. But used as an additional lens when developing assignments, this approach can grant course designers a different perspective, one that triggers valuable insights and potentially reduces workload.
This process allowed me to develop targeted support materials for predicted problem areas before students struggle, building proactive scaffolding into course design from the beginning. And by sharing insights gained through AI analysis, departments could collectively improve assignment design practices—particularly valuable for multi-section courses where consistency matters. Over time, we could build a practical library of “what works” that faculty could draw from, including analyses explaining why certain assignments succeed with particular student populations and learning objectives.
AI-assisted assignment analysis offers a promising tool that respects our expertise while expanding our ability to anticipate student needs. While the technology isn’t perfect and will never replace insights gained from direct student interaction, it provides a valuable perspective that helps identify blind spots before students encounter them. This represents just one way thoughtfully implemented AI can help us do more of what matters: creating meaningful learning experiences. By using AI for the predictive work of assignment design, we free more time and energy for the deeply human work of guiding and connecting with our students—the work that only we can do.
Dr. Nathan Pritts is a leader in higher education, specializing in faculty development, instructional innovation, and the integration of emerging technologies in teaching and learning. As Professor and Program Chair for First Year Writing at the University of Arizona Global Campus, he has spearheaded initiatives in strategic implementation of online learning technologies, comprehensive faculty training programs, and the creation of scalable interventions to support both faculty and students in online environments. As author and researcher, Dr. Pritts has published widely on topics including digital pedagogy, AI-enhanced curriculum design, assessment strategies, and the future of higher education.
Let’s assume we live in a world in which students are going to use ChatGPT or similar tools on their assignments. (Because we do.) Let’s also assume that when those students start their jobs, they will continue to use ChatGPT or similar tools to complete their jobs. (Because they will.) Is this the end of teaching as we know it? Is this the end of education as we know it? Will we have to accept that robots will think for everyone in the future?
No. In this post, I’m going to show you one easy solution: if we assume students will use generative AI, we can incorporate that use directly into our assessments. Keep in mind this is just a sketch using naked ChatGPT. If we add some scaffolding through software code, we can do better (a sketch of such scaffolding appears after the prompt below). But we can do surprisingly well right now with what we have.
The case study
Suppose I’m teaching a college government class. Here are my goals:
I want students to be able to apply legal principles correctly.
I want to generate assignments that require students to employ critical thinking even if they’re using something like ChatGPT.
I want students to learn to use generative AI appropriately.
Let’s throw in a couple more goals just to make it interesting:
I want students to demonstrate competencies.
I want my assignment to support diversity, equity, and inclusion.
Can we do all that?
Yes.
The prompt
I cobbled together a DEI guidelines document from some existing ones I found on the web to create something focused on curricular materials generation. I gave the document to ChatGPT along with the following prompt, which is designed to provide assessment questions that you can’t answer by just copying/pasting them into ChatGPT:
You are instructed to generate 3 fictional legal scenarios assessing the following competency:
Assess whether a case is likely to be heard by the Supreme Court based on an analysis of the Court’s role and selection criteria.
Evaluating Court functions will be consistently required.
For each scenario:
Involve 2-3 parties and a dispute requiring application of the competency.
Make it sufficiently complex so direct questions don’t fully demonstrate understanding.
Generate 1-2 written questions requiring analysis using the competency.
Additionally, construct a rubric table that is appropriate for all generated scenarios to evaluate student responses on:
Mastery of the competency above.
Mastery of the competencies below.
Demonstration of the following additional competencies
Critical Thinking Competencies
Ask probing questions to uncover deeper complexities beneath the surface. Questioning skills are inherently required by the assessment structure.
Critically evaluate source credibility and potential biases when analyzing information. Assessing the AI responses is crucial.
Produce sound arguments and reasoning to justify conclusions.
Written justifications will necessitate this in all cases.
AI Competencies:
Formulate questions and direct dialogue with an AI assistant to extract relevant insights. Strategic probing of the AI agent will be essential.
Efficiently utilize an AI’s knowledge while evaluating its capabilities and limitations. Critically appraising the AI’s responses is key.
Rubric should include levels of performance for each competency (e.g. novice, developing, proficient). Apply the attached diversity, equity, and inclusion guidelines to the content you generate. Incorporate diverse litigants and equity-relevant issues in the cases you generate.
[Note: I’ve added some formatting to make it more legible for humans. The actual prompt doesn’t have bullets and has fewer paragraph breaks.]
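As noted earlier, scaffolding through software code can improve on naked ChatGPT. Here is one minimal sketch of what such scaffolding might look like, assuming the openai Python package (v1.x); the function, model name, and example arguments are illustrative placeholders rather than a finished implementation.

```python
# Sketch of the "scaffolding through software code" mentioned earlier: wrap the
# hand-written prompt in a reusable function so the competency, additional
# competencies, and DEI guidelines can be swapped per course. Assumes the
# openai package; function, model name, and arguments are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_scenarios(competency, extra_competencies, dei_guidelines, n_scenarios=3):
    extras = "; ".join(extra_competencies)
    prompt = f"""
Generate {n_scenarios} fictional legal scenarios assessing the following
competency: {competency}

For each scenario: involve 2-3 parties, make it complex enough that pasting the
questions directly into a chatbot would not demonstrate understanding, and
write 1-2 analysis questions. Then construct one rubric table (novice,
developing, proficient, mastery) covering the competency above plus: {extras}.

Apply these diversity, equity, and inclusion guidelines to everything you
generate: {dei_guidelines}
"""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use any model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(generate_scenarios(
    competency="Assess whether a case is likely to be heard by the Supreme "
               "Court based on the Court's role and selection criteria.",
    extra_competencies=["asking probing questions", "evaluating source credibility",
                        "producing sound arguments",
                        "directing and critically appraising an AI assistant"],
    dei_guidelines="Incorporate diverse litigants and equity-relevant issues.",
))
```

The results that follow, however, came from pasting the prompt above directly into ChatGPT.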
The results
Here’s what ChatGPT gave me back:
Legal Scenario 1: The Case of Interstate Water Rights
Background: The states of Aridonia and Riverbend are locked in a legal battle over the rights to water from the Silver River, which flows from the mountains in Aridonia and through the agricultural heartlands of Riverbend. Aridonia, facing an unprecedented drought, has constructed a series of dams to conserve water, significantly reducing the flow to Riverbend. Riverbend’s economy relies heavily on agriculture, which is now under threat due to the reduced water supply.
Parties Involved:
The State of Aridonia (Plaintiff)
The State of Riverbend (Defendant)
The Indigenous Tribes of the Silver River Basin (Intervenors) – They claim ancestral rights to the river and are impacted by the actions of both states.
Complexity: The dispute requires an understanding of interstate commerce, environmental law, and indigenous rights.
Questions:
Analyze the potential arguments each party could make to justify their claims to the Silver River’s water. How might these arguments impact the likelihood of the Supreme Court electing to hear this case?
Considering the Supreme Court’s historical approach to interstate disputes and the involvement of indigenous rights, what precedents might influence the Court’s decision to grant certiorari?
Next, I will create the second legal scenario.
Legal Scenario 2: The Case of Digital Privacy and International Data Exchange
Background: GlobalTech Inc., a multinational tech company, is sued by the State of Freedonia on behalf of its citizens for allegedly breaching privacy laws. GlobalTech’s data centers, located in various countries, process and store data from Freedonian users. The lawsuit asserts that GlobalTech’s practices violate Freedonia’s stringent digital privacy laws. However, GlobalTech contends that they are compliant with international agreements on data protection to which Freedonia is a signatory.
Parties Involved:
The State of Freedonia (Plaintiff)
GlobalTech Inc. (Defendant)
International Privacy Watch (Amicus Curiae) – A global coalition advocating for citizens’ digital rights and privacy.
Complexity: This scenario involves understanding international law, digital privacy, and the implications of cross-border data flow.
Questions:
Evaluate the competing interests of national privacy law and international data agreements in the context of this case. How might these factors affect the Supreme Court’s decision to grant a writ of certiorari?
What are the broader implications for digital privacy and international commerce if the Supreme Court decides to hear this case? Consider historical precedents and current legal trends in your analysis.
Now, I’ll proceed to create the third legal scenario.
Legal Scenario 3: The Case of Health Care Access and Religious Exemptions
Background: A new federal regulation mandates that all employers, regardless of their religious beliefs, must include comprehensive reproductive health services in their employee health insurance plans. FaithFirst, a nationwide chain of craft stores owned by a family with strong religious convictions, files a lawsuit claiming that this mandate violates their religious freedom. The case is further complicated by the intervention of HealAll, a non-profit organization that provides health services to underprivileged populations and argues that exemptions undermine public health efforts.
Parties Involved:
FaithFirst (Plaintiff)
The Federal Government (Defendant)
HealAll (Intervenor) – Advocating for universal access to health care.
Complexity: This scenario encompasses the interpretation of religious freedom, federal regulatory power, and the right to health care.
Questions:
Assess the potential constitutional conflicts at play in this case, particularly the tension between religious liberty and federal health care mandates. How could these conflicts influence the Supreme Court’s inclination to review the case?
Examine how the Supreme Court’s previous decisions on similar health care and religious freedom cases might be used as precedents in this scenario. What factors might the Court consider in deciding whether to hear this case?
With the three legal scenarios outlined, it’s time to generate a universal rubric that can be applied to assess student responses for all three scenarios. I will incorporate the DEI guidelines into this rubric to ensure the content is sensitive to diversity, equity, and inclusion.
Universal Rubric for Assessing Legal Scenario Responses
The following rubric applies to responses for all three legal scenarios, ensuring that students are assessed on the mastery of core competencies along with critical thinking and AI engagement. The DEI guidelines are integrated to promote inclusive and diverse perspectives in the analysis.
Mastery of Supreme Court Case Assessment
Novice (1-2): Displays limited insight into the Supreme Court’s selection criteria. Overlooks DEI considerations.
Developing (3-4): Recognizes basic aspects of the Court’s role and criteria. Considers DEI aspects superficially.
Proficient (5-6): Analyzes the Court’s criteria proficiently, integrating DEI considerations into the analysis.
Mastery (7): Provides comprehensive analysis aligned with the Court’s criteria and DEI principles, demonstrating nuanced understanding.
Critical Thinking: Questioning
Novice (1-2): Questions lack depth and do not reflect DEI considerations.
Developing (3-4): Questions show an emerging understanding of complexities with some DEI awareness.
Proficient (5-6): Probes complex issues effectively with questions that incorporate DEI awareness.
Mastery (7): Asks deep, insightful questions that fully embrace DEI complexities within the legal context.
Critical Thinking: Source Evaluation
Novice (1-2): Shows limited ability to evaluate credibility and biases, with minimal DEI consideration.
Developing (3-4): Begins to critique sources for credibility and bias, showing awareness of DEI.
Proficient (5-6): Evaluates sources critically for credibility and biases with a consistent DEI lens.
Mastery (7): Expertly discerns source credibility and biases, thoroughly applying a DEI perspective.
Critical Thinking: Argumentation
Novice (1-2): Constructs basic arguments, lacking DEI perspective and strong justification.
Developing (3-4): Develops arguments that begin to incorporate DEI considerations with moderate justification.
Proficient (5-6): Presents well-reasoned arguments that effectively integrate DEI considerations.
Mastery (7): Articulates complex arguments, richly informed by DEI considerations and robust justification.
AI Competency: Question Formulation
Novice (1-2): Struggles to craft questions that effectively leverage AI insights, with little DEI focus.
Developing (3-4): Formulates questions that occasionally lead to relevant AI insights and consider DEI.
Proficient (5-6): Consistently crafts questions that elicit useful AI insights, mindful of DEI implications.
Mastery (7): Strategically formulates questions for AI that yield comprehensive insights, fully integrating DEI.
AI Competency: AI Utilization
Novice (1-2): Demonstrates limited understanding of AI capabilities and DEI considerations.
Developing (3-4): Shows basic awareness of AI’s capabilities and DEI considerations in utilization.
Proficient (5-6): Utilizes AI effectively, with a clear understanding of its capabilities and DEI considerations.
Mastery (7): Masters AI utilization, skillfully balancing its capabilities with DEI awareness in analysis.
The performance levels for each competency are:
Novice: Limited application and understanding
Developing: Basic to moderate application and understanding with some inconsistencies
Proficient: Consistent application and understanding with minor gaps
Mastery: Exceptional application and understanding with insightful integration
These are challenging assignments. You’d need to precede them with lower-level formative assessments to ensure students grasp the foundational concepts and know what kinds of questions to ask the generative AI.
Which is exactly what you should be doing before any challenging competency application assessment anyway.
I asked ChatGPT to generate three examples, but I could just as easily have asked it to generate 300. In other words, you can create a test bank if that’s what you need, or simply generate new scenarios on demand every term. You’d still want a human expert to tweak the rubric and review each assignment; this kind of generation is more complex and error-prone than an algorithmic math problem generator.
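If a test bank is what you want, the generation loop is easy to script. Here is a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; the model name, prompt text, and scenario count are placeholders rather than a recommendation, and every generated scenario still needs that human review.

```python
# Minimal sketch: batch-generating scenarios for a test bank.
# Assumes the OpenAI Python SDK ("pip install openai") and an OPENAI_API_KEY
# environment variable. The model name and prompt text below are placeholders.
from openai import OpenAI

client = OpenAI()

SCENARIO_PROMPT = """You are a law school instructor. Write one original legal
scenario for assessing whether students can analyze the likelihood that the
Supreme Court would grant certiorari. Include: background, parties involved,
a note on complexity, and two analysis questions. Incorporate diverse
litigants and equity-relevant issues."""


def generate_test_bank(n: int = 10) -> list[str]:
    """Ask the model for n scenarios, one API call per scenario."""
    scenarios = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever model you have access to
            messages=[{"role": "user", "content": SCENARIO_PROMPT}],
        )
        scenarios.append(response.choices[0].message.content)
    return scenarios


if __name__ == "__main__":
    for i, scenario in enumerate(generate_test_bank(3), start=1):
        print(f"--- Scenario {i} ---\n{scenario}\n")
```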
Grading the assignment
The key here is that the assignment students turn in is the ChatGPT transcript. (You can optionally have them submit their final analysis work product separately.) The students are, in effect, showing their work. They can’t use ChatGPT to “cheat” because (1) ChatGPT is part of the assignment, and (2) the assignment is designed such that students can’t just plug in the questions and have the AI give them the answer. Their ability to analyze the problem using the new tool is what you are evaluating.
You could use generative AI as a TA here, too. Give it the assignment and the rubric, then write a prompt asking it to suggest scores and cite evidence from the student’s work. You can decide how heavily to lean on the software’s advice, but at least the advice is available to you.
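If you would rather script that TA pass than paste everything into a chat window, a rough sketch follows, again assuming the OpenAI Python SDK; rubric.txt and transcript.txt are placeholder file names, the model name is an assumption, and the output is advice for you to accept or override, not a grade.

```python
# Rough sketch: asking a model to suggest rubric scores with cited evidence.
# Assumes the OpenAI Python SDK; rubric.txt and transcript.txt are placeholder
# files holding your rubric and the student's ChatGPT transcript.
from openai import OpenAI

client = OpenAI()


def suggest_scores(rubric: str, transcript: str) -> str:
    """Return the model's suggested performance levels with quoted evidence."""
    grading_prompt = (
        "You are a teaching assistant. Using the rubric below, suggest a "
        "performance level (novice, developing, proficient, or mastery) for "
        "each competency, and quote specific evidence from the student's "
        "transcript to justify each suggestion.\n\n"
        f"RUBRIC:\n{rubric}\n\nSTUDENT TRANSCRIPT:\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": grading_prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("rubric.txt") as f:
        rubric = f.read()
    with open("transcript.txt") as f:
        transcript = f.read()
    print(suggest_scores(rubric, transcript))
```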
Learning to think like a lawyer (or whatever)
Generative AI does not have to kill critical thinking skills. Quite the opposite. These assignments are much farther up on Bloom’s taxonomy than multiple-choice questions and such. Plus, they get students to show their thought work.
In fact, these scenarios are highly reminiscent of how I use generative AI every day. Here is a sampling of tasks I’ve performed over the last several months using ChatGPT and other generative AI that I probably couldn’t have—and definitely wouldn’t have—performed without them:
Analyzed the five-year performance of a business based on its tax returns and developed benchmarks to evaluate the quality of its growth
Cloned a GitHub source code repository, installed Docker and other needed tools on my laptop, and ran the Docker image locally
Analyzed and hedged the risk to my retirement fund portfolio based on technical and economic indicators
Wrote the generative AI prompt that is the centerpiece of this post
None of these scenarios were “one and done,” where I asked the question and got the answer I wanted. In all cases, I had to think of the right question, test different variations, ask follow-up questions, and tease out implications using generative AI as a partner. I didn’t have to learn accounting or business analysis, but I did have to know enough about how accountants and analysts think to ask the right questions, draw inferences from the answers, and then formulate follow-up questions.
To score well on these assessments, a student has to demonstrate both an understanding of the legal principles and the ability to think through complex problems.
Critical thinking competencies
Ethan Mollick, a professor at the Wharton School of Business who writes prolifically and insightfully about generative AI, wrote an excellent analogy for how to think about these tools:
The thing about LLMs that make them unintuitive is that analogizing them to having a science fiction AI is less useful than thinking of them as infinite copies of some guy named Steve, a first year grad student who is great at coding & art and is widely-read, but makes up stuff based on what he remembers when he is pressed.
Asking AI to do things an incredibly fast Steve couldn’t do is going to lead to disappointment, but there is a lot of value in Steve-on-demand.
This is a great analogy. When I was analyzing the tax returns of the business, I didn’t have to understand all the line items. But I did have to know how to ask Steve for the important information. Steve doesn’t understand all the intricacies of this business, its context, or my purpose. I could explain these things to him, but he’d still just be Steve. He has limits. I had to ask him the right questions. I had to provide relevant information that wasn’t on the internet and that Steve couldn’t know about. I used Steve the way I would use a good accountant whose help I need to analyze the overall quality of a business.
Coming up with benchmarks to measure the business against its industry was even more challenging because the macroeconomic data I needed was not readily available. I had to gather it from various sources, evaluate the quality of these sources, come up with a relevant metric we could estimate, and apply it to the business in question.
In other words, I had to understand accounting and economics enough to ask an accountant and an economist the right questions and apply their answers to my complex problem. I also had to use critical thinking skills. Steve could help me with these challenges, but I ultimately had to think through the problem to ask Steve for the kind of help he could give me.
When you’re teaching students using a generative AI like ChatGPT, you should be teaching them how to work with Steve. And as bright as Steve may be, your student still has much she can contribute to the team.
Generative AI competencies
Suppose you have a circle of intelligent friends. Steve is brilliant. He has a mind like an engineer, which can be good or bad. Sometimes, he assumes you know more than you do or gives you too short an answer to be helpful. Also, he’s been focused night and day on his dissertation for the last two years and doesn’t know what’s been happening in the real world lately. He’ll do a quick internet search for you if it helps the conversation, but he’s not tuned in.
Your friend Claude thinks like a Classics graduate student. He’s philosophical. He pays close attention to the nuances of your question and tends to give longer answers. He also has a longer attention span. He’s the kind of friend you talk with late into the night about things. He’s somewhat more aware of current events but is also a bit tuned out of the latest happenings. He can be analytical, but he’s more of a word guy than Steve.
Then there’s your friend Anna. Anna Bard. She’s not quite as sharp as either Steve or Claude, but, as an international finance graduate student, she keeps up with everything that’s happening right now. If you need an in-depth conversation about anything that’s happened in the last two years, Anna is often the person to go to.
Also, since all of these friends are young academics in training, they’re not very good at saying “I don’t know” or “I’m not sure.” They’re supposed to be the smartest people in the room, and they very often are, so they’re sometimes not very aware of their own limitations. All three of my friends have “remembered” studies or other citations that don’t exist.
And each has their quirks. Claude has a strong sense of ethics, which can be good and bad. I once asked him to modify a chapter of an OER book for me. I gave him the front matter so that he could see the Creative Commons licensing was there. He told me he couldn’t do the work unless he could see the whole book to verify that it was ethically OK to modify the content.
I told him, “Claude, that book is 700 pages. Even you don’t have the attention span to read that much.”
He told me, “You’re right. In that case, I’m sorry, but I can’t help you.”
So I took the chapter to Steve, who had no ethical qualms at all but only skimmed the chapter and lost interest about halfway through my project.
When I do my work, I have to figure out which of my AI colleagues can help me and when to trust them. For the business model analysis, Steve answered most of my questions, but I had to get him some information from my friends who haven’t been locked in the library for the past two years. I asked both Anna and Claude; their analyses differed somewhat, but both were well-reasoned. I had to do some of my own Googling to help me synthesize my two friends’ analyses, develop my own opinion, and bring it back to Steve so he could help me finish the work.
For the software project, surprisingly, Steve was useless. He assumed I knew more than I did despite my asking him several times to simplify and slow down. Also, the software had changed since he last looked at it. While he tried to make up for it by saying, “Look for a menu item labeled something like ‘X’ or ‘Y’,” he just couldn’t walk me through it. Anna, on the other hand, did a superb job. She knew the latest versions of all the software. She could adjust when I had trouble and needed some extra explanation. While I wouldn’t have guessed that Anna is the better co-worker for that type of task, I am learning how to get the most out of my team.
For the design of the prompt at the heart of this post, I went to Claude first to think through the nuances of the competency and the task. Then, I brought the summary I created with Claude to Steve, who sharpened it up and constructed the prompt. And yet, it still could use improvement. I can ask my friends for more help, but I will need to think through what to ask them.
My retirement portfolio analysis was 90% Anna’s work since she’s been following the market and economic conditions. I asked Steve to give me a second opinion on bits of her analytic approach. But mostly I relied on Anna.
We often say that we must teach students how to collaborate in teams since they will probably have to collaborate in their jobs. Teaching students how to use generative AI models is an overlapping skill. And it’s only going to get more critical as models proliferate.
I have a model called Mistral running on my laptop right now. That’s right: it’s running locally, with no internet connection required. I don’t need to share my data with some big cloud company, and I don’t need to pay for usage.
My subjective experience is that Mistral is generally more competent than GPT-3 but not as smart as GPT-3.5 Turbo. However, according to one calculation, Mistral is 187 times cheaper to run than GPT-4. It’s also relatively easy and affordable to fine-tune, which is a bit like sending her out to earn a MicroMasters in a particular subject.
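For the curious, here is roughly what running a model locally looks like in practice. This is a minimal sketch assuming the llama-cpp-python bindings and a quantized Mistral 7B weights file already downloaded to disk; the file name below is a placeholder for whichever build you use.

```python
# Minimal local-inference sketch: once the weights are on disk, no internet
# connection or cloud API is needed. Assumes llama-cpp-python
# ("pip install llama-cpp-python") and a quantized Mistral 7B GGUF file;
# the model_path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,  # context window size in tokens
)

output = llm(
    "In two sentences, explain why a building inspector might reject an "
    "unfamiliar net-zero construction technique and what documentation could help.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```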
Let’s suppose I’m a building site engineer for net-zero buildings in Nova Scotia. I have to know all the details of the building codes at the municipal, township, provincial, and national levels that apply to any given project. Since I’m using new building technologies and techniques, I may have to think through how to get a particular approach accepted by the local building inspector. Or find an alternative approach. And very often, I’ll be out in the field without any internet connection. Mistral may not be as smart at questions about macroeconomics or software development as Steve, Claude, and Anna, but she’s smart enough to help me with my job.
If I were running that construction company, I would hire Mistral over the others and pay for her MicroMasters. So I have to know how to evaluate her against the other synthetic employees I could hire. Choosing Steve would be like hiring a Harvard-educated, remote-working external consultant. That’s not what I need.
Fear not
Personally speaking, my daily use of generative AI hasn’t made me dumber or lazier. Sure, it’s saved me a lot of work. But it’s also enabled me to do work that was beyond my reach before. It feels a little like when Google first came out. If I’m curious about something, I can explore it instantly, any time I want, and go as deep as I want.
In fact, generative AI has made me a better learner because I’m fearless now. “Can’t” isn’t a viable starting assumption anymore. “Oh, I can’t analyze tax returns.” That answer doesn’t cut it when I have an Ivy League accounting MBA student available to me at all times. I need to know which financial questions to ask and what to do with the answers. But if I don’t at least try to solve a problem that’s bugging me, I feel like I’m copping out. I almost can’t not try to figure it out. The question won’t leave me alone.
Isn’t that what we want learning to feel like all the time?
I put this infographic together to help the many instructors who are struggling with this issue as they teach and try to keep students from using AI when they’re not supposed to. Be sure to take every opportunity to help students build AI literacy, even when telling them that they won’t be able to use AI for a particular assignment or evaluation.
You, as the instructor, are the subject matter expert and must be the one deciding how AI will be used in your classroom and for your assignments and evaluations. For some assignments, AI may not be the right tool because you’re trying to help students develop mastery of a skill and learn what “right” looks like on their own. Be sure to explain that reasoning fully, so students understand why they can or cannot use AI.
Our university uses Canvas as our learning management platform, and I am a TOTAL fan of its “Message Students Who” feature. With it, I can message students who…
have not completed assignments
have not communicated with me
have an average below a certain threshold
have an average ABOVE a certain threshold
have done a STELLAR job on assignments
and MORE
I use this feature for EVERYTHING! You should definitely try it out! It is free and I’ve convinced other faculty to use it as well!
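If you ever outgrow the gradebook filters, the same idea can be scripted against Canvas’s REST API. The sketch below is a hedged example: the endpoint paths and field names reflect my reading of the Submissions and Conversations APIs, the course and assignment IDs are made up, and you should verify everything against your institution’s Canvas API documentation (and messaging policies) before relying on it.

```python
# Hedged sketch: a scripted "message students who haven't submitted" filter
# using the Canvas REST API. Endpoint paths and field names are assumptions to
# verify against your institution's Canvas API documentation. Requires an API
# token in the CANVAS_API_TOKEN environment variable.
import os

import requests

BASE_URL = "https://yourschool.instructure.com/api/v1"  # placeholder domain
HEADERS = {"Authorization": f"Bearer {os.environ['CANVAS_API_TOKEN']}"}


def students_missing_submission(course_id: int, assignment_id: int) -> list[int]:
    """Return user IDs whose submission for the assignment is still unsubmitted."""
    url = f"{BASE_URL}/courses/{course_id}/assignments/{assignment_id}/submissions"
    submissions = requests.get(url, headers=HEADERS, params={"per_page": 100}).json()
    return [s["user_id"] for s in submissions if s.get("workflow_state") == "unsubmitted"]


def message_students(user_ids: list[int], subject: str, body: str) -> None:
    """Send a Canvas conversation message to the listed students."""
    requests.post(
        f"{BASE_URL}/conversations",
        headers=HEADERS,
        data={"recipients[]": user_ids, "subject": subject, "body": body},
    )


if __name__ == "__main__":
    missing = students_missing_submission(course_id=1234, assignment_id=5678)
    if missing:
        message_students(
            missing,
            subject="Checking in",
            body="I noticed you haven't submitted yet. Let me know how I can help!",
        )
```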