Artificial Intelligence (AI) is driving one of the most significant transformations in academic publishing since the advent of peer review. The use of AI in research manuscripts has risen steadily since 2022, when OpenAI launched ChatGPT. Recent data suggest that up to 22% of computer science papers show signs of large language model (LLM) use, and another study of LLM use in scientific publishing found that 16.9% of peer review texts contained AI-generated content.
The integration of AI tools in every stage of publication has been met with appreciation and criticism in equal measure. Proponents advocate AI as a catalyst that boosts efficiency by automating mundane tasks such as grammar editing, reference formatting, and initial screening. However, critics warn against possible pitfalls in quality and ethical considerations. The question remains: is AI disrupting traditional publishing models, or is it fostering a natural evolution within the academic enterprise?
Rise of AI in Research and Publication
AI has become embedded throughout the research lifecycle, from idea generation to final manuscript submission. This pervasive adoption is reshaping how researchers approach their work; they use applications for grammar checks, plagiarism detection, format compliance, and even assessments of the significance of their research.
Recent studies show that LLMs have an unmatched ability to generate prose and summarize content, enabling researchers to write literature reviews and experimental descriptions more efficiently. In fact, many researchers utilize features of LLMs to assist them with brainstorming, rephrasing, and clarifying their arguments.
Beyond writing assistance, AI-driven platforms can streamline other laborious tasks, such as searching massive databases for relevant citations and assessing the context and reliability of references. Likewise, available text translation and summarization tools support more than 30 languages, a feature that breaks down linguistic barriers and enables international research collaboration.
One perspective piece published in PLOS Biology notes: “A primary reason that science has not yet become fully multilingual is that translation can be slow and costly. Artificial intelligence (AI), however, may finally allow us to overcome this problem, as it can provide useful, often free or affordable, support in language editing and translation”.
The integration of AI tools in research and publication has fundamentally shifted research workflows. Automation technologies have quickened the pace of writing and made academic writing and publishing more accessible.
Ethical Implications and Risks
The increased efficiency and renewed research approach also come with significant ethical concerns. AI systems operate by learning patterns in existing data and can amplify hidden biases. If an AI tool is trained on past editorial outcomes, it may score submissions from well-known institutions or English-speaking authors more highly.
Stanford researchers found that even when non-native English researchers use LLMs to refine their submissions, peer reviewers still show bias against them. Another study revealed that AI text-detection tools often misidentify non-native English writing as machine-generated. In other words, if there are two authors with equal merits, they may face unequal scrutiny if one uses slightly different phrasing.
Beyond systemic biases, there is concern that integrating AI into the peer review process can lead to over-reliance on its functionalities. One analysis of peer review highlighted that editors' and reviewers' overdependence on AI-generated suggestions can degrade review quality, not to mention let factual mistakes slip through evaluation.
Even as AI is used for initial screening, the recommendations made by such tools about a manuscript’s quality or reviewer selection may be opaque or even include hallucinated reasoning. In the absence of transparency, it is difficult to identify and correct misjudgments. This accountability challenge can seriously undermine trust in the editorial process.
Perhaps the most concerning risk, however, is AI's potential to create a feedback loop that maintains the status quo. AI systems are trained on existing published literature and evaluate new submissions against it, so they may inadvertently suppress new and innovative ideas that do not fit established patterns.
The Irreplaceable Role of Human Editorial Judgment
Despite these technological advances, the fundamental responsibility for maintaining scientific integrity ultimately rests with human editors and reviewers. Academic publishers serve as gatekeepers of knowledge, shaping what research reaches the broader scientific community and, by extension, informing public understanding and policy decisions. This role carries immense responsibility. Editorial decisions can accelerate breakthrough discoveries or inadvertently stifle groundbreaking research that challenges conventional thinking.
Rather than delegating these critical judgments to algorithms, the academic publishing community must recognize this moment as a call to elevate editorial standards and practices. Editors must recommit to rigorous, nuanced evaluation that prioritizes scientific merit over efficiency metrics. The stakes, the advancement of human knowledge and the credibility of the scientific enterprise itself, are too high to entrust these decisions to systems that, however sophisticated, lack the contextual understanding, ethical reasoning, and innovative thinking that human expertise provides.
Dr. Su Yeong Kim is a Professor of Human Development and Family Sciences at the University of Texas at Austin. A leading figure in research on immigrant families and adolescent development, she is a Fellow of multiple national psychology associations and is the Editor of the Journal of Research on Adolescence. Dr. Kim has authored and published more than 160 works and received the Distinguished Career Contributions to Research Award from Division 45 of the American Psychological Association. Her research, funded by the National Institutes of Health, National Science Foundation, and other bodies, covers bilingualism, language brokering, and cultural stressors in immigrant-origin youth. Dr. Kim is an enthusiastic mentor and community advocate, as well as a member of UT's Provost's Distinguished Leadership Service Academy.
AI is now embedded in teaching and learning. As educators, how do we help students benefit from AI without slipping into dependency, surface-level work, or ethical misconduct? I’ve found a helpful way to clarify conversations with students (and with myself) by thinking of AI as a teammate (or teammates) with clearly defined roles: the Tasker, the Draftsmith, and the Facilitator.
Imagine a meeting of a team of people: someone handles logistics, someone sketches ideas, someone pushes discussion. Framing AI this way has made it easier to talk about students remaining responsible for how and to what purposes they use AI. My approach draws on recent scholarship about AI reshaping how teams work (Dell’Acqua et al., 2025).
The AI Teammates
The Tasker
The Tasker deals with repetitive or procedural work such as managing our calendars, formatting citations, or cleaning datasets. These tasks are often tedious but essential, and using AI frees time that can be focused on deeper thinking. To use a Tasker, we can set up Robotic Process Automation (RPA) with AI, use embedded AI tools such as Copilot in Microsoft Word or Claude in Excel or Google Sheets, or create what is currently referred to as an LLM "Agent." As Farri & Rosani (2025) describe in their HBR guide to generative AI for managers, we can use AI to lighten the load of procedural chores, affording us the time and energy to engage more with deeper thinking and learning.
Because AI output may contain mistakes, students must learn to identify the level of risk they consider acceptable for a given task before turning to AI, and they must validate its output. The validation required may depend on the risk level, the accuracy and quality needed, and how well the tool's performance is understood.
We approach the Tasker differently than we do the Draftsmith and Facilitator: we can set our RPA or Agent up to run on its own, involving ourselves only until we're sure it works independently and accurately, or we can trigger the Tasker each time we need it. Students must learn to budget sufficient validation time when designing a Tasker.
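As a concrete illustration, here is a minimal sketch of a citation-formatting Tasker. It assumes an OpenAI-style chat-completions client and a placeholder model name, not any particular campus tool; whatever system is actually available, the point is the same: the chore is automated, but a human still validates the result.

```python
# Minimal "Tasker" sketch: reformat a messy reference with an LLM.
# Assumes an OpenAI-style client and a placeholder model name; a human
# still checks the output, per the validation habit described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def format_citation(raw_reference: str, style: str = "APA 7") -> str:
    """Ask the model to reformat one reference without inventing details."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": f"Reformat the reference below into {style}. "
                        "Do not add or invent any bibliographic details."},
            {"role": "user", "content": raw_reference},
        ],
    )
    return response.choices[0].message.content


print(format_citation("borte, nesje, lillejord, barriers to student active "
                      "learning in higher education, teaching in higher ed 2023"))
```

The design choice matters more than the code: the system message forbids the model from adding details, and the output is printed for review rather than pasted straight into a bibliography.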
The Draftsmith
The Draftsmith can spot passive voice in students’ writing and explain why it’s problematic, convert an outline into a draft of a PowerPoint, and produce study tools such as multiple choice or essay questions, podcasts, and videos from their course notes. My engineering students have prompted their Draftsmiths using a prompt library I created specifically for them to help them convert their design clients’ stated needs into project requirements. Bussgang (2025) has pointed out how small steps like these can dramatically boost productivity without sacrificing learning.
While AI may generate text or ideas, the final product must be the student's own (Duffy, 2025). I teach my students that, much to their dismay, they must develop their own writer's voice before using AI and that this process can take years and many writing efforts! Our writer's voice presents us to our bosses, colleagues, and clients, given that much of our communication at work is through email. College is the best time to invest in this effort because it's rare to get such opportunities later.
From students’ comments, I believe they use AI in the Draftsmith role more frequently than the other two roles. Few realize the Draftsmith increases cognitive load when they use it correctly and decreases their learning when they use it incorrectly!
It increases the load because they must provide the AI with significant context for it to provide anything other than banal, common prose that flattens their own voice (Purohit, 2025) or “creative” prose that is meaningless. AI use decreases their learning when they use it to write first and second drafts because writing sharpens our thinking and embeds learned concepts into our long-term memory.
The Facilitator
Whereas the Draftsmith uses AI to generate content, the Facilitator uses AI as a thinking partner (Solis, 2024; HBSP's ManageMentor course, 2025). A student might ask AI to play devil's advocate, suggest counterarguments, or pose probing questions when their paper's argument needs strengthening. AI might simulate a stakeholder whose perspective the student role-plays against to prepare for a meeting, or help them consider how their ideas might land with their peers in a student group. My students have used AI in the Facilitator role to pressure test their design ideas, review their research plans, and prepare to give presentations. When students use the Facilitator, they're reflecting (Harbridge, 2025) and thereby deepening their learning.
To get the best value from AI as a Facilitator, the student must approach it with the expectation that it will take considerable time. The Facilitator provides meaningful interaction only when the student gives it their full attention, asks probing questions, provides significant context, and is patient. They may need to iterate, restating their prompts or providing more information, and to ask the LLM to be candid and critical. They may need to step away from the LLM, perhaps try a different one or come back on another day, when it goes off in the wrong direction.
We have become too used to scanning an LLM’s output. Just as I would not expect a student who came to me for career advice to start scrolling on their phone while I’m talking, I must teach them to give their full attention to the Facilitator.
What I'm suggesting here is not a list of use cases. Instead, I'm giving students an approach to using AI in which they plan before typing into the chat box, see its best use as a Facilitator that partners with them in learning and development, and recognize that validating and reading output will be time-consuming and cognitively demanding.
What This Means for Teaching
I’ve found that when I teach these roles to students before assigning homework, they make better choices regarding when and how to use AI. Here are some tactics I’ve tried that have netted positive results:
Design assignments that invite different roles. For example, ask students to begin with AI as a Tasker (perhaps organizing sources or cleaning a dataset), then shift to using the Draftsmith to help them find passive voice or other syntax issues in a paper, and finally draw on the Facilitator to find the holes in their arguments before submitting a final paper. Intentionally stating the role each assignment calls for and requiring them to state when they’re using the Tasker, Draftsmith, or Facilitator solidifies their understanding that their approach to AI differs by role.
Be clear about where AI helps and where it doesn’t. AI can help with structure, suggestions, and ideation, but it cannot replace revision, critical thinking, or the development of one’s own voice. This must be intentionally taught by including questions requiring students to reflect on what worked and where the AI led them astray in any assignments allowing AI use.
Encourage disclosure. I talk extensively about "Total AI Transparency," assuring students that I will note any use of AI in my communications with them and that I expect the same from them. Trust has broken down, with students and instructors each suspecting the other of using AI when they're not and failing to recognize it when they are. Owning up to my own use and being clear about why I used AI for a given task encourages students to do the same.
Closing Thoughts
We need to steer students away from extremes of either refusing to use AI at all or letting it rob them of learning. By distinguishing the roles of Tasker, Draftsmith, and Facilitator, we clarify our expectations of students. More importantly, we help them become more deliberate in their approach, preparing them to be AI-fluent.
Illysa Izenberg is an Associate Teaching Professor for the Center for Leadership Education in the Whiting School of Engineering at The Johns Hopkins University. She has been teaching management and business ethics to graduate and undergraduate students via face-to-face, online, and blended courses since 2006. Izenberg earned her MBA from the Harvard Graduate School of Business and is the winner of both the Alumni and Pond Excellence in Teaching Awards (2016 and 2020, respectively).
Harbridge, R. (2025). AI for Team Leaders [Online course]. SectionAI.
Harvard Business Publishing. (2025). Help your team harness generative AI (Lesson 1). In Leading with generative AI [Online course]. Harvard ManageMentor.
Purohit, R. (Host). (2025, June 18). The man inside the minds of the people building AGI [Audio podcast episode]. In AI & I. Every.
Solis, B. (2024, November). Train your brain to work creatively with Gen AI. Harvard Business School Publishing.
Distance learning is here to stay. Both students and educators were required to quickly pivot to distance platforms during the COVID-19 pandemic without adequate preparation or training (Basilotta-Gomez-Pablos et al. 2022). Many programs elected to keep distance and hybrid learning options for students, with excellent reason: these platforms improve convenience, access, and inclusivity, and they have gained traction quickly. Now, many institutions have an opportunity, or arguably a responsibility, to provide educators with the support needed to be successful while teaching in distance modalities; concurrently, educators are responsible for seeking resources and training to help them leverage technologies to thrive in this distance space (Crompton and Sykora 2021).
The use of technology in distance platforms is important; educators need to navigate a learning platform but don't necessarily need to become digital technology experts (Crompton and Sykora 2021). So, what is possibly most important? Finding tools that are simple to use and have a high impact. Many educators still report that they do not feel equipped to apply new technologies, even though today's adult learners prefer novel and engaging technology tools (Borte, Nesje, and Lillejord 2023). Therefore, we educators should discuss, introduce, and share these technologies as we discover them and figure this out as a team.
In this article, we will explore how collaborative technologies, specifically collaborative whiteboards, can help bring life to adult learning theories in synchronous learning classrooms.
Students in distance learning platforms often have fewer opportunities to work collaboratively outside of a breakout-room model, which makes it challenging to apply adult learning theories meaningfully. Theoretically speaking, collaboration and social engagement are essential components of adult learning, specifically for creating a sense of community with learners, which can be more challenging in distance learning classes (Barbetta 2023; Shea, Richardson, and Swan 2022). Additionally, engaging students in more cognitively demanding ways, such as creating, analyzing, or developing information, is especially important for achieving higher-level learning outcomes (Vargas et al. 2024). Therefore, if we can strategically find ways to use digital technologies grounded in adult learning theory, we can strengthen the learning experience, enhance learning outcomes, and bridge the gap between theory and content delivery.
Many digital technologies and platforms exist, each bringing opportunities and challenges. Time constraints and cost are common barriers (Borte, Nesje, and Lillejord 2023); therefore, free tools with relatively low preparation are prioritized. The Microsoft Collaborative Whiteboard app, which can be integrated within a Microsoft Teams Meeting space as a screen share, allows students to simultaneously edit a document or template for more naturalistic collaboration. Because it can be integrated into the meeting space, students do not need to download an app, leave the meeting, or switch to a new browser to participate. The template can be prepared before a lesson, giving instructors flexibility in their role and level of scaffolding.
Real-World Examples
A collaborative whiteboard can be applied to various programs, topic areas, or overall aims, and it may also be used as a formative assessment of content delivery. The examples included in this article were used in two separate synchronous occupational therapy courses, one focused on a neuroscience recap of the cranial nerves and one on adapting a therapeutic activity for different levels of traumatic brain injury rehabilitation. Pay special attention to the variance between the levels of learning targeted and how this is reflected in the collaborative efforts of the students.
Basic-level Whiteboard
This synchronous whiteboard was used at the beginning of the class session as a formative assessment of students' understanding of basic concepts. The 24 students were asked to match each cranial nerve's name, number, function, and a representative emoji. This simpler board let students collaboratively reorganize the information at a basic knowledge-attainment level. It was helpful as a formative assessment for adapting the rest of the lesson to the students' level of understanding and their pace and accuracy of completion. In this case, the students moved their pointers to arrange the information simultaneously, with only a few attempts to collaborate verbally. The activity nonetheless required a collaborative effort to complete a joint, goal-oriented task.
Figure: the cranial nerve matching whiteboard before and after the activity.
Advanced-level Whiteboard
The synchronous whiteboard in this example was created for a higher level of application-based learning: 11 students were asked to adapt components of an occupation and treatment modality, baking cookies, to each level of traumatic brain injury recovery. This activity demanded greater problem-solving and, therefore, greater collaboration within the class. It generated many discussions, opportunities for problem-solving, and efforts to reach collective agreement. To close the learning activity, I led a group debrief to discuss each level, provided immediate feedback, and facilitated discussions transferable to other real-world applications. In this example, the directions are in the middle, and the students filled in the sticky notes around the perimeter.
Make it Meaningful
We need buy-in and motivation in higher education. This begins with identifying real-world challenges and ends with reflection. The strategic use of digital technologies, specifically when combined with theoretical reasoning and adult learning principles, can improve classroom experiences through a greater sense of community while targeting higher levels of learning. If educators are intentional with this design, they can increase student engagement and facilitate higher levels of learning in synchronous online classrooms. Sprinkle goal-oriented, real-world-relevant material on top, and you have a recipe for a meaningful outcome.
Looking Ahead
With institutions more widely embracing long-term implementation of distance education programs, faculty require ongoing support, training, and access to resources for sustained confidence and success. Dissemination of practical examples of use by peer educators and texts outlining an approach to identify and develop materials will benefit ongoing efforts to increase competency. In other words, we must keep each other informed on educational tech-gems we find to collectively improve our system.
How to Whiteboard: Quick Start Guide
1. Open Microsoft Teams.
2. Click on the Whiteboard tab in your meeting space.
3. Upload an editable template or create a blank canvas.
4. Once in your meeting, share your screen and select the whiteboard app.
5. Scaffold your task with prompts and questions.
Jaimee Fielder, OTD, OTR is an Assistant Professor of Occupational Therapy at the University of St. Augustine for Health Sciences. She earned her Master of Science in Occupational Therapy from Touro University Nevada, Post-Professional Doctorate in Occupational Therapy from Texas Woman’s University, and is now pursuing a Doctorate in Education from the University of St. Augustine for Health Sciences, with a dissertation focused on educational technology and active learning in higher education classrooms.
References
Barbetta, Patricia M. 2023. “Technologies as Tools to Increase Active Learning During Online Higher-Education Instruction.” Journal of Educational Technology Systems 51(3): 317–339.
Basilotta-Gomez-Pablos, Verónica, María Matarranz, Luis A. Casado-Aranda, and Andreas Otto. 2022. "Teachers' Digital Competencies in Higher Education: A Systematic Literature Review." International Journal of Educational Technology in Higher Education 19(8).
Børte, Kristi, Kjersti Nesje, and Sølvi Lillejord. 2023. "Barriers to Student Active Learning in Higher Education." Teaching in Higher Education 28(3): 597–615.
Crompton, Helen, and Christopher Sykora. 2021. "Developing Instructional Technology Standards for Educators: A Design-Based Research Study." Computers and Education Open 2: 100044.
Shea, Peter, Jennifer Richardson, and Karen Swan. 2022. "Building Bridges to Advance the Community of Inquiry Framework for Online Learning." Educational Psychologist 57(3): 148–161.
Vargas, Jesús H., Edison C. C. Ojeda, Carlos A. C. Zapata, Karen A. A. Flores, Juan A. H. Vela, and Yessenia E. D. Espinoza. 2024. "Analysis of Significant Learning in Higher Education: Usefulness of Fink's Taxonomy: A Systematic Review." Journal of International Crisis and Risk Communication Research 7(S7): 1341.
What if the AI tools we are trying to limit and caution against were actually essential (or at least beneficial) to enhancing the critical thinking skills we are afraid of losing? Since the 2023 proliferation of generative AI, faculty have been inundated with warnings that human intelligence is being eroded by AI. As a result, some faculty have adopted policies banning or severely limiting AI use, driven by concerns about academic integrity and the fear that students will bypass essential learning.
However, banning AI will not prevent students from using it, whether for nefarious or appropriate purposes. Instead, it may deny students a chance to practice and engage with AI in an educational setting where they and faculty can explore its full potential collaboratively. This kind of restrictive thinking is based on two flawed assumptions: that AI cannot support student thinking and that students will only use AI to cheat. The challenge is not to police every use, but to reframe our approach from one of prohibition to one of collaborative partnership.
This shift in perspective allows faculty to systematically integrate AI into courses in a developmentally appropriate manner. By centering policies on learning, we can encourage students to take an active, self-regulated approach to their education. This reframes the focus from dishonesty to autonomous learning, emphasizing academic values while scaffolding meaningful assignments that challenge student thinking.
Integrating AI into the curriculum requires a developmental approach, much like teaching toddlers. Expecting a first-year student to rely on AI for essential skills development is like asking a toddler to color within the lines: it's developmentally inappropriate. Instead, our policies should align with a student's progression. In lower-level courses, the focus must be on foundational skill-building, including learning how to use AI. For upper-level and graduate students, we can empower them to autonomously evaluate AI's role in their learning and whether it is developing or replacing that learning. Meanwhile, mid-level courses can provide a scaffolded transition, with specific instructions on how and when to use AI.
It is important to consider students’ prior knowledge of AI as well. While many are comfortable using technology and AI, they may lack metacognitive awareness of how their use affects learning. Understanding students’ technology usage is crucial when designing courses. Ultimately, use your best judgment—some graduate courses may require a cautious approach, while entry-level courses might benefit from a more permissive policy, especially since students within the same course can have a wide range of AI abilities.
How to Integrate AI Developmentally into Your Courses
Lower-Level Courses: Focus on building foundational skills, which includes guided instruction on how to use AI responsibly. This moves the strategy beyond mere prohibition.
Mid-Level Courses: Use AI as a scaffold where faculty provide specific guidelines on when and how to use the tool, preparing students for greater independence.
Upper-Level/Graduate Courses: Empower students to evaluate AI’s role in their learning. This enables them to become self-regulated learners who make informed decisions about their tools.
Balanced Approach: Make decisions about AI use based on the content being learned and students’ developmental needs.
Now that you have a framework for conceptualizing AI in your courses, here are a few ideas for scaffolding AI so students can practice using the technology and develop cognitive skills.
To introduce AI into your course, create a prompt that asks students to have a conversation with an AI about a concept you will be discussing. This anticipatory set can prime student thinking and encourage them to use AI in a conversational manner, moving beyond simply asking for answers. You can then discuss the AI’s responses—exploring bias, hallucinations, and the depth of its answers—which naturally leads to a conversation about crafting better prompts. This simple, ungraded exercise allows all students to participate, provides valuable practice, and serves a clear learning purpose.
Another example of a nongraded, purposeful use of AI is for providing feedback on learning. When students are writing papers, you can create custom AI agents to provide feedback on different parts of the writing process, from idea development to final submission. These agents can be designed to follow assignment criteria without writing any portion of the work. If students choose to use the agent, you can ask them to share their feedback conversations to assess the quality of the feedback and to write about how they incorporated it to develop their ideas—adding a crucial element of metacognition.
This method is also ideal for reinforcing and mastering skill development. For example, in a counseling course, students can practice articulating confidentiality and its limits to a fictional client created by AI. Once this foundational skill is mastered, the agent can be instructed to demonstrate signs of self-harm, allowing the student to practice assessing client safety and deciding whether to break confidentiality. As with the other examples, you can ask students to share their conversations for your feedback, have them critique their own performance, and provide a rationale for their approach. The agent itself can even provide feedback at the end of the session.
Finally, a higher level of AI integration is to have students create their own custom learning AI agent. By this point, students will have had multiple chances to improve their prompt writing, practice using AI for learning (not bypassing skills), and evaluate how AI supports their development. Creating a personalized study agent would be an ideal way for students to be active in their learning and assess the areas they need to develop. Faculty could provide guidelines and ask students to share how they are using AI positively.
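To make the feedback-agent idea concrete, here is a hedged sketch of the kind of system prompt such an agent might be given. The course name and criteria are hypothetical placeholders rather than a prescription; the essential move is that the agent follows the assignment criteria and refuses to draft the student's text.

```python
# Hypothetical system prompt for a feedback-only agent. The course,
# criteria, and wording are placeholders to adapt to your own assignment.
FEEDBACK_AGENT_SYSTEM_PROMPT = """
You are a writing-feedback assistant for PSY 201.
Assignment criteria: (1) a clear thesis, (2) evidence from at least three
course readings, (3) APA citations, (4) a counterargument addressed.

Rules:
- Comment on how well the draft meets each criterion and ask questions
  that push the student's thinking.
- Never write, rewrite, or complete any sentence of the student's draft.
- End every reply by asking the student what they plan to revise next.
"""
```

Asking students to submit the conversation log alongside the draft preserves the metacognitive element described above.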
How to Scaffold AI into Your Courses
Start with low-stakes, ungraded activities.
Then use AI to provide meaningful, real-time feedback.
Create opportunities for skill reinforcement and mastery.
Finally, empower students to become creators, not just users.
To truly prepare students for life after graduation, institutions and faculty must be willing to provide training on new technologies through a lens of learning. Faculty must move beyond fear and prohibition and engage directly with these tools. By exploring AI's potential, faculty can transform their teaching from a place of restriction to one of collaborative partnership.
With a developmental, scaffolded approach to AI implementation, students can benefit from its potential. If this approach is embraced, the question is no longer "Should we allow AI in the classroom?" The more productive question becomes "How can we teach our students to become discerning and effective creators of a world shaped by AI?"
Michael Kiener, PhD, CRC, is a professor at Maryville University of St. Louis in their Clinical Mental Health Counseling program. For the past 10 years he has coordinated their Scholarship of Teaching and Learning Program, where faculty participate in a yearlong program with a goal of improved student learning. In 2012 and 2024 he received the Outstanding Faculty Award for faculty who best demonstrate excellence in the integration of teaching, scholarship and/or service. He has over thirty publications including a co-authored book on strength-based counseling and journal articles on career decision making, action research, counseling pedagogy, and active and dynamic learning strategies.
Parents who grew up in the ’80s and ’90s know the feeling: you’re listening to your kid’s playlist, and suddenly a song hits you with a wave of uncanny familiarity. Despite the claims by your teen that it is the latest and greatest, you know that it is just a repackaging of one of your favorite tunes from the past. I am noticing a similar trend with generative AI. It is inherently regurgitative: reshaping and repackaging ideas and thoughts that are already out there.
Fears abound as to the future of higher education due to the rise of generative AI. Articles from professors in many different fields predict that AI is going to destroy the college essay or even eliminate the need for professors altogether. Their fears are well founded. Seeing the advances that generative AI has made in just the past few months, I am constantly teetering between immense admiration and abject terror. My chatbot does everything for me, from scheduling how to get my revise-and-resubmit done in three months to planning my wardrobe for the fall semester. I fear becoming too reliant on it. Am I losing myself? Am I turning my ChatGPT into a psychological crutch? And if I am having these thoughts, what effect is generative AI having on my students?
Remix vs. Originality: Girl Talk or Beyonce
Grappling with the strengths and weaknesses of my own AI usage, I feel I have discovered what might be the saving grace of humanity (feel free to nominate me for the Nobel Peace Prize if you wish). As I hinted earlier, AI is more like a DJ remixing the greatest hits of society than an innovative game changer. My ChatGPT is more like Girl Talk (who you have probably never heard of; just ask your AI) than Beyonce (who you most definitely have heard of). Not that there's anything wrong with Girl Talk. Those mashups are amazing and require a special kind of talent, just as navigating AI usage requires a certain balance of skills to create a usable final product. But no matter how many pieces of music from other artists you mash together, you will not eventually turn into a groundbreaking, innovative musician. Think Pat Boone vs. The Beatles, Sha Na Na vs. David Bowie, Milli Vanilli vs. Prince, MC Hammer vs. Lauryn Hill.
What AI Gets Wrong in Writing
As a mathematician and a novelist, I see this glaring weakness in both of these very different disciplines. I’ll start with writing. ChatGPT is especially helpful in coming up with strange character or planet names for my science fiction novels. It will also help me create a disease or something else I need to drive the plot further. And, of course, it can help me find an errant comma or fix a fragmented sentence. But that is about it. If I ask it to write an entire chapter, for example, it will come up with the most boring, derivative, and bland excuse for prose I have ever seen. It will attempt my humor but fail miserably. It sometimes makes my stomach turn, it’s so bad.
A study from the Wharton School found that ChatGPT reduces the diversity of ideas produced in brainstorming, narrowing the scope of novel ideas in the overall output (Meincke, Nave, and Terwiesch 2025). Beyond that, I find that when I use ChatGPT to brainstorm, I typically don't use its suggestions. Those suggestions just spark new ideas and help me come up with something different and more me.
For example, I asked ChatGPT to write a joke about its bad brainstorming habit of recycling the same core ideas over and over again. It said:
Joke: That’s not brainstorming—it’s a lazy mime troupe echoing each other.
That’s lame. I would never say that. But another joke it gave me sparked the music sampling analogy I opened this article with.
In any case, because of generative AI's inability to actually generate anything new, I have hope that the college essay, like the novel, will not die. Over-reliance on AI may indeed debilitate the essay, perhaps putting it on life support and forcing students and faculty to drag its lifeless body across the finish line of graduation. But there is still hope.
I remember one of my favorite English teachers in middle school required that we keep a journal. Each day she asked us to write something, anything in our journal, even if it was only a paragraph or just a sentence. Something about putting pen to paper sparked my creativity. It also sparked a lifelong notebook addiction. And even though I consider myself somewhat of a techie and a huge AI enthusiast, to this day I still use notebooks for the first draft of my novels.
It is clear to me that ChatGPT will never be able to write my novels in my voice. I don’t claim to be a great novelist. I just feel that some of my greatest work hasn’t been written yet. While ChatGPT may be able to write a poem about aardvarks in the style of Robert Frost or a ballad about Evariste Galois in the style of Carole King, it can’t write my next novel, because it doesn’t yet exist. And even when it tries to imitate my voice and my style, predicting what I will write next, it does a poor job.
The Research Paper Dilemma: AI vs. Process
A research paper is inherently different from a creative work of fiction, however. ChatGPT does do a pretty good job of gathering information on a topic from several sources and synthesizing it into a coherent paper. You just have to make sure to check for the errant hallucinated reference. And honestly, when are our students ever going to be asked to write a 15-page research paper on Chaucer without any resources? And if they are, ChatGPT can probably produce that product better than an undergraduate student can. But the process, I would argue, is more important than the final product.
In his Inside Higher Ed article "Writing the Research Paper Slowly," JT Torres recommends a scaffolded process for writing the research paper. This method focuses on the process of writing a paper: exploring and reading sources, taking notes, organizing those notes into a "scientific story," and creating an outline. Teaching students the process of writing the paper instead of focusing on the end product leaves them more confident that they can not only complete the required task but also transfer those skills to another subject. Recognizing these limitations pushed me to rethink how I design assignments.
Using AI in the Classroom
Knowing that generative AI can do some things (but not all things) better than a human has made me a more intentional professor. Now when I create assignments, I ask: Can ChatGPT do this better than an undergraduate student? If so, then what am I really trying to teach? Here are a few strategies I use:
Method #1: Assess Your Assessments with AI in Mind
When designing an assignment, ask yourself whether it is testing a skill that AI already performs well. If so, consider shifting your focus to why that skill matters, or how students can go beyond AI’s capabilities.
Method #2: Use AI Where It Adds Value – Remove It Where It Does Not
In some cases, it makes sense to integrate AI directly into the assignment (e.g., generating code, automating data analysis). In others, the objective may be to build a human-only skill like personal expression or creative voice. I decide case by case whether AI should be a part of the process or explicitly excluded.
Method #3: Clarify Whether You are Teaching Theory or Application
When I am teaching statistical tests, I have to ask myself: Am I assessing whether students understand the theory behind the test or whether they can run one using software? If it's the latter, using AI to generate code might be appropriate. But if it's the former, I'll require manual calculations or a written explanation (a short sketch contrasting the two appears after this list of methods).
Method #4: Add a Reflection to Any AI-Supported Assignment
For any assignment where students are allowed to use AI, they must also write a reflection on how they used AI and whether or not it was helpful. This encourages metacognition and reduces overreliance.
Method #5: Require Students to Share Their Prompts and Revisions
Having students share the prompts they used in completing the assignment teaches them about transparency and the need for iteration when interacting with an AI. Students should not just be cutting and pasting the first response from ChatGPT. They need to learn how to take a response, analyze it, and then refine their prompt to get a better result. This helps them develop prompt engineering skills and realize that ChatGPT is not just a magic answer machine.
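The sketch below, referenced in Method #3, makes the theory-versus-application distinction concrete with made-up numbers: one library call gives the answer, while the hand calculation shows whether the student understands where that answer comes from.

```python
# Illustrative contrast between "application" (run the test with software)
# and "theory" (compute the statistic by hand). Data values are invented.
from math import sqrt
from scipy import stats

group_a = [78, 85, 92, 88, 75, 81]
group_b = [70, 79, 83, 74, 68, 77]

# Application: one library call.
t_lib, p_lib = stats.ttest_ind(group_a, group_b, equal_var=True)

# Theory: pooled two-sample t statistic computed from the formula.
n1, n2 = len(group_a), len(group_b)
m1, m2 = sum(group_a) / n1, sum(group_b) / n2
s1_sq = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
s2_sq = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
t_hand = (m1 - m2) / sqrt(sp_sq * (1 / n1 + 1 / n2))

print(round(t_lib, 4), round(t_hand, 4))  # the two t values should match
```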
AI and the Limits of Innovation in Research
What about academic research in general? How is AI helping or hindering? Given that generative AI merely remixes the greatest hits of human history rather than creating anything new, I think its role in academic research is limited. Academic breakthroughs start with unasked questions. Generative AI works within the confines of existing data. It can't sense the frontier because it doesn't know there is a frontier. It can't sample past answers to a question that hasn't been asked yet. About a year ago, I was trying to get my AI to write a section of code for my research, and it kept failing. I spent a week trying to get it to do what I wanted. I realized it was having such a difficult time because I was asking it to do something that hadn't been done before. Finally, I gave up and wrote the piece of code myself, and it only took me about half an hour. Sure, the coding capabilities have gotten better over the past year, but the core principle remains the same: AI still struggles to innovate. It can't do what hasn't already been done. Also, because of "creative flattery," it wants to make you happy, so it will try to do whatever you ask even when it can't. The product will be super convincing, but it can still be wrong.
I recently asked AI to write a theoretical proof that polygonal numbers are Benford distributed (spoiler: they are not). Then I had it help me write a convincing, journal-ready article. The only problem is that it also wrote me a theoretical proof that polygonal numbers are NOT Benford distributed. I submitted the former to a leading mathematics journal to see what would happen. Guess what: they caught it. A human was able to detect the "AI slop." This shows me that (1) there will always be a need for human gatekeepers and (2) "creative flattery" is extremely dangerous in a research setting and confirms the need for human review. The chatbot tries too hard to please, thus reinforcing what the user already thinks, even if that means proving and disproving the exact same thing. Academic research thrives on novel questions and unpredictable answers, which AI cannot supply, since it inherently just regurgitates what is already out there.
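For readers who want to poke at the claim themselves, here is a quick empirical sketch, not a proof in either direction: it tabulates the leading digits of the first 100,000 triangular numbers (one family of polygonal numbers) against the Benford proportions. The cutoff is arbitrary, and the observed shares shift as you change it, which is itself a hint that the question needs more care than a confident chatbot suggests.

```python
# Empirical check only (not a proof): leading-digit shares of triangular
# numbers versus Benford's law, P(d) = log10(1 + 1/d).
import math
from collections import Counter

N = 100_000
triangular = (k * (k + 1) // 2 for k in range(1, N + 1))
counts = Counter(int(str(t)[0]) for t in triangular)

print(f"{'digit':>5} {'observed':>9} {'benford':>8}")
for d in range(1, 10):
    print(f"{d:>5} {counts[d] / N:>9.3f} {math.log10(1 + 1 / d):>8.3f}")
```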
Helping Students See AI’s Blind Spots
The Benford polygonal numbers experiment is an important example of how we need to educate our students about AI usage in an academic setting. The Time.com article "Why A.I. Is Getting Less Reliable, Not More" states that, despite its progress over the years, AI can still resemble a sophisticated misinformation machine. Students need to know how to navigate this.
One of my favorite assignments in my Statistics course is what I call:
Method #6: “Beat ChatGPT” – A Concept Mastery Challenge
Students must craft a statistics question that the chatbot gets wrong, explain why the chatbot got it wrong, and then provide the correct answer. A tweak of this activity would be to take AI-generated and human-written content and compare and critique their tone, clarity, or originality.
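As an illustration of the kind of item students might craft (a classic textbook example, not one from my own question bank), consider a base-rate problem where the tempting pattern-matched answer is "about 95%"; the correct answer is easy to verify:

```python
# Hypothetical "Beat ChatGPT" item: a base-rate problem. A disease affects
# 1% of a population; a test has 95% sensitivity and a 5% false-positive
# rate. What is P(disease | positive test)?
from fractions import Fraction

prevalence = Fraction(1, 100)
sensitivity = Fraction(95, 100)
false_positive_rate = Fraction(5, 100)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(p_disease_given_positive, float(p_disease_given_positive))  # 19/118 ≈ 0.161
```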
Remixing isn’t Creation
AI-generated content is like a song built entirely from remixed samples. Sampling has its place in music (and in writing) but when everything starts to sound the same, our ears and brains begin to tune out. A great remix can breathe new life into a classic, but we still crave the shock of the new. This is why people lost their minds the first time they heard Beyonce’s Lemonade or Kendrick Lamar’s To Pimp a Butterfly – not because they followed a formula, but because they bent the rules and made something we’d never heard before. AI, for all its value, doesn’t break the rules. It follows them. That is the difference between innovation and imitation. It is also the reason why AI, in its current capacity, will not kill original thought.
Sybil Prince Nelson, PhD, is an assistant professor of mathematics and data science at Washington and Lee University, where she also serves as the institution’s inaugural AI Fellow. She holds a PhD in Biostatistics and has over two decades of teaching experience at both the high school and college levels. She is also a published fiction author under the names Sybil Nelson and Leslie DuBois.
Meincke, Lea, Gideon Nave, and Christian Terwiesch. 2025. “ChatGPT Decreases Idea Diversity in Brainstorming.” Nature Human Behaviour 9: 1107–1109. https://doi.org/10.1038/s41562-025-02173-x.
Artificial intelligence (AI) platforms, especially chatbots like ChatGPT and Gemini, are influencing many areas of higher education. Students and instructors can interact with these tools to get real-time, personalized help with nearly any academic task at hand, including helping students study or revamping course content for instructors. There is no question that the educational potential is significant, but so too are the concerns about academic integrity and the consequences of students relying too heavily on these tools. While it’s important to weigh the pros and cons of chatbots, using the right strategies and technologies can help avoid those issues and still allow faculty and students to benefit from this burgeoning technology.
Use Cases for Chatbots in Higher Education Teaching
First, let’s consider uses among faculty. They can use chatbots to help improve most areas of teaching and learning while still saving time.
Chatbots can assist with different phases of course design, including writing learning objectives and developing and repurposing content. Chatbots can also be used in evaluating student performance data if your institution offers a secure chatbot (just be sure to anonymize the data). Instructors can also use chatbots to review the rules and instructions for their assignments and tests to identify vague or subjective text and suggest clearer, more objective wording.
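As one way to follow the anonymization advice, here is a minimal sketch of the kind of pre-processing an instructor might do before pasting performance data into a secure chatbot; the names, emails, and scores below are invented for illustration.

```python
# Illustrative anonymization step before sharing grade data with a chatbot.
# All student details below are made up.
import pandas as pd

grades = pd.DataFrame({
    "student": ["Ana Diaz", "Ben Osei", "Chloe Park"],
    "email": ["ana@uni.edu", "ben@uni.edu", "chloe@uni.edu"],
    "quiz_avg": [78, 92, 85],
    "final_exam": [71, 95, 88],
})

anonymized = grades.drop(columns=["email"]).copy()
anonymized["student"] = [f"S{i:03d}" for i in range(1, len(anonymized) + 1)]
print(anonymized.to_csv(index=False))  # identifiers removed before sharing
```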
But one of the most impactful uses is adapting course content into formats that support students with different learning needs. Chatbots can adapt, simplify, and organize text for students with learning or cognitive disabilities, generate alternative text for multimedia content, and even adjust their responses based on student emotions (for example, offering words of encouragement when a student is frustrated or discouraged).
Instructors may be tempted to use chatbots to grade written coursework as well, but for now, it is important to note that the technology still struggles with the nuances of grading and feedback. However, chatbots can still help by pulling out the main ideas and revealing whether a student’s essay meets basic criteria, which gives instructors a clearer starting point before reading.
You’ll see a lot of jargon and articles that make prompt writing seem complicated, but you don’t need to be an expert to write effective prompts. It’s a skill you can pick up quickly after a few tries, especially once you find what works for you and start creating your own templates to use in your courses.
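A reusable template is often all that "prompt engineering" amounts to in practice. The sketch below is one hypothetical example of such a template; the course, objective, and text are placeholders to swap for your own.

```python
# Hypothetical reusable prompt template for reviewing assignment wording.
TEMPLATE = """You are helping an instructor revise a {item_type}.
Course: {course}. Learning objective: {objective}.
Rewrite the text below so the instructions are specific, objective, and
measurable, and flag any wording a student could reasonably misread.

Text:
{text}
"""

prompt = TEMPLATE.format(
    item_type="assignment description",
    course="Introductory Statistics",
    objective="interpret a 95% confidence interval in context",
    text="Write about confidence intervals and make it good.",
)
print(prompt)  # paste the result into whichever chatbot your campus supports
```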
Students and Chatbots: Finding a Balance
There are many ways that students can use chatbots to support learning while steering clear of misusing them for tests and assignments. Ensuring ethical and beneficial adoption of AI requires clear strategies and guidelines along with the right technology.
Improving Academic Performance
Recent research indicates that chatbots can improve learning and academic outcomes, but balance is key. When students rely on them too heavily, any benefits, like increased engagement, motivation, and reflection, are reduced or eliminated. When used appropriately, AI chatbots can tutor students to help them better understand the information, which also increases engagement. Chatbots give students instant, personalized help with tasks such as summarizing content and checking grammar, and they also create a nonjudgmental space where students feel more comfortable asking questions.
Cognitive Load Reduction and Stress Management
Chatbots are ideal for offloading tasks such as summarizing a lecture transcript or highlighting key points in an assignment. For students under pressure to finish their work, leaning on AI can also reduce stress and anxiety by freeing up time to focus on the most important elements of an assignment.
Similarly, chatbots can help students visualize information by generating charts, graphs or images to support their ideas. They can also provide adaptive support for students with disabilities or language barriers by reading, translating, simplifying or reformatting content to meet their needs. This reduces cognitive load, allows students to focus on the assignment itself and builds confidence in their ability to complete the work.
Strategies for Reducing Chatbot Misuse
Because chatbots are fast and easy to use, students may begin to treat them as a shortcut for getting work done rather than as a supplemental learning tool. That's a big issue: it bypasses critical thinking, contextual understanding, and collaboration. This not only cheats students out of learning but also creates academic integrity problems that higher education institutions must address and stay ahead of.
Cheating is common in higher education, and with recent surveys indicating that more than 85% of students use AI daily to help with schoolwork, there's a good chance of overlap. Students have mixed opinions about using AI for homework: around 40% say that using AI for research and writing assistance should be acceptable, but also that there should be limits and expectations for ethical use.
Communicating with students about academic integrity codes of conduct, AI use, and cheating policies and their ramifications is the first step in setting expectations. Delivering assignments in ways that reduce opportunities for chatbot use, or that lay out specific guardrails for it, also helps.
Reducing Chatbot Cheating
Cheating in higher education will never completely go away, and efforts to monitor and manage the tools and channels used for cheating must be ongoing. As more students turn to chatbots for both approved and unapproved uses, institutions can help students understand the appropriate way to use this technology within the learning environment.
While educators struggle to detect AI use reliably, AI detection tools can identify unedited chatbot-generated content with reasonable accuracy. Their effectiveness drops, however, when students edit the responses or run them through AI paraphrasing tools. Treat them as a gut check rather than definitive proof that a text was AI-generated.
Remote proctoring during assessments and written assignments adds another barrier to cheating. Ideally, the proctoring solution should leverage AI test monitoring together with live human proctors, which gives faculty the control to prevent AI use and the flexibility to allow approved resources. For example, instructors can proctor exams or essays by restricting access to all unauthorized websites and software, including chatbots, while still providing access to specific materials like case studies or tools like Word, Excel and Google Docs.
Another option is to use scaffolded, realistic assignments and assessments that focus on real-world application and require students to connect course content to their personal experiences or context. This makes it more difficult for chatbots to generate accurate or meaningful responses. For example, assessments could ask students to develop a scenario based on a local community issue, work on collaborative projects that include peer reviews, or create video responses that reference specific class lectures or materials. This approach helps evaluate whether students truly understand the material and are developing practical skills.
Set the Stage Now for an AI-Inclusive Future
AI chatbots are powerful tools that can make education more personalized, accessible and engaging. However, their misuse can undermine academic integrity and dilute learning. The solution, in most cases, isn’t to ban them outright, but to integrate them responsibly throughout teaching and learning processes. Striking this balance isn’t always easy, but it’s necessary to preserve the value of learning while preparing students for a future where AI will almost certainly be a part of their work.
Tyler Stike is the Director of Content at Honorlock and a doctoral student in educational technology at the University of Florida. In his role at Honorlock, he develops a wide range of content on online education, assessment, and accessibility. He is interested in how affective states influence learning and performance, and plans to research how AI can support adaptive learning experiences that help students manage those states.
References
W. Dai et al., “Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT,” 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), Orem, UT, USA, 2023, pp. 323-325, doi: 10.1109/ICALT58122.2023.00100.
Sánchez-Vera, Fulgencio. 2025. “Subject-Specialized Chatbot in Higher Education as a Tutor for Autonomous Exam Preparation: Analysis of the Impact on Academic Performance and Students’ Perception of Its Usefulness.” Education Sciences 15, no. 1: 26. https://doi.org/10.3390/educsci15010026.
Kofinas, A. K., Tsay, C.-H., & Pike, D. (2025). The impact of generative AI on academic integrity of authentic assessments within a higher education context. British Journal of Educational Technology, 00, 1–28. https://doi.org/10.1111/bjet
Are you one of the reported 61% of higher education faculty now using AI in your teaching (Weaver, 2025)? A recent survey by the Digital Education Council (2025) found that 88% of AI-using faculty report minimal to moderate use. Further, 83% of faculty question students’ ability to evaluate AI-generated content, and 82% worry about student overreliance on AI tools.
So, while a majority of faculty are incorporating AI, many of us remain cautious about how to use it effectively in our higher education classrooms. This tension is echoed in the 2025 EDUCAUSE AI Landscape Study, which reports that 57% of schools, up from 49% last year, now identify AI as a “strategic priority” as they adapt to the expanding impact of AI across teaching and learning (Robert & McCormack, 2025).
Our institutions want us to use AI in our classrooms, but how can we do this well? Research by Zhou and Peng (2025) found that AI-supported instruction can enhance both student engagement and creativity, especially in creating personalized and collaborative learning experiences. Similarly, Walter (2024) found that training educators and students in prompt engineering and critical evaluation is key to maximizing AI’s potential while reducing the risks of misuse and overreliance. To enhance our content, we need to use AI purposefully, training both ourselves and our students to engage with AI tools critically, creatively, and ethically.
This article examines how faculty can incorporate AI tools effectively into their disciplines while guiding students to use AI in ways that foster critical thinking and creative application. Drawing on my own research, it offers strategies to support thoughtful integration of AI into higher education classrooms, with a focus on ethical awareness and responsive instructional design.
What I Learned From Using AI in My Teaching
Over the past school year, I used AI as a tool in my undergraduate courses and found that students were not as adept at using AI as I had suspected. In fact, when I introduced AI as a required component of the course framework at the start of the semester, many students were uncertain how to proceed. Some shyly offered that they had used AI in courses previously, but many were hesitant, having been repeatedly warned that using AI could jeopardize their academic careers. Without explicit, scaffolded instruction, both students and faculty risk using AI superficially, missing its potential to meaningfully transform teaching and learning.
When AI Becomes the Assignment
In Spring 2025, I led a research project in my classes exploring how university students used AI tools, such as ChatGPT, to support iterative writing and refining complex tasks like lesson planning. The study emphasized ethical AI use and focused on prompt engineering techniques, including the use of voice-to-text, targeted revision, and staged feedback loops to improve idea generation, structure, and differentiation. I wanted students to engage in a critical evaluation of AI outputs, developing greater precision and agency in applying AI suggestions across drafting stages.
What I found was that students did not initially know how to talk to AI; rather, they talked at it. At first, students did not get useful results because they were not tailoring their prompts enough. One student offered, “I had to ask the same question 50 billion different ways to get the right answer.” What I discovered over those first few weeks was that students needed to learn to dialogue with AI in the right ways. They had to be intentional about what they asked and tailor their prompts accordingly.
Try this instead:
Begin broad, then refine. Encourage students to start with a general idea, then narrow their prompts based on assignment goals and relevance of the AI’s output.
Promote iterative prompting. Teach students to revise their prompts through an ongoing dialogue with AI aimed at narrowing down their ideas. WonLee Hee (2025) offers the following framework: prompt, generate output, analyze, refine prompt, and repeat (see the sketch after this list).
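To make that loop concrete, here is a minimal, hypothetical Python sketch of the prompt–generate–analyze–refine cycle. It is not tied to any particular tool: the generate() function is a placeholder for whichever chatbot a student actually uses, and the example prompt is invented. The point is the structure, in which the student's own analysis of each output drives the next, more specific prompt.

```python
# Hypothetical sketch of the prompt -> generate -> analyze -> refine loop.
# generate() is a stand-in for whatever chatbot the student uses; the
# example prompt is invented for illustration only.

def generate(prompt: str) -> str:
    """Placeholder for a chatbot call; swap in a real API if one is available."""
    return f"(model response to: {prompt!r})"

def iterative_prompting(initial_prompt: str, max_rounds: int = 5) -> str:
    prompt = initial_prompt
    output = ""
    for round_number in range(1, max_rounds + 1):
        output = generate(prompt)                                      # generate output
        print(f"\nRound {round_number} output:\n{output}\n")
        verdict = input("Does this meet the assignment goal? (y/n) ")  # analyze
        if verdict.strip().lower().startswith("y"):
            break
        feedback = input("What should change? ")                       # refine prompt
        prompt = f"{prompt}\nRevise the previous answer: {feedback}"   # repeat
    return output

if __name__ == "__main__":
    iterative_prompting("Draft a learning objective for a unit on ecosystems.")
```

The deliberate design choice here is that the loop never proceeds without the student's judgment: the model only produces drafts, and the student decides whether the output meets the goal and what the next prompt should ask for.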
Why Prompting Is Worth Teaching
Students are using AI, but often without the skills to do so effectively—and that is where we come in. Poor prompting reinforces the very over-reliance that faculty fear, training students to accept whatever results AI delivers, rather than critically questioning them. When prompts are vague or generic, the results are too.
Students need specific instruction on how to prompt AI effectively. In my classes I used a structured, multi-step process that students followed each week. However, after reviewing student feedback and surveys, I realized that the process involved too many steps. If I wanted my students to use AI meaningfully beyond my course, I would need to refine and simplify the approach.
Try this instead:
Incorporate guided practice. Use a consistent AI tool at the start of the semester (I used ChatGPT) and model effective prompting and revision to help students build foundational skills.
Gradually increase student choice. After the initial learning phase, allow students to mix and match AI tools to personalize the process and deepen their engagement.
Embed critical reflection. Encourage students to treat AI as a thinking partner, not an all-knowing source. Design assignments so that they require ongoing interaction with AI (Gonsalves, 2024), such as using AI to generate counterarguments to their own essays or applying math concepts to real-world problems to identify gaps or misunderstandings in their thinking.
A Simple Framework for Better Prompts
A simpler, three-phase framework is more user-friendly:
Explore: Encourage students to begin by collecting and thinking through wide-ranging ideas. Start with speech-to-text to brainstorm. Then narrow the focus, identify gaps, and use AI to help fill them.
Refine: Have students evaluate the AI outputs and add specific details to further improve clarity, accuracy, and relevance.
Revise: Use AI to check whether ideas have been clearly communicated. This type of editing involves more than fixing grammar; it is about making sure the message is clear, focused, and appropriate for the audience.
What Changed for Students
When I incorporated these changes, my students became more strategic thinkers and were less likely to merely copy from AI. In fact, over 73% of my study participants noted that they stopped accepting AI’s first response and began asking better follow-up questions, indicating that they were dialoguing with AI rather than just copying from it. Repeated practice helped them get more accurate AI-generated support and reinforced their own role in the process. They came to view AI as a support tool, not a substitute for their own ideas. At the end of the study, one student noted, “You have to be very specific… I have learned how to tweak my prompt to get the result I want.” Another stated, “I started editing ChatGPT instead of letting it write for me.” These responses indicated a key shift: better prompting had reframed AI as a collaborator, not a crutch.
Final Thoughts
Teaching students how to create effective prompts is not about using technology; it is about teaching them to craft better questions. This practice reinforces the critical thinking skills so many of us aim to develop in our disciplines. When students learn how to guide AI, they are also learning how to refine their own thinking. Encouraging reflection throughout the process fosters metacognition; by regularly analyzing their decisions and ideas in this way, students become more thoughtful, independent learners. By intentionally incorporating AI tools into our coursework, we reduce the temptation for misuse and overreliance, creating space for more ethical and transparent use in our higher education classrooms.
AI Disclosure: This article reflects collaboration between the human author and OpenAI’s ChatGPT-4 for light editing. All ideas, examples, and interpretations are the author’s own.
Lisa Delgado Brown, PhD, is a current Assistant Professor of Education at The University of Tampa and the former Middle/Secondary Program Administrator at Saint Leo University where she also served on the Academic Standards Committee. Dr. Delgado Brown teaches literacy courses with a focus on differentiation in the general education classroom.
References
Gonsalves, C. (2024). Generative AI’s Impact on Critical Thinking: Revisiting Bloom’s Taxonomy. Journal of Marketing Education, 0(0). https://doi.org/10.1177/02734753241305980
Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21, 15. https://doi.org/10.1186/s41239-024-00448-3
Zhou, M., & Peng, S. (2025). The usage of AI in teaching and students’ creativity: The mediating role of learning engagement and the moderating role of AI literacy. Behavioral Sciences, 15(5), 587. https://doi.org/10.3390/bs15050587
The rapid adoption and development of AI has rocked higher education and thrown into doubt many students’ career plans and as many professors’ lesson plans. The best and only response is for students to develop capabilities that can never be authentically replicated by AI because they are uniquely human. Only humans have flesh and blood bodies. And these bodies are implicated in a wide range of Uniquely Human Capacities (UHCs), such as intuition, ethics, compassion, and storytelling. Students and educators should reallocate time and resources from AI-replaceable technical skills like coding and calculating to developing UHCs and AI skills.
Adoption of AI by employers is increasing, and expectations for AI-savvy job candidates are rising. College students are getting nervous: 51% are second-guessing their career choice and 39% worry that their job could be replaced by AI, according to Cengage Group’s 2024 Graduate Employability Report. Recently, I heard a student at an on-campus Literacy AI event ask an OpenAI representative whether she should drop her efforts to become a web designer. (The representative’s response: spend less time learning the nuts and bolts of coding, and more time learning how to interpret and translate client goals into design plans.)
At the same time, AI capabilities are improving quickly. Recent frontier models have added “deep research” (web search and retrieval) and “reasoning” (multi-step thinking) capabilities. Both produce better, more comprehensive, accurate and thoughtful results, performing broader searches and developing responses step-by-step. Leading models are beginning to offer agentic features, which can do work for us, such as coding, independently. American AI companies are investing hundreds of billions in a race to develop Artificial General Intelligence (AGI). This is a poorly defined state of the technology where AI can perform at least as well as humans in virtually any economically valuable cognitive task. It can act autonomously, learn, plan, and adapt, and interact with the world in a general flexible way, much as humans do. Some experts suggest we may reach this point by 2030, although others have a longer timeline.
Hard skills that may be among the first to be replaced are those that AI can do better, cheaper, and faster. As a general-purpose tool, AI can already perform basic coding, data analysis, administrative, routine bookkeeping and accounting, and illustration tasks that previously required specialized tools and experience. I have had my own mind-blowing “vibe-coding” experience, creating custom apps with a limited understanding of coding syntax. AIs are capable of quantitative, statistical, and textual analysis that might have required Excel or R in the past. According to Deloitte, AI initiatives are touching virtually every aspect of a company’s business, affecting IT, operations, and marketing the most. AI can create presentations from natural language, making manual PowerPoint drafting skills less essential.
Humans’ Future-Proof Strategy
How should students, faculty and staff respond to the breathtaking pace of change and profound uncertainties about the future of labor markets? The OpenAI representative was right: reallocation of time and resources from easily automatable skills to those that only humans with bodies can do. Let us spend less time teaching and learning skills that are likely to be automated soon.
Technical Skills OUT → Uniquely Human Capacities IN
Basic coding → Mindfulness, empathy, and compassion
Data entry and bookkeeping → Ethical judgment, meaning making, and critical thinking
Mastery of single-purpose software (e.g., PowerPoint, Excel, accounting apps) → Authentic and ethical use of generative and other kinds of AI to augment UHCs
Instead, students (and everyone) should focus on developing Uniquely Human Capacities (UHCs). These are abilities that only humans can authentically perform because they require a human body. For example, intuition is inarticulable, immediate knowledge that we register somatically, in the gut. It is how we empathize, show compassion, evaluate morality, listen and speak, love, appreciate and create beauty, play, collaborate, tell stories, find inspiration and insight, engage our curiosity, and emote. It is how we engage with the deep questions of life and ask the ones that really matter.
According to Gholdy Muhammad in Unearthing Joy, a reduced emphasis on skills can improve equity by creating space to focus on students’ individual needs. She argues that standards and pedagogies need to also reflect “identity, intellectualism, criticality, and joy.” These four dimensions help “contextualize skills and give students ways to connect them to the real world and their lives.”
The National Association of Colleges and Employers has created a list of eight career readiness competencies that employers say are necessary for career success. Review that list and you will see that seven of the eight are UHCs. The eighth, technology, underlines the need for students and their educators to understand and use AI effectively and authentically.
For example, an entry-level finance employee who has developed their UHCs will be able to nimbly respond to changing market conditions, interpret the intentions of managers and clients, and translate these into effective analysis and creative solutions. They will use AI tools to augment their work, adding greater value with less training and oversight.
Widen Humans’ Comparative Advantage
As demonstrated in the example above, our UHCs are humans’ unfair advantage over AI. How do we develop them, ensuring the employability and self-actualization of students and all humans?
The foundation is mindfulness. Mindfulness is about being fully present with ourselves and others, accepting what arises, primarily via bodily sensations, without judgment or preference. It allows us to accurately perceive reality, including our natural intuitive connection with other humans, a connection AI cannot share. Mindfulness can be developed during meditation, moments of stillness devoted to it, and beyond. Mindfulness practice has been shown to improve self-knowledge, help set career goals, and boost creativity.
Mindfulness supports intuitive thinking and metacognition, our ability to think clearly about thinking. Non-conceptual thinking, which uses our whole bodies, entails developing our intuition and a growth mindset. The latter means recognizing that we are all works in progress and that learning is the product of careful risk-taking and learning from errors, supported by other humans.
These practices support deep, honest, authentic engagement with other humans of all types. (These are not available over social media.) For students, this is about engaging with each other in class, study groups, clubs, and elsewhere on campus, as well as engaging with faculty in class and office hours. Such engagement with humans can feel unfamiliar and awkward as we emerge from a pandemic. However, these interactions are a critical way to practice and improve our UHCs.
Literature and cinema are ways to develop empathy for and understanding of humans you do not know, who may no longer be alive or may never have existed at all. Fiction is perhaps the only way to experience in the first person what a stranger is thinking and feeling.
Indeed, every interaction with the world is an opportunity to practice those Uniquely Human Capacities (UHCs):
Use your imagination and creativity to solve a math problem.
Format your spreadsheet or presentation or essay so that it is beautiful.
Get in touch with the feelings that arise when faced with a challenging task.
Many students tell me they are in college to better support and care for their families. As you do the work, let yourself experience it as an act of love for them.
AI Can Help Us Be Better Humans
AI usage can dull our UHCs or sharpen them. Use AI to challenge yourself to improve your work, not to provide shortcuts that make the work average, boring, or worse. Ethan Mollick (2024) describes the familiar roles AIs can profitably play in our lives. Chief among these is the patient, always available, if sometimes unreliable tutor. A tutor gives helpful and critical feedback and hints but never the answers. A tutor will not do our work for us. A tutor will suggest alternative strategies, and we can instruct it to nudge us to check on our emotions, physical sensations, and the moral dimensions of our work. When we prompt AI for help, we should explicitly give it the role of a tutor or editor (as I did with Claude for this article).
How do we assess whether we and our students are developing our UHCs? We can build personal and work portfolios that tell the stories of the connections, insights, and benefits to society we have made. We can gather honest testimonials from trusted human partners and engage in critical yet self-compassionate introspection and journaling. Deliberate practice with feedback, in real life and in role-playing scenarios, can also be valuable. One thing that will not work as well: traditional grades and quantitative measures. After all, humanity cannot be measured.
In a future where AI or AGI assumes the more rote and mechanical aspects of work, we humans are freed to build our UHCs, to become more fully human. An optimistic scenario!
What Could Go Wrong?
The huge, profit-seeking transnational corporations that control AI may soon feel greater pressure to show a return on enormous investment to investors. This could cause costs for users to go up, widening the capabilities gap between those with means and the rest. It could also result in Balkanized AI, where each model is embedded with political, social, and other biases that appeal to different demographics. We see this beginning with Claude, prioritizing safety, and Grok, built to emphasize free expression.
In addition, AI could become good enough at faking empathy, morality, intuition, sensemaking, and other UHCs that employers stop paying a premium for the real thing. In a competitive, winner-take-all economy with even less government regulation and a leakier safety net, companies may aggressively reduce hiring of entry-level workers and of (expensive) high performers. Many of the job functions of the former can be most easily replaced by AI, and mid-level professionals can use AI to perform at a higher level.
Finally, and this is not an exhaustive list: students, and all of us, may succumb to the temptation to use AI to shortcut our work, slowing or reversing the development of critical thinking, analytical skills, and subject matter expertise. The tech industry has perfected, over twenty years, the science of making our devices virtually impossible to put down, so that we are “hooked.”
Keeping Humans First
The best way to reduce the risks posed by AI-driven change is to develop our students’ Uniquely Human Capacities while actively engaging policymakers and administrators to ensure a just transition. This enhances the unique value of flesh-and-blood humans in the workforce and society. Educators across disciplines should identify lower value-added activities vulnerable to automation and reorient curricula toward nurturing UHCs. This will foster not only employability but also personal growth, meaningful connection, and equity.
Even in the most challenging scenarios, we are unlikely to regret investing in our humanity. Beyond being well-employed, what could be more rewarding than becoming more fully actualized, compassionate, and connected beings? By developing our intuitions, morality, and bonds with others and the natural world, we open lifelong pathways to growth, fulfillment, and purpose. In doing so, we build lives and communities resilient to change, rich in meaning, and true to what it means to be human.
The article represents my opinions only, not necessarily those of the Borough of Manhattan Community College or CUNY.
Brett Whysel is a lecturer in finance and decision-making at the Borough of Manhattan Community College, CUNY, where he integrates mindfulness, behavioral science, generative AI, and career readiness into his teaching. He has written for Faculty Focus, Forbes, and The Decision Lab. He is also the co-founder of Decision Fish LLC, where he develops tools to support financial wellness and housing counselors. He regularly presents on mindfulness and metacognition in the classroom and is the author of the Effortless Mindfulness Toolkit, an open resource for educators published on CUNY Academic Works. Prior to teaching, he spent nearly 30 years in investment banking. He holds an M.A. in Philosophy from Columbia University and a B.S. in Managerial Economics and French from Carnegie Mellon University.
Every new technology brings with it a moment of reckoning and a lot of noise. Higher education has always had its share of both. We’re good at asking questions, kicking the tires, holding things at arm’s length until we’re sure it’s worth leaning in. But AI hasn’t given us that luxury. It arrived fast, and it arrived everywhere. And so here we are: adapting syllabi, revisiting assessments, trying to imagine what teaching looks like when the work of thinking and writing and making can now be shared with a machine.
It’s easy to feel like we’re supposed to become AI experts overnight, or like our value is being called into question. But I don’t think either is true. The real challenge, the real opportunity, is to understand how this new partner might show up in the work we already do. The intellectual work. The teaching work. The deeply human work.
That starts by asking better questions – not just “What can AI do?” but “What kind of thinking does good teaching really require?” If we can name that, we can start to see where AI fits and where it doesn’t.
Jared Spataro, Microsoft’s Corporate Vice President for AI at Work, offered a helpful frame for understanding AI’s potential. He identifies five key cognitive tasks that define knowledge work: perceiving, understanding, reasoning, executing, and creating. I think this framework can be translated to the world of higher ed, and the work that faculty do. Because whether we’re designing curriculum, guiding discussion, mentoring students, or shaping institutional strategy, we’re doing some blend of those five things. And by looking closely at how they show up in our work, we can start to imagine how AI might support, not replace, the best of what we do.
1. Perceiving
Perceiving is about seeing what’s really there – what’s in front of us, and what might be hidden underneath. It’s the first move of any good teacher or designer: noticing. Noticing what students understand and what they don’t. Noticing patterns in discussion boards, in assignment uploads, in the quiet absence of a student who was once engaged. Perception is where reflection begins.
AI can help here by extending human observation. Imagine tools that model thousands of student submissions and flag potential misunderstandings. Or dashboards that surface patterns in feedback across multiple course sections. Or sentiment analysis that gives faculty a pulse on how students are responding to a unit in real time. These aren’t just speculative. Georgia State University’s implementation of predictive analytics has significantly improved student outcomes, especially for underrepresented groups (Dimeo, 2017).
And in my own work, I’ve seen how AI-powered tagging and clustering can help make sense of the digital exhaust students leave behind. During a review of some end-of-course survey responses, I used a language model to surface common themes in open-ended student responses. What might have taken hours of coding was compressed to minutes, giving me more time to focus on what really matters: how to respond, how to improve, how to connect.
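To give a sense of what that tagging-and-clustering step can look like, here is a rough, hypothetical sketch. The article does not specify the tooling, and it describes using a language model; this version substitutes TF-IDF vectors and k-means from scikit-learn as a stand-in for that step, grouping invented end-of-course comments and printing each group's top terms as candidate themes.

```python
# Rough sketch: cluster open-ended survey responses into candidate themes.
# Substitutes TF-IDF + k-means (scikit-learn) for the language-model tagging
# described in the article; the comments below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "The weekly AI prompting exercises helped me write clearer lesson plans.",
    "Too many steps in the AI assignment; the process felt overwhelming.",
    "I liked using ChatGPT to get feedback on my drafts before submitting.",
    "The rubric was confusing and the AI tool did not help with it.",
    "More examples of good prompts would have made the course easier.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)          # one TF-IDF vector per comment

k = 2  # number of themes to look for; tune this for real data
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for cluster in range(k):
    top_terms = [terms[i] for i in model.cluster_centers_[cluster].argsort()[::-1][:5]]
    members = [r for r, label in zip(responses, model.labels_) if label == cluster]
    print(f"Theme {cluster}: {', '.join(top_terms)}  ({len(members)} responses)")
```

However the grouping is produced, the output is only a starting point: a human still reads representative responses from each cluster and decides whether the "theme" is real and what to do about it.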
When we talk about perceiving, we’re really talking about attention. AI can expand the reach of our attention—but it’s still up to us to decide where to look, and what to do with what we find.
2. Understanding
Understanding sits at the core of what faculty do. Whether we’re preparing to teach a new course or guiding a student through their first research project or writing up our own research, we’re spending time interpreting. This kind of work takes time and attention, a willingness to sit with uncertainty. And it’s where AI, when used carefully and with intention, can help.
In my own experience, I’ve used generative tools to scan large sets of institutional policy documents to better understand how decisions are communicated, and where inconsistencies emerge. What would have taken a full afternoon of toggling between tabs and highlighting paragraphs became a manageable, interpretable task, one that still needed my judgment, but got me there faster.
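The triage move here can be as simple as flagging where two documents say different things about the same topic so a reader knows where to look closely. Below is a small, standard-library sketch under invented section text; it is not the author's actual workflow, just one plausible way to surface divergent passages before applying human judgment.

```python
# Hypothetical sketch: flag policy sections that diverge between two documents.
# Uses only the Python standard library; the section text is invented.
from difflib import SequenceMatcher

policy_a = {
    "Late work": "Late submissions lose 10% per day up to five days.",
    "AI use": "Generative AI may be used only with instructor permission.",
}
policy_b = {
    "Late work": "Late submissions lose 10% per day up to five days.",
    "AI use": "Students may not use generative AI tools on graded work.",
}

for section in policy_a:
    ratio = SequenceMatcher(None, policy_a[section], policy_b[section]).ratio()
    flag = "REVIEW" if ratio < 0.8 else "ok"       # low similarity = read closely
    print(f"{section}: similarity {ratio:.2f} [{flag}]")
```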
When AI can support us in making sense of large volumes of information, summarizing texts, comparing perspectives, identifying patterns, we free ourselves up for the more valuable intellectual work: asking better questions, and spending more time with the answers that matter. It’s about capacity.
Recent studies point to this as a growing area of impact. In a 2023 EDUCAUSE report, researchers note that AI’s ability to “curate and synthesize complex information” has emerged as a top priority for institutions looking to support both faculty productivity and student success (Pelletier et al., 2023). That doesn’t mean outsourcing the work of understanding, but it might mean sharing the load.
As Spataro puts it, these systems can “interpret, analyze, and generate vast amounts of text data,” but it’s up to us to bring interpretation, context, and care (2025). Used well, they don’t replace the act of understanding but they expand the space we have to do it well.
3. Reasoning
Tools that can break a complex task into parts, hold multiple threads in play, and adjust course as they go? That’s thinking and reasoning, and it can be put to work for us. Imagine planning a new course. You’re juggling student needs, institutional requirements, disciplinary content, pedagogical practices, assessment design, and accessibility considerations. AI can now meaningfully assist in that process by helping you reason your way through the options. I’ve used it to test weekly structures, re-sequence modules, generate alternate assessments keyed to different learning outcomes. It’s not always right. But it’s responsive.
This is what Spataro points to when he describes reasoning models’ capacity to navigate multistep challenges. And it’s what others are beginning to explore too. The 2024 Stanford Institute for Human-Centered AI report notes that models like GPT-4 are now outperforming the average human on tasks like LSAT logical reasoning questions, tasks that require inference, not recall (Stanford HAI, 2024). It’s not just that AI can make suggestions. It can anticipate consequences. It can debug your logic. It can help you think.
But it doesn’t replace the thinking. That means we need to stay in the loop. Because just like a teaching assistant who works fast but occasionally misses the nuance, these tools need supervision. The real value is in the collaboration. You bring the goals, the context, the judgment. The system brings the speed, the range, and the willingness to try again.
4. Executing
One of the most immediate shifts many faculty feel with these tools is in execution. Not in some futuristic, sci-fi sense, but in the simplest, most grounded way: things just get done faster. And not just routine things. Writing the first draft of an announcement. Reformatting a rubric. Creating a visual from a block of text. Summarizing student feedback across discussion boards. These are tasks that used to chip away at your time, that required a certain kind of attention and structure you didn’t always have at the end of the day. Now they can happen in seconds. Not perfect, but done. Or at least started, ready for you to refine, revise, and finalize.
In my own work, this means I don’t get stalled as easily. If a meeting runs long and I lose the hour I had planned to draft a guidance doc for a new course design initiative, I don’t start from zero later. I sketch the intent in a few lines, and the system scaffolds a first version. I get to come in as editor, refining and recentering. And yes, sometimes rejecting and starting over.
What’s changed isn’t just speed. It’s how close we can get from idea to action without needing to switch tools, start a new doc, find the right template. Execution becomes lighter. It gets folded into the flow. And for faculty navigating a day that might include grading, advising, committee work, and prepping for class, that lightness matters. But we should be careful here. The goal isn’t to turn every task into a race to the bottom. The speed is a gift only if we use the time it gives us well. Execution, in this new context, isn’t about doing more. It’s about clearing space to do what matters.
5. Creating
Creativity sits at the heart of so much work that faculty do. It’s how we see ourselves, not just as transmitters of knowledge, but as makers. We write, design, shape experiences. We revise courses to better fit the needs of a new cohort, craft discussion prompts that pull students deeper, build assignments that didn’t exist five years ago. Creativity is where our identities as scholars, teachers, and thinkers converge. So it’s no surprise that when people hear about AI “creating,” it sparks something between skepticism and alarm. And I get that.
But here’s where I land: this kind of creativity isn’t competition. These tools don’t originate like we do. They don’t generate ideas out of passion or lived experience. But they can be astonishingly good at offering sparks, those half-formed ideas, raw drafts, unexpected juxtapositions. In my own work, I’ve used them to draft module overviews that I later rewrite completely, but which help me see where I’m being too vague or too dense. I’ve used them to riff on potential assignment prompts, not to choose one blindly, but to scan for a new angle or a better tone. Sometimes, I reject it all. But I always walk away with more clarity about what I think.
That’s the shift: using the tools not to replace our voice, but to sharpen it. Not to outsource our thinking, but to reflect it back in new forms. Of course, this only works if we stay present in the process. If we hold fast to our criticality, our nuance, our sense of context. That’s the work. That’s the art. And as Spataro reminds us, the best ideas don’t care where they came from—they care what we do with them next (2024).
Why This Matters
I was working with a group of faculty from different disciplines demoing a few uses of generative tools in course development. We’d just finished a quick example, generating some low-stakes writing prompts for a discussion board. One person leaned back, arms crossed. “This is fine,” they said. “But the question is: what kind of teacher does this make me?”
It’s a great question. And I think the answer is: it makes you a teacher who’s choosing. Choosing how to spend your time and choosing where your expertise matters most. Choosing when to hand something off to the machine and, most importantly, choosing when to hold on tight because the human parts are the whole point.
That’s why I’ve stayed close to these five cognitive tasks. Because none of this matters unless we connect it to the real work we do. The knowledge work. The pedagogical labor. The thinking, the care, the creative decisions.
These five domains – perceiving, understanding, reasoning, executing, creating – aren’t abstract categories or corporate taxonomies. They’re a mirror of our everyday academic labor. They map how faculty prep a new course, evaluate student performance, write feedback, collaborate with colleagues, design new programs, interpret policy, serve on committees, apply for grants, rethink curriculum. This is what it means to work in higher ed. And these are the places where AI is entering.
So when we talk about adoption or training or integration, we’re not just talking about tools or workflows. We’re talking about how we think. How we value time. How we make meaning. And whether we can build systems, technological and human, that let us spend more of our energy on the parts of this job that matter most.
Dr. Nathan Pritts is Professor and Program Chair for First Year Writing at the University of Arizona Global Campus where he also serves as University Faculty Fellow for AI Strategy. He leads initiatives at the intersection of pedagogy, design, and emerging technologies and has spearheaded efforts in the strategic implementation of online learning tools, faculty training, and scalable interventions that support both educators and students. His work brings a humanistic lens to the integration of AI—balancing innovation with thoughtful pedagogy and student-centered design. As author and researcher, Dr. Pritts has published widely on topics including digital pedagogy, AI-enhanced curriculum design, assessment strategies, and the future of higher education.
Pelletier, K., Robert, J., Muscanell, N., McCormack, M., Reeves, J., Arbino, N., & Grajek, S., with Birdwell, T., Liu, D., Mandernach, J., Moore, A., Porcaro, A., Rutledge, R., & Zimmern, J. (2023). 2023 EDUCAUSE Horizon Report: Teaching and Learning Edition. EDUCAUSE.