

We have become accustomed to generative artificial intelligence in the past couple of years. That will not go away, but increasingly, it will serve in support of agents.
“Where generative AI creates, agentic AI acts.” That’s how my trusted assistant, Gemini 2.5 Pro deep research, describes the difference. By the way, I commonly use Gemini 2.5 Pro as one of my research tools, as I have in this column; however, it is I who writes the column.
Agents, unlike generative tools, can pursue multistep goals with minimal human supervision. The essential difference is their proactive nature. Rather than waiting for specific, step-by-step commands, agentic systems take a high-level objective and independently create and execute a plan to achieve it. This triggers a continuous, iterative workflow, much like a cognitive loop. The typical agentic process, as described by Nvidia, involves these key steps:
The LLM acts as the brain of the AI agent. It interprets the user’s prompt to understand the task requirements.
The planning module divides the task into specific actions.
The memory module ensures context is preserved for efficient task execution.
The agent core orchestrates external tools to complete each step.
Throughout the process, the agent applies reasoning to refine its workflow and enhance accuracy.
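The steps above can be sketched as a toy loop in code. Everything in this sketch is a stand-in: the “LLM,” the planner, and the tools are plain Python functions, not a real model or agent framework.

```python
# A minimal, self-contained sketch of the agentic loop described above.
# All functions are toy stand-ins, not a real LLM or tool API.

def interpret(goal: str) -> str:
    # Step 1: the "LLM brain" interprets the user's prompt into a task.
    return f"task: {goal.lower()}"

def plan(task: str) -> list[str]:
    # Step 2: the planning module divides the task into specific actions.
    return [f"{verb} for '{task}'" for verb in ("search", "summarize", "report")]

def run_tool(action: str, memory: list[str]) -> str:
    # Step 4: the agent core dispatches each action to an "external tool";
    # step 3: the memory list preserves context between steps.
    result = f"done: {action} (context items: {len(memory)})"
    memory.append(result)
    return result

def refine(results: list[str]) -> list[str]:
    # Step 5: a reasoning pass reviews the collected results and keeps
    # only the ones that completed successfully.
    return [r for r in results if r.startswith("done")]

def run_agent(goal: str) -> list[str]:
    memory: list[str] = []
    task = interpret(goal)
    results = [run_tool(action, memory) for action in plan(task)]
    return refine(results)
```

A real agent would replace `interpret` and `plan` with model calls and `run_tool` with genuine tool integrations, but the shape of the loop (interpret, plan, act with memory, refine) is the same.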
An early version of a general agent was released last week by OpenAI to its paid ChatGPT subscribers. The accompanying message explains both its potential for power and productivity and the care one must take to protect privacy:
“ChatGPT agent allows ChatGPT to complete complex online tasks on your behalf. It seamlessly switches between reasoning and action—conducting in-depth research across public websites, uploaded files, and connected third-party sources (like email and document repositories), and performing actions such as filling out forms and editing spreadsheets—all while keeping you in control. To use ChatGPT agent, select ‘Agent mode’ from the tools menu or type /agent in the composer. Once enabled, just describe the task you’d like completed, and the agent will begin executing it. It will pause to request clarification or confirmation whenever needed. You can also interrupt the model at any time to provide additional instructions … When you sign ChatGPT agent into websites or enable connectors, it will be able to access sensitive data from those sources, such as emails, files, or account information. Additionally, it will be able to take actions as you on these sites, such as sharing files or modifying account settings. This can put your data and privacy at risk due to the existence of ‘prompt injection’ attacks online.”
I tried the new agent for an update on an ongoing research project I have been conducting this year. It was faster than the ChatGPT-o3 deep research product I have used previously. The report was more concise but included all the data I expected for my weekly update. It also condensed and formatted relevant material into tables. I was careful about how I shared personal information with the agent. Over time, I am confident that more secure ways will be found to protect users and their privacy.
Agentic AI is inherently different from generative AI. Generative AI is like a brilliant but rather passive research assistant that requires constant, explicit direction. You must provide a series of precise, individual prompts to get it to complete your real objective. Agentic AI, on the other hand, functions more like an experienced project leader. You provide it with a high-level, strategic objective such as “Prepare a report for the provost that outlines the potential of offering a number of relevant new online AI certificate programs this fall targeted to large regional corporations.”
The agent then autonomously deconstructs this goal into a multistep workflow. It will search for relevant topics and targets, identify potential programs, compare and contrast current and potential offerings with those at competing institutions, generate an ROI-over-time analysis, synthesize the findings, draft the briefing document, access the provost’s calendar, identify available meeting times, and send a calendar invitation with the briefing attached.
That’s just one example. Agentic AI will be useful in many aspects of university operations. It will promote efficiency and accuracy and save significant money through round-the-clock productivity. Here are some key areas where agentic AI may be useful in the year ahead.
AI agents will offer the next level of artificial intelligence to higher education. We can anticipate embodied agents becoming available in a year or so. Meanwhile, I encourage us all to experiment with agentic AI as it becomes available. In doing so, we can begin to create our own personalized, proactive, professional assistant that can anticipate our needs and implement our preferences.
Who at your university is leading the move to agentic AI? Perhaps you may be in a position to model the efficiency and professionalism of AI agents.

You first met our game-changing GenAI-powered Student Assistant in August 2024, and we’ve been keeping you up to date on all of the exciting developments ever since. We’ve told you how it helps personalize your students’ learning experience on a whole new level with content that’s specific to your course textbook — but now we want to show you how.
Let’s dive in and explore some visual examples of student interactions that demonstrate its full capabilities.
Do your students ever get stuck on how to begin working on a question or topic? Using the Student Assistant, students can ask for a solid jumping-off point to get the ball rolling in the right direction. They can also ask it to clarify points of confusion, so they can successfully progress through an assignment.
The Student Assistant guides students toward the correct answer without giving it away, promoting critical thinking and self-reliance. Students are also discouraged from simply guessing and are asked to explain the logic behind their selection.
If students are struggling to comprehend what they’re learning, they can ask for topics to be elaborated on, rephrased or broken down. They can also ask for brief definitions of key terms.
With the Student Assistant, students can ask for explanations of how topics they’re studying connect to real-world scenarios. It can generate discipline- and career-specific use-cases, helping students understand the relevancy of course content within the framework of their future careers.
Getting distracted during a task is something that can happen to the best of us, and students are no exception. If students ask to be shown external or entertaining web content, the Student Assistant will redirect and keep them focused on the assignment at hand. This tool will never provide or rely on external content.
The Student Assistant lets students know that it’s okay to struggle through an assignment by encouraging them with a positive, motivational tone. With positive reassurance from the Student Assistant, students can complete assignments with confidence.
When students aren’t making personal connections with course content, it can be easy for them to lose interest in the topic altogether. Students can ask for their course topics to be turned into an engaging story, helping them key into critical themes and ideas that they may have initially overlooked.

AI is becoming a bigger part of our daily lives, and students are already using it to support their learning. In fact, our studies show that 90% of faculty feel GenAI will play an increasingly important role in higher ed.
Embracing AI responsibly, with thoughtful innovation, can help students take charge of their educational journey. So we turn to you and your students for insights and expertise to develop AI tools that support and empower learners while maintaining ethical practices, accuracy and a focus on the human side of education.
Since we introduced the Student Assistant in August 2024, we continue to ensure that faculty, alongside students, play a central role in helping to train it.
Students work directly with the tool, having conversations. Instructors review these exchanges to ensure the Student Assistant is guiding students through a collaborative, critical thinking process — helping them find answers on their own, rather than directly providing them.
“I was extremely impressed with the training and evaluation process. The onboarding process was great, and the efforts taken by Cengage to ensure parity in the evaluation process was a good-faith sign of the quality and accuracy of the Student Assistant.” — Dr. Loretta S. Smith, Professor of Management, Arkansas Tech University
The Student Assistant uses only Cengage-authored course materials — it does not search the web.
By leveraging content aligned directly with the instructor’s chosen textbook, the Student Assistant provides reliable, real-time guidance that helps students bridge knowledge gaps — without ever relying on external sources that may lack credibility.
Unlike tools that rely on potentially unreliable web sources, the Student Assistant ensures that every piece of guidance aligns with course objectives and instructor expectations.
Here’s how: by staying within our ecosystem, the Student Assistant fosters academic integrity and ensures students are empowered to learn with autonomy and confidence.
“The Student Assistant is user friendly and adaptive. The bot responded appropriately and in ways that prompt students to deepen their understanding without giving away the answer.” — Lois Mcwhorter, Department Chair for the Hutton School of Business at the University of the Cumberlands
56% of faculty cited personalization as a top use case for GenAI to help enhance the learning experience.
The Student Assistant enhances student outcomes by offering a personalized educational experience. It provides students with tailored resources that meet their unique learning needs right when they need them. With personalized, encouraging feedback and opportunities to connect with key concepts in new ways, students gain a deeper understanding of their coursework. This helps them close learning gaps independently and find the answers on their own, empowering them to take ownership of their education.
“What surprised me most about using the Student Assistant was how quickly it adapted and adjusted to feedback. While the Student Assistant helped support students with their specific questions or tasks, it did so in a way that allowed for a connection. It was not simply a bot that pointed you to the correct answer in the textbook; it assisted students similar to how a professor or instructor would help a student.” — Dr. Stephanie Thacker, Associate Professor of Business for the Hutton School of Business at the University of the Cumberlands
The Student Assistant is available 24/7 to help students practice concepts without the need to wait for feedback, enabling independent learning before seeking instructor support.
With just-in-time feedback, students can receive guidance tailored to their course, helping them work through challenges on their own schedule. By guiding students to discover answers on their own, rather than providing them outright, the Student Assistant encourages critical thinking and deeper engagement.
“Often students will come to me because they are confused, but they don’t necessarily know what they are confused about. I have been incredibly impressed with the Student Assistant’s ability to help guide students to better understand where they are struggling. This will not only benefit the student but has the potential to help me be a better teacher, enable more critical thinking and foster more engaging classroom discussion.” — Professor Noreen Templin, Department Chair and Professor of Economics at Butler Community College
The Student Assistant, embedded in MindTap, is available in beta with select titles, such as “Management,” “Human Psychology” and “Principles of Economics” — with even more coming this fall. Find the full list of titles that currently feature the Student Assistant, plus learn more about the tool and AI at Cengage, right here.

Want to build an AI tool that will seriously impact your digital learning program? Right now? For a price that you may well have in your professional development budget?
I’m launching a project to prove we can build a tool that will change the economics of learning design and curricular materials in months rather than years. Its total cost will be low enough to be paid for by workshop participation fees.
Join me.
Many of my friends running digital course design teams tell me they cannot keep up with demand. Whether their teams are large or small, centralized or instructor-led, in higher education or corporate learning and development (L&D), the problem is the same: several friends at large shops have told me that their development of new courses and redesigns of old ones has all but ground to a halt. They don’t have the time or money to fix the problem.
I’ve been asking, “Suppose we could accelerate your time to develop a course by, say, 20%?” Twenty percent is my rough, low-end guess about the gains. We should be able to get at least that much benefit without venturing into the more complex and riskier aspects of AI development. “Would a 20% efficiency gain be significant?” I ask.
Answer: “It would be huge.”
My friends tend to cite a few benefits:
The learning design bottleneck is a major business problem for many organizations. Relatively modest productivity gains would make a substantial difference for them. Generative AI seems like a good tool for addressing this problem. How hard and expensive would it be to build a tool that, on average, delivers a 20% gain in productivity?
Every LMS vendor, courseware platform provider, curricular materials vendor, and OPM provider is currently working on tools like this. I have talked to a handful of them. They all tell me it’s not hard—depending on your goals. Vendors have two critical constraints. First, the market is highly suspicious of black-box vendor AI and very sensitive to AI products that make mistakes. EdTech companies can’t approach the work as an experiment. Second, they must design their AI features to fit their existing business goals. Every feature competes with other priorities that their clients are asking for.
The project I am launching—AI Learning Design Assistant (ALDA)—is different. First, it’s design/build. The participants will drive the requirements for the software. Second, as I will spell out below, our software development techniques will be relatively simple and easy to understand. In fact, the value of ALDA is as much in learning patterns to build reliable, practical, AI-driven tools as it is in the product itself. And third, the project is safe.
ALDA is intended to produce a first draft for learning designers. No students need to see content that has not been reviewed by a human expert or interact directly with the AI at all. The process by which ALDA produces its draft will be transparent and easy to understand. The output will be editable and importable into the organization’s learning platform of choice.
Here’s how we’ll do it:
The consistent response from everyone in EdTech I’ve talked to who is doing this kind of work is that we can achieve ALDA’s performance goals with these techniques. If we were trying to get 80% or 90% accuracy, that would be different. But a 20% efficiency gain with an expert human reviewing the output? That should be very much within reach. The main constraints on the ALDA project are time and money. Those are deliberate. Constraints drive focus.
Let’s build something useful. Now.
Teams that want to participate in the workshop will have to apply. I’m recruiting teams that have immediate needs to build content and are willing to contribute their expertise to making ALDA better. There will be no messing around. Participants will be there to build something. For that reason, I’m quite flexible about who is on your team or how many participate. One person is too few, and eight is probably too many. My main criterion is that the people you bring are important to the ALDA-related project you will be working on.
This is critical because we will be designing ALDA together based on the experience and feedback from you and the other participants. In advance of the first workshop, my colleagues and I will review any learning design protocol documentation you care to share and conduct light interviews. Based on that information, you will have access to the first working iteration of ALDA at the first workshop. For this reason, the workshop series will start in the spring. While ALDA isn’t going to require a flux capacitor to work, it will take some know-how and effort to set up.
The workshop cohort will meet virtually once a month after that. Teams will be expected to have used ALDA and come up with feedback and suggestions. I will maintain a rubric for teams to use based on the goals and priorities for the tool as we develop them together. I will take your input to decide which features will be developed in the next iteration. I want each team to finish the workshop series with the conviction that ALDA can achieve those performance gains for some important subset of their course design needs.
Anyone who has been to one of my Empirical Educator Project (EEP) or Blursday Social events knows that I believe that networking and collaboration are undervalued at most events. At each ALDA workshop, you will have time and opportunities to meet with and work with each other. I’d love to have large universities, small colleges, corporate L&D departments, non-profits, and even groups of students participating. I may accept EdTech vendors if and only if they have more to contribute to the group effort than just money. Ideally, the ALDA project will lead to new collaborations, partnerships, and even friendships.
The workshop also helps us learn together how to teach AI about teaching and learning. AI research is showing us how much better the technology can be when it’s trained on good data. There is so much bad pedagogy on the internet, and the content that is good is rarely marked up in a way that helps an AI learn its patterns. What does a good learning objective or competency look like? How do you write hints or assessment feedback that helps students learn but doesn’t give away the answers? How do you create alignment among the components of a learning design?
The examples we will be using to teach the AI have not only been fine-tuned for effectiveness using machine learning over many years; they are also semantically coded to capture some of these nuances. These are details that even many course designers haven’t mastered.
I see a lot of folks rushing to build “robot tutors in the sky 2.0” without a lot of care to make sure the machines see what we see as educators. They put a lot of faith in data science but aren’t capturing the right data because they’re ignoring decades of learning science. The ALDA project will teach us how to teach the machines about pedagogy. We will learn to identify the data structures that will empower the next generation of AI-powered learning apps. And we will do that by becoming better teachers of ALDA using the tools of good teaching: clear goals, good instructions, good examples, and good assessments. Much of it will be in plain English, and the rest will be in a simple software markup language that any computer science undergraduate will know.
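To make the idea of semantically coded pedagogy concrete, here is a small sketch of learning-design components and an alignment check in Python. The field names and structure are my own illustrative guesses, not ALDA’s actual format or markup language.

```python
# A hypothetical sketch of semantically structured learning-design content.
# Field names (bloom_level, objective_id, etc.) are illustrative, not a
# real standard or ALDA's actual schema.

from dataclasses import dataclass, field

@dataclass
class LearningObjective:
    id: str
    statement: str      # e.g. "Explain the difference between X and Y"
    bloom_level: str    # cognitive level, e.g. "understand", "apply"

@dataclass
class AssessmentItem:
    objective_id: str   # which objective this item assesses
    stem: str           # the question itself
    hints: list[str] = field(default_factory=list)  # nudge, don't reveal
    feedback: str = ""  # explains the *why* without giving away the answer

def is_aligned(objective: LearningObjective, item: AssessmentItem) -> bool:
    # Alignment among components: every assessment item should trace
    # back to a stated learning objective.
    return item.objective_id == objective.id
```

The point of structure like this is that an AI trained or prompted with it can see what educators see: which hint belongs to which question, which question serves which objective, and where the alignment breaks.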
The cost for the workshop series, including all source code and artifacts, is $25,000 for your team. You can find an application form and prospectus here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.
You can also find a downloadable two-page prospectus and an online participation application form here. To contact me for more information, please fill out this form:
[Update: I’m hearing from a couple of you that your messages to me through the form above are getting caught in the spam filter. Feel free to email me at [email protected] if the form isn’t getting through.]
I hope you’ll join us.