Tag: agentic

  • Transitioning to the Agentic University 2026–27

    The dramatic enhancement in the capability of AI as it moves from bots to agents will bring about efficiencies and have a far greater impact on the day-to-day operations, strategies and effectiveness of our institutions. We will become less expensive, more personalized and more responsive to students and employers. Those are big claims, so for this column, I turned to my personal assistant, Google Gemini 2.5 Pro on Nov. 1, 2025, to help me identify the pathway to those outcomes.

    Caveats

    In this article, I can only hit the high points along the way in developing an agentic AI–enhanced university. This is merely a broad-brush, incomplete view that each university should adjust and enhance to fit its own structures and needs. I also encourage you to read the valuable McKinsey article "Deploying agentic AI with safety and security: A playbook for technology leaders," published on Oct. 15, 2025, for important security and safety information.

    Gemini Pro prefaced its recommendations with:

    This is the central strategic challenge for the next 36 months. The institution has successfully navigated the first wave—generative AI, which is about creating content. You are right to be concerned about the second, more powerful wave: agentic AI, which is about taking action.

    Where generative AI is a research assistant, an agentic AI is a 24/7, autonomous project manager. It can understand a high-level goal (e.g., “increase retention for first-generation students”), create a multi-step plan, execute that plan across different software systems and learn from its mistakes without constant human prompting.

    Competitors will use this to create profound operational efficiencies and new, hyper-personalized student services. The institutions that wait will be competing on price and quality against a rival that has automated its back office and given every student a personal AI adviser.

    This will take significant thought and planning across the university. It will be important that we include all members of the university community to make this a coordinated, comprehensive change that will significantly advance the efficiency, effectiveness and relevance of the institution. Timing is important. We must begin immediately if we hope to have major changes in place before the end of 2027. Let’s begin!

    First Half of 2026: Foundation and Vision

    We will need an executive task force with the knowledge, resources and shared vision to accomplish this task. Gemini recommends we be sure to include:

    • Chief information officer: To map the data and systems.
    • Chief financial officer: To identify cost-saving opportunities and fund pilots.
    • Provost: To champion the academic quality and student-facing initiatives.
    • VP of enrollment: To represent the entire student life cycle (recruitment to alumni).
    • VP of operations: To represent the “back office” (HR, grounds, facilities).

    The executive task force will want to set up opportunities for input on and support of the initiative. Perhaps the first step will be to seek input on whether the first priority should be quality improvement (hyperpersonalization of services to learners) or cost efficiency (operational excellence). Both will be needed in the long run to survive agent-enabled competition that is both higher quality and less expensive. Seeking input on this choice can also foster universitywide awareness. Perhaps a broad university forum could be scheduled on the topic, with smaller, targeted follow-ups for faculty, staff, students, administrators and external stakeholder groups as the initiative proceeds.

    One of the first steps of the executive task force will be to perform a universitywide Agent Readiness Audit. Since agents run on data and processes, we need to identify any data silos and process bottlenecks. These will be among our first priorities to ensure that agents can perform work smoothly and efficiently. Resolving these may also be among the most time-consuming changes. However, removing these data roadblocks can begin to show immediate progress in responsiveness and efficiency.

    Second Half of 2026 Into Spring 2027: Pilot and Infrastructure

    Gemini suggests that a good starting point in the summer of 2026 would be to set up two pilots:

    • Cost-Saving Pilot: The Facilities Agent
      • Goal: Reduce energy and maintenance costs.
      • Action: An AI agent integrates with the campus event schedule, weather forecasts and the building HVAC/lighting systems. It autonomously adjusts climate control and lighting for actual use, not just a fixed timer. It also fields all maintenance requests, triages them and dispatches staff or robotic mowers/vacuums automatically.
    • Quality-Improvement Pilot: The Proactive Adviser Agent
      • Goal: Improve retention for at-risk students.
      • Action: An agent monitors student data in real time (LMS engagement, attendance, early grade-book data). It doesn't replace the human adviser. It acts as the adviser's assistant, flagging a student who is at risk before the midterm and autonomously executing a plan: sending a nudge, offering to schedule a tutoring session and summarizing the risk profile for the human adviser to review. (A minimal sketch of this loop follows the list.)
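
    To make the adviser pilot concrete, here is a minimal Python sketch of the monitoring-and-escalation loop it describes, under assumed data and a made-up scoring rule. The names (StudentSignal, RISK_THRESHOLD, the nudge wording) are hypothetical placeholders rather than an existing campus system; the point is that the agent flags and drafts while the human adviser decides.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical risk threshold; a real pilot would tune this against historical retention data.
    RISK_THRESHOLD = 0.7

    @dataclass
    class StudentSignal:
        student_id: str
        lms_logins_last_week: int
        attendance_rate: float       # 0.0-1.0
        current_grade_points: float  # early grade-book average on a 4.0 scale

    def risk_score(signal: StudentSignal) -> float:
        """Toy scoring rule: low engagement, attendance and grades raise the risk score."""
        engagement = min(signal.lms_logins_last_week / 5.0, 1.0)
        grades = signal.current_grade_points / 4.0
        return 1.0 - (0.4 * engagement + 0.3 * signal.attendance_rate + 0.3 * grades)

    def plan_interventions(signal: StudentSignal) -> list[str]:
        """The agent drafts the plan; the human adviser approves, edits or discards it."""
        return [
            f"Send a nudge email to {signal.student_id} offering a check-in",
            f"Offer to schedule a tutoring session for {signal.student_id}",
            f"Summarize the risk profile of {signal.student_id} for the adviser's queue",
        ]

    def review_cohort(signals: list[StudentSignal]) -> None:
        for signal in signals:
            score = risk_score(signal)
            if score >= RISK_THRESHOLD:
                print(f"[{datetime.now():%Y-%m-%d}] {signal.student_id} flagged (risk={score:.2f})")
                for step in plan_interventions(signal):
                    print("  -", step)  # queued for human review, not sent automatically

    review_cohort([StudentSignal("S-1001", lms_logins_last_week=0,
                                 attendance_rate=0.5, current_grade_points=1.8)])
    ```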

    Our most significant centralized expense will be to set up a secure digital sandbox. The pilots cannot live on a faculty member’s laptop. The CIO must lead the creation of a central, secure platform. This sandbox is a secure environment where AI agents can be developed, tested and given access to the university’s core data APIs (e.g., SIS, LMS and ERP).
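
    As one hedged illustration of what "given access to the university's core data APIs" might look like inside such a sandbox, here is a small Python sketch of a scoped access policy. The system names and scopes are invented examples, not a vendor configuration; the design choice it shows is least-privilege access, with human approval required for anything that reaches students directly.

    ```python
    # Hypothetical sandbox access policy: each agent gets only the narrowest scopes it needs,
    # and anything that touches students directly requires human approval. Names are illustrative.
    SANDBOX_POLICY = {
        "facilities_agent": {
            "read": ["events:schedule", "weather:forecast", "facilities:hvac_state"],
            "write": ["facilities:hvac_setpoints"],        # write access limited to one system
            "requires_human_approval": ["facilities:work_orders"],
        },
        "adviser_agent": {
            "read": ["lms:engagement", "sis:attendance", "sis:early_grades"],
            "write": [],                                   # drafts only; humans send communications
            "requires_human_approval": ["crm:student_outreach"],
        },
    }

    def is_allowed(agent: str, scope: str, mode: str = "read") -> bool:
        """Check an agent's request against the policy before any real SIS/LMS/ERP call is made."""
        return scope in SANDBOX_POLICY.get(agent, {}).get(mode, [])

    assert is_allowed("adviser_agent", "lms:engagement")
    assert not is_allowed("adviser_agent", "crm:student_outreach", mode="write")
    ```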

    Gemini reminds me that, concurrently, we must set up a new entity. The generative AI rules were about plagiarism. The agentic AI rules must be about liability. The new entity is a kind of Agent Accountability Framework. It deals with policy questions such as:

    • Who is responsible when an agent gives a student incorrect financial aid advice?
    • What is the off-switch when an agent-driven workflow (like course wait lists) creates an inequitable outcome? Who has authority to flip the switch?
    • By whom and how are an agent’s actions audited?
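
    One way to make those accountability questions answerable is to require an audit record for every consequential agent action. The sketch below is a hypothetical Python structure for such a record; the field names and example values are assumptions for illustration, not part of any existing framework.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AgentAuditRecord:
        """Hypothetical audit entry: every consequential agent action writes one,
        so the questions above have a concrete trail to be answered against."""
        agent_id: str
        action: str                 # what the agent did
        systems_touched: list[str]  # e.g., ["SIS", "LMS"]
        rationale: str              # the goal or instruction the agent was pursuing
        reversible: bool            # can a human undo this in one step?
        responsible_owner: str      # the office accountable for this agent's behavior
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    record = AgentAuditRecord(
        agent_id="adviser_agent",
        action="Drafted a tutoring-session offer for student S-1001",
        systems_touched=["LMS", "SIS"],
        rationale="Retention goal: flag at-risk students before the midterm",
        reversible=True,
        responsible_owner="Office of the Provost",
    )
    print(record)
    ```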

    Implementation Across the University Through Fall 2027

    There will be many personnel and staffing topics to address. By the summer of 2027, we should be well on the way to refining roles and position descriptions of employees. The emphasis should be on efficient, enhanced redesign of roles rather than on staffing cuts. Some cuts will come from normal turnover as staff find more attractive opportunities or retire. In most cases, employees will become much more productive, handing off their redundant, lower-level work to agents. For example, Gemini Pro envisions:

    • The admissions counselor who used to answer 500 identical emails now manages a team of AI agents that handle the routine questions, freeing the counselor to spend one-on-one time with high-priority applicants.
    • The IT help desk technician no longer resets passwords. The technician now trains the AI agent on how to troubleshoot new software and directly handles only the most complex, level-three issues.
    • The human adviser now manages a caseload of 500 students (not 150), because the AI assistant handles 90 percent of the administrative churn, allowing the adviser to focus on high-impact mentoring.

    Gemini Pro suggests that this approach can result in a higher-quality, more efficient university that will be able to compete in the years ahead. The final step is the most critical and is the job of everyone, from the president and board on down. We must champion a culture where AI agents are seen as collaborators, not replacements. This is a human-AI “co-bot” workforce.

    The institutions that win in 2027 will be those that have successfully trained their managers to lead mixed teams of human and AI employees. This is the single greatest competitive advantage one can build.

    This framework will position the university not just to survive the agentic AI wave but to lead it, creating an institution that is both more efficient and, critically, more human-centered.


  • Agentic AI Invading the LMS and Other Things We Should Know

    Over the past 18 months, I’ve been spending the majority of my time writing and speaking about how I think we can and should continue to teach writing even as we have this technology that is capable of generating synthetic text. While my values regarding this issue are unshakable, the world undeniably changes around me, which requires an ongoing vigilance regarding the capabilities of this technology.

    But like most people, I don't have unlimited time to stay on top of these things. One of my recommendations in More Than Words for navigating these challenges is to "find your guides," the people you can trust to keep an eye on aspects of the issue for you.

    One of my guides for the entirety of this period has been Marc Watkins, someone engaged in staying on top of how the technology, and the ways students use it, are evolving.

    I thought it might be helpful to others to share the questions I wanted to ask Marc for my own edification.

    Marc Watkins directs the AI Institute for Teachers and is an assistant director of academic innovation at the University of Mississippi, where he is a lecturer in writing and rhetoric. When training faculty in applied artificial intelligence, he believes educators should be equally supported whether they choose to work with AI or to add friction that curbs AI's influence on student learning. He regularly writes about AI and education on his Substack, Rhetorica.

    Q: One of the things I most appreciate about the work you're doing in thinking about the intersection of education and generative AI is that you actively engage with the technology, asking what a particular tool may mean for students and classes. I appreciate it because my personal interest in using these things, beyond staying generally familiar with them, is limited, and I know that we share similar values at the core of the work of reading and writing. So, my first question is for those of us who aren't putting these things through their paces: What's the state of things? What do you think instructors should, specifically, know about the capacities of gen AI tools?

    A: Thanks, John! I think we’re of the same mind when it comes to values and AI. By that, I mean we both see human agency and will as key moving forward in education and in society. Part of my life right now is talking to lots of different groups about AI updates. I visit with faculty, administration, researchers, even quite a few folks outside of academia. It’s exhausting just to keep up and nearly impossible to take stock.

    We now have agentic AI that completes tasks using your computer for you; multimodal AI that can see and interact with you using a computer voice; machine reasoning models that take simple prompts and run them in loops repeatedly to guess what a sophisticated response might look like; browser-based AI that can scan any webpage and perform tasks for you. I’m not sure students are aware of any of what AI can do beyond interfaces like ChatGPT. The best thing any instructor can do is have a conversation with students to ask them if they are using AI and gauge how it is impacting their learning.

    Q: I want to dig into the AI “agents” a bit more. You had a recent post on this, as did Anna Mills, and I think it’s important for folks to know that these companies are purposefully developing and selling technology that can go into a Canvas course and start doing “work.” What are we to make of this in terms of how we think about designing courses?

    A: I think online assessment is generally broken at this point and won't be saved. But online learning still has a chance and is something we should fight for. For all of its many flaws, online education has given people a valid pathway to a version of college education that they might not have been able to afford otherwise. There are too many issues with equity and access to completely remove online learning from higher education, but that doesn't mean we cannot radically rethink what it means to learn in online spaces. For instance, you can assign your students a process notebook in an online course that involves them writing by hand with pen and paper, then taking a photograph or scan of it and uploading it. The [optical character recognition] function within many of the foundation models will be able to transcribe most handwriting into legible text. We can and should look for ways to give our students embodied experiences within disembodied spaces.

    Q: In her newsletter, Anna Mills calls on AI companies to collaborate on keeping students from deploying these agents in service of doing all their work for them. I’m skeptical that there’s any chance of this happening. I see an industry that seems happy to steamroll instructors, institutions and even students. Am I too cynical? Is there space for collaboration?

    A: There’s space for collaboration for sure, and limiting some of the more egregious use cases, but we also have to be realistic about what’s happening here. AI developers are moving fast and breaking things with each deployment or update, and we should be deeply skeptical when they come around to offer to sweep up the pieces, lest we forget how they became broken in the first place.

    Q: I'm curious whether the development of the technology tracks with what you would have predicted a year or even 18 months ago. How fast do you think this stuff is moving in terms of its capacities as they relate to school and learning? What do you see on the horizon?

    A: The problem we're seeing is one of uncritical adoption, hype and acceleration. AI labs create a new feature or use case and deploy it within a few days for free or at low cost, and industry quickly adopts it to bring the latest AI features to enterprise products. What this means is that the non-AI applications we've used for years suddenly get AI integrated into them, or, if they already have an AI feature, see it rapidly updated.

    Most of these AI updates aren't tested enough to be trusted outside of human-in-the-loop assistance. Using them otherwise makes us all beta testers. It's creating "work slop," where employees use AI uncritically to save time and produce error-laden work that then takes time and resources to address. Compounding things even more, it increasingly looks like the venture capital feeding AI development is one of the prime reasons our economy isn't slipping into recession. Students and faculty find themselves at ground zero for most of this, as education looks like one of the major industries being impacted by AI.

    Q: One of the questions I often get when I’m working with faculty on campuses is what I think AI “literacy” looks like, and while I have my share of thoughts, I tend to pivot back to my core message, which is that I’m more worried about helping students develop their human capacities than teaching them how to work with AI. But let me ask you, what does AI literacy look like?

    A: I think AI literacy really isn't about using AI. For me, I define AI literacy as learning how the technology works and understanding its impact on society. Using that definition, I think we can and should integrate aspects of AI literacy throughout our teaching. The working-with-AI-responsibly part, what I'd call AI fluency, has its place in certain classes and disciplines but needs to go hand in hand with AI literacy; otherwise, you risk uncritically adopting a technology with little understanding of it, rather than demystifying AI and helping students understand its impact on our world.

    Q: Whenever I make a campus visit, I try to have a chance to talk to students about their AI use, and for the most part I see a lot of critical thinking about it, where students recognize many of the risks of outsourcing all of their work, but also share that within the system they’re operating in, it sometimes makes sense to use it. This has made me think that ultimately, our only response can be to treat the demand side of the equation. We’re not going to be able to police this stuff. The tech companies aren’t going to help. It’s on the students to make the choices that are most beneficial to their own lives. Of course, this has always been the case with our growth and development. What do you think we should be focused on in managing these challenges?

    A: My current thinking is that we should teach students discernment when it comes to AI tools, and likely teach it to ourselves, too. There's no rule book or set of priors for us to call upon when we deal with a machine that mimics human intelligence. My approach is radical honesty with students and faculty. By that I mean the following: I cannot police your behavior here, and no one else is going to do that, either. It is up to all of us to form a social contract and find common agreement about where this technology belongs in our lives and create clear boundaries where it does not.


  • Why agentic AI matters now more than ever

    Key points:

    For years now, the promise of AI in education has centered on efficiency: grading faster, recommending better content, or predicting where a student might struggle.

    But at a moment when learners face disconnection, systems are strained, and expectations for personalization are growing, task automation feels…insufficient.

    What if we started thinking less about what AI can do and more about how it can relate?

    That’s where agentic AI comes in. These systems don’t just answer questions. They recognize emotion, learn from context, and respond in ways that feel more thoughtful than transactional. Less machine, more mentor.

    So, what’s the problem with what we have now?

    It’s not that existing AI tools are bad. They’re just incomplete.

    Here’s where traditional AI systems tend to fall short:

    • NLP fine-tuning
       Improves the form of communication but doesn’t understand intent or depth.
    • Feedback loops
       Built to correct errors, not guide growth.
    • Static knowledge bases
       Easy to search but often outdated or contextually off.
    • Ethics and accessibility policies
       Written down but rarely embedded in daily workflows.
    • Multilingual expansion
       Translates words, not nuance or meaning across cultures.

    These systems might help learners stay afloat. They don’t help them go deeper.

    What would a more intelligent system look like?

    It wouldn’t just deliver facts or correct mistakes. A truly intelligent learning system would:

    • Understand when a student is confused or disengaged
    • Ask guiding questions instead of giving quick answers
    • Retrieve current, relevant knowledge instead of relying on a static script
    • Honor a learner’s pace, background, and context
    • Operate with ethical boundaries and accessibility in mind–not as an add-on, but as a foundation

    In short, it would feel less like a tool and more like a companion. That may sound idealistic, but maybe idealism is what we need.

    The tools that might get us there

    There’s no shortage of frameworks being built right now–some for developers, others for educators and designers. They’re not perfect. But they’re good places to start.

    Framework          Type      Use
    LangChain          Code      Modular agent workflows, RAG pipelines
    Auto-GPT           Code      Task execution with memory and recursion
    CrewAI             Code      Multi-agent orchestration
    Spade              Code      Agent messaging and task scheduling
    Zapier + OpenAI    No-code   Automated workflows with language models
    Flowise AI         No-code   Visual builder for agent chains
    Power Automate     Low-code  AI in business process automation
    Bubble + OpenAI    No-code   Build custom web apps with LLMs

    These tools are modular, experimental, and still evolving. But they open a door to building systems that learn and adjust–without needing a PhD in AI to use them.
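
    For readers who want to see the shape of what these frameworks provide, here is a minimal, framework-free Python sketch of the basic loop most of them implement in some form: a goal, a planning step, tool calls, and a memory of what happened. It deliberately avoids any specific library's API, since those change quickly; the planner and tool functions are hypothetical stand-ins.

    ```python
    from typing import Callable

    # Hypothetical "tools" an education-focused agent might call; real ones would hit an LMS or search index.
    def lookup_glossary(term: str) -> str:
        return f"Definition of {term} from the course glossary."

    def suggest_practice(term: str) -> str:
        return f"Two practice problems on {term}, easiest first."

    TOOLS: dict[str, Callable[[str], str]] = {
        "lookup_glossary": lookup_glossary,
        "suggest_practice": suggest_practice,
    }

    def plan(goal: str) -> list[tuple[str, str]]:
        """Stand-in planner. Frameworks such as LangChain or CrewAI would delegate this step to an LLM."""
        topic = goal.split()[-1]
        return [("lookup_glossary", topic), ("suggest_practice", topic)]

    def run_agent(goal: str) -> list[str]:
        memory: list[str] = []
        for tool_name, argument in plan(goal):
            result = TOOLS[tool_name](argument)  # execute one step of the plan
            memory.append(result)                # keep it for later steps or human review
        return memory

    for step in run_agent("Help a student who is stuck on recursion"):
        print(step)
    ```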

    A better system starts with a better architecture

    Here’s one way to think about an intelligent system’s structure:

    Learning experience layer

    • Where students interact, ask questions, get feedback
    • Ideally supports multilingual input, emotional cues, and accessible design

    Agentic AI core

    • The “thinking” layer that plans, remembers, retrieves, and reasons
    • Coordinates multiple agents (e.g., retrieval, planning, feedback, sentiment)

    Enterprise systems layer

    • Connects with existing infrastructure: SIS, LMS, content repositories, analytics systems

    This isn’t futuristic. It’s already possible to prototype parts of this model with today’s tools, especially in contained or pilot environments.
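
    As one hedged way to prototype those three layers in a contained pilot, the Python sketch below stubs each layer in a few lines. The class names and the sentiment heuristic are illustrative assumptions, not a product design; a real build would swap the stubs for actual LMS/SIS clients and model calls.

    ```python
    # Enterprise systems layer: thin wrappers over existing infrastructure (stubbed here).
    class LMSClient:
        def recent_activity(self, student_id: str) -> int:
            return 1  # e.g., LMS logins this week; a real client would query the LMS API

    # Agentic AI core: small, single-purpose agents that the core coordinates.
    class SentimentAgent:
        def assess(self, message: str) -> str:
            # Toy heuristic standing in for a real model call.
            cues = ("stuck", "confused", "lost")
            return "frustrated" if any(word in message.lower() for word in cues) else "neutral"

    class TutorCore:
        def __init__(self, lms: LMSClient, sentiment: SentimentAgent):
            self.lms = lms
            self.sentiment = sentiment

        def respond(self, student_id: str, message: str) -> str:
            mood = self.sentiment.assess(message)
            activity = self.lms.recent_activity(student_id)
            if mood == "frustrated" or activity == 0:
                return "Let's slow down. Which part felt unclear? Here's a guiding question to start with."
            return "Nice progress. Want a slightly harder problem on the same idea?"

    # Learning experience layer: the surface students actually talk to (a console stand-in here).
    core = TutorCore(LMSClient(), SentimentAgent())
    print(core.respond("S-1001", "I'm stuck on loops and feel lost"))
    ```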

    So, what would it actually do for people?

    For students:

    • Offer guidance in moments of uncertainty
    • Help pace learning, not just accelerate it
    • Present relevant content, not just more content

    For teachers:

    • Offer insight into where learners are emotionally and cognitively
    • Surface patterns or blind spots without extra grading load

    For administrators:

    • Enable guardrails around AI behavior
    • Support personalization at scale without losing oversight

    None of this replaces people. It just gives them better support systems.

    Final thoughts: Less control panel, more compass

    There’s something timely about rethinking what we mean by intelligence in our learning systems.

    It’s not just about logic or retrieval speed. It’s about how systems make learners feel–and whether those systems help learners grow, question, and persist.

    Agentic AI is one way to design with those goals in mind. It’s not the only way. But it’s a start.

    And right now, a thoughtful start might be exactly what we need.
