Tag: agentic

  • The Rise of the Agentic AI University in 2026

    In a very active and highly competitive environment, AI has grown at breakneck speed. As with so many technologies, business and industry have moved far faster than academe to embrace the cost-saving, capability-expanding and wholly innovative aspects of AI. Fraught with our own industry-specific challenges, such as enrollment downturns, sharp drops in perceived value, the looming “demographic cliff” in higher ed and rapidly shifting regulatory policy at the state and federal levels, our field has been cast into a sea of pressing priorities for change.

    This year is likely to be the one where we begin to implement institutionwide AI-powered solutions to help us move forward with agility and effectiveness in adapting to the changing environment. As Aviva Legatt writes in Forbes’ “7 Decisions That Will Define AI in Higher Education in 2026”,

    “Over the past year, the shift from AI as a tool to AI as institutional infrastructure has become unmistakable. Students have already integrated AI into daily academic workflows, vendors are pushing enterprise deployments, federal and accreditation expectations are rising and labor-market volatility is forcing colleges to rethink how learning connects to opportunity. At the same time, agentic AI is moving from experimentation to execution, reshaping how advising, enrollment, learning support and operations can be delivered. In 2026, these threads converge: institutions that operationalize AI will widen their performance gap, while those that don’t will inherit a shadow system they can’t control.”

    Yet where these changes will take place within the field, how they will impact our higher education workforce and whether we can change in time to meet market demand by producing knowledgeable, skilled employees for the economy at large all remain in question. For those of us in early and midcareer positions, pressing questions arise: “Will I still have a job? How will my position description change? Will I be prepared? What should I do now to ensure I remain a valuable asset to my university?” It is my purpose in this brief column to identify some of the areas in which changes seem most likely to take place in this new year.

    To date, we have made significant progress in developing chatbot-hosted, transactional generative AI, in which the user poses questions and the bot responds. One of many high-quality examples is Khan Academy’s Khanmigo. These tools have been effective for tutoring, study apps, curriculum design and much more.

    The use of generative AI continues to expand in new ways. Meanwhile, the development of AI agents is driving the expansion and efficiency of AI. Agentic AI gives us tools capable of reasoning about what is needed to accomplish a goal, sequencing a series of stacked tasks and completing those tasks efficiently without direct supervision, much as a human assistant would work through a series of tasks to achieve desired outcomes. This often includes collecting data, analyzing it, identifying and implementing ways to accomplish the goal, documenting the findings and refining the approach.
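
    To make that pattern concrete, here is a minimal sketch of the plan-and-execute loop agentic systems follow, written in Python with the model calls stubbed out; the function names and task breakdown are illustrative assumptions, not any vendor’s API.

    ```python
    # Minimal, illustrative plan-and-execute agent loop (model calls stubbed).

    def plan(goal: str) -> list[str]:
        """Stub: a model would break a high-level goal into ordered tasks."""
        return [
            f"collect data relevant to: {goal}",
            "analyze the collected data",
            "identify and implement an approach",
            "document findings and refine the approach",
        ]

    def execute(task: str) -> str:
        """Stub: a model plus tools/APIs would carry out one task."""
        return f"completed: {task}"

    def run_agent(goal: str) -> list[str]:
        """Work through the stacked tasks without direct supervision."""
        return [execute(task) for task in plan(goal)]

    if __name__ == "__main__":
        for line in run_agent("reduce the summer advising backlog"):
            print(line)
    ```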

    This opens the possibility that portions of individual position descriptions can be offloaded from humans and integrated into agentic AI duties. The result is fewer employees overall; lower indirect costs, such as insurance, vacation and sick leave; and a more cost-efficient operation. Institutions are already beginning to move from scattered pilots to governed agentic workflows that will define the next decade of student success and operational efficiency.

    I asked my virtual digital assistant, Gemini 3 Deep Research, on Dec. 28 to suggest some of the implementations we will most likely see broadly implemented to address the student lifecycle. Gemini suggested that the work will be “personalized, proactive and persistent.” Gemini 3 Thinking mode predicted we will see a wide range of implementations in 2026, including:

    1. The 24/7 Digital Concierge (Recruitment): Beyond simple FAQs, agents now manage the entire “nurturing funnel,” handling complex credit transfer evaluations and scheduling campus tours via multichannel SMS and web interfaces. Source: 2026 Higher Education Digital Marketing Trends (EducationDynamics)
    2. Socratic Tutors for Every Learner: AI tutors that don’t just give answers but engage in Socratic dialogue, scaffolding difficult concepts and generating infinite practice problems based on real-time course performance. Source: AI Tutors and the Human Data Workforce 2026 Guide (HeroHunt)
    3. Mental Health First Responders: AI agents serving as low-barrier triage points, offering immediate coping strategies for anxiety and seamlessly escalating high-risk cases to human counselors. Source: How AI Chatbots Are Transforming Student Services (Boundless Learning)
    4. Predictive Intervention for Gatekeeper Courses: Using “behavioral trace data” from LMS platforms to identify students struggling in high-risk introductory courses (e.g., College Algebra, Gen Chem) before the first midterm. Source: Predictive Analytics in Higher Ed: Promises and Challenges (AIR)
    5. Admissions Document Verification Agents: Autonomous systems that verify international credentials, flag missing forms and check for eligibility in milliseconds, reducing the time to decision from weeks to minutes. Source: AI Agents for Universities: Automating Admissions (Supervity)

    Gemini 3 Thinking mode continued with examples of back-office efficiencies that AI will provide to universities that are early adopters of an agentic AI approach:

    1. Automated University Accounting: AI agents that handle invoice processing, general ledger coding and “smart” expense management, ensuring policy compliance without manual entry. Source: 5 Use Cases for AI Agents in Finance (Centric Consulting)
    2. Grant Management and Writing Assistants: Agents that scan federal databases (Grants.gov) to match faculty research with funding, draft initial narratives and manage postaward financial reporting. Source: AI Grant Management: Driving Efficiency (Fluxx AI)
    3. Dynamic Enrollment Marketing Agents: “Search everywhere optimization” (GEO/AEO) tools that ensure the university appears in AI-generated best-of lists and voice-search results on platforms like TikTok and Reddit. Source: Transitioning to the Agentic University 2026–27 (UPCEA)
    4. Procurement and Spend Analysis: Agents that continuously monitor contract compliance and supplier health, identifying hidden savings that can be reallocated to student scholarships. Source: How AI Agents Change Procurement Work in 2026 (Suplari)
    5. Regulatory Reporting and Audit Agents: Systems that autogenerate audit-ready reports for state and federal compliance, reducing the administrative burden on institutional research offices. Source: FINRA 2026 Oversight Report: The Reckoning for Autonomous AI (Snell & Wilmer)
    6. HR and Benefits Support: 24/7 staff-facing agents that answer complex questions about leave policies, payroll and benefits, freeing HR staff for strategic culture-building work. Source: Agentic AI: Top Tech Trend of 2025/2026 (Gartner/EAB)
    7. The “AI-First” Curriculum Redesign: Moving beyond academic integrity to “AI fluency” as a graduation standard, where agents help faculty redesign assessments to focus on process rather than product. Source: 2026 Predictions for AI in Higher Education (Packback)

    Of course, there will be many comparable efficiencies implemented in other areas of universities. These are examples that demonstrate the cost and time efficiencies that can be realized through thoughtful implementation of agentic AI. In the Nov. 12 issue of this column, “Transitioning to the Agentic University 2026–27,” I detail an approach to begin the administrative agentic AI transition.

    Although there is less public discussion of direct instruction by AI, it is inevitable in coming years. AI-led instruction will most likely begin in noncredit offerings, but ultimately no teaching task will be out of reach. It will come at significantly lower cost, with greater personalization and instant updating as each new development in the field happens. How can we best prepare our colleagues in higher education for the changes that are coming this year and each successive year?

  • Transitioning to the Agentic University 2026–27

    The dramatic enhancement in the capability of AI as it moves from bots to agents will bring about efficiencies and have a far greater impact on the day-to-day operations, strategies and effectiveness of our institutions. We will become less expensive, more personalized and more responsive to students and employers. Those are big claims, so for this column, I turned to my personal assistant, Google Gemini 2.5 Pro on Nov. 1, 2025, to help me with identifying the pathway to those outcomes.

    Caveats

    In this article, I can only hit the high points along the way in developing an agentic AI–enhanced university. This is merely a broad-brush, incomplete view that each university should adjust and enhance to fit its own structures and needs. I also encourage you to read the valuable McKinsey article “Deploying agentic AI with safety and security: A playbook for technology leaders,” published on Oct. 15, 2025, for important security and safety information.

    Gemini Pro prefaced its recommendations with:

    This is the central strategic challenge for the next 36 months. The institution has successfully navigated the first wave—generative AI, which is about creating content. You are right to be concerned about the second, more powerful wave: agentic AI, which is about taking action.

    Where generative AI is a research assistant, an agentic AI is a 24/7, autonomous project manager. It can understand a high-level goal (e.g., “increase retention for first-generation students”), create a multi-step plan, execute that plan across different software systems and learn from its mistakes without constant human prompting.

    Competitors will use this to create profound operational efficiencies and new, hyper-personalized student services. The institutions that wait will be competing on price and quality against a rival that has automated its back office and given every student a personal AI adviser.

    This will take significant thought and planning across the university. It will be important that we include all members of the university community to make this a coordinated, comprehensive change that will significantly advance the efficiency, effectiveness and relevance of the institution. Timing is important. We must begin immediately if we hope to have major changes in place before the end of 2027. Let’s begin!

    First Half of 2026: Foundation and Vision

    We will need an executive task force with the knowledge, resources and shared vision to accomplish this task. Gemini recommends we be sure to include:

    • Chief information officer: To map the data and systems.
    • Chief financial officer: To identify cost-saving opportunities and fund pilots.
    • Provost: To champion the academic quality and student-facing initiatives.
    • VP of enrollment: To represent the entire student life cycle (recruitment to alumni).
    • VP of operations: To represent the “back office” (HR, grounds, facilities).

    The executive task force will want to create opportunities for input and support of the initiative. Perhaps the first step will be to seek input on whether the first priority should be quality improvement (hyperpersonalization of services to learners) or cost efficiency (operational excellence). Both will be needed in the long run to survive agent-enabled competition that is both higher quality and less expensive. Seeking input on this choice can also foster universitywide awareness. Perhaps a broad university forum could be scheduled on the topic, with smaller, targeted follow-ups with faculty, staff, students, administrators and external stakeholder groups as the initiative proceeds.

    One of the first steps of the executive task force will be to perform a universitywide Agent Readiness Audit. Since agents run on data and processes, we need to identify any data silos and process bottlenecks. These will be among our first priorities to ensure that agents can perform work smoothly and efficiently. Resolving these may also be among the most time-consuming changes. However, removing these data roadblocks can begin to show immediate progress in responsiveness and efficiency.

    Second Half of 2026 Into Spring 2027: Pilot and Infrastructure

    Gemini suggests that a good starting point in the summer of 2026 would be to set up two pilots:

    • Cost-Saving Pilot: The Facilities Agent
      • Goal: Reduce energy and maintenance costs.
      • Action: An AI agent integrates with the campus event schedule, weather forecasts and the building HVAC/lighting systems. It autonomously adjusts climate control and lighting for actual use, not just a fixed timer. It also fields all maintenance requests, triages them and dispatches staff or robotic mowers/vacuums automatically.
    • Quality-Improvement Pilot: The Proactive Adviser Agent (a minimal sketch of its triage logic follows this list)
      • Goal: Improve retention for at-risk students.
      • Action: An agent monitors student data in real time (LMS engagement, attendance, early grade-book data). It doesn’t replace the human adviser. It acts as the adviser’s assistant, flagging a student who is at risk before the midterm and autonomously executing a plan: sending a nudge, offering to schedule a tutoring session and summarizing the risk profile for the human adviser to review.
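
    As a rough illustration of the adviser pilot’s triage step, the sketch below flags at-risk students from simple behavioral signals and queues actions for human review. It is a minimal sketch: the field names, thresholds and action plan are assumptions for illustration, not any vendor’s product.

    ```python
    # Hypothetical triage step for the Proactive Adviser Agent.
    # Field names, thresholds and the action plan are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class StudentSignals:
        student_id: str
        lms_logins_last_week: int
        attendance_rate: float  # 0.0 to 1.0
        early_grade_avg: float  # 0.0 to 100.0

    def risk_score(s: StudentSignals) -> float:
        """Combine simple behavioral signals into a 0-1 risk score."""
        score = 0.0
        if s.lms_logins_last_week < 2:
            score += 0.4
        if s.attendance_rate < 0.75:
            score += 0.3
        if s.early_grade_avg < 70:
            score += 0.3
        return score

    def triage(students: list[StudentSignals], threshold: float = 0.5) -> list[dict]:
        """Flag at-risk students before midterm; draft a plan for the adviser."""
        return [
            {
                "student_id": s.student_id,
                "risk": round(risk_score(s), 2),
                "plan": [
                    "send a supportive nudge",
                    "offer to schedule a tutoring session",
                    "summarize the risk profile for adviser review",
                ],
            }
            for s in students
            if risk_score(s) >= threshold
        ]

    if __name__ == "__main__":
        demo = [StudentSignals("s001", 1, 0.60, 65.0),
                StudentSignals("s002", 5, 0.95, 88.0)]
        print(triage(demo))
    ```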

    Our most significant centralized expense will be to set up a secure digital sandbox. The pilots cannot live on a faculty member’s laptop. The CIO must lead the creation of a central, secure platform. This sandbox is a secure environment where AI agents can be developed, tested and given access to the university’s core data APIs (e.g., SIS, LMS and ERP).
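
    One way to picture the sandbox’s job is as a scoped permission layer between agents and core systems. The sketch below is a hedged illustration; the agent names, systems, scopes and limits are assumptions for the sketch, not any platform’s schema.

    ```python
    # Illustrative sandbox policy: which core APIs each agent may touch, and how.
    # Agent names, systems, scopes and limits are assumptions for this sketch.

    SANDBOX_POLICY = {
        "proactive_adviser_agent": {
            "sis": {"access": "read-only", "fields": ["enrollment", "holds"]},
            "lms": {"access": "read-only", "fields": ["logins", "gradebook"]},
            "email": {"access": "send", "requires_human_approval": True},
        },
        "facilities_agent": {
            "event_schedule": {"access": "read-only"},
            "hvac": {"access": "write", "rate_limit_per_hour": 12},
        },
    }

    def allowed(agent: str, system: str, action: str) -> bool:
        """Gate an agent's request against the policy before it executes."""
        entry = SANDBOX_POLICY.get(agent, {}).get(system)
        if entry is None:
            return False  # default deny: unknown agent or system
        if entry["access"] == "read-only":
            return action == "read"
        return True  # simplified; real checks would match scopes per action

    print(allowed("proactive_adviser_agent", "sis", "write"))  # False
    ```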

    Gemini reminds me that, concurrently, we must set up a new entity. The generative AI rules were about plagiarism. The agentic AI rules must be about liability. The new entity is a kind of Agent Accountability Framework. It deals with policy questions such as:

    • Who is responsible when an agent gives a student incorrect financial aid advice?
    • What is the off-switch when an agent-driven workflow (like course wait lists) creates an inequitable outcome? Who has authority to flip the switch?
    • By whom and how are an agent’s actions audited?
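
    On that last question, one common-sense starting point is an append-only, hash-chained log wrapped around every agent action, so auditors can replay what an agent did and who approved it. This is a minimal sketch under that assumption, not a full accountability framework.

    ```python
    # Minimal sketch: an append-only, hash-chained audit log of agent actions,
    # so each entry commits to the one before it and tampering is detectable.

    import hashlib
    import json
    import time

    AUDIT_LOG: list[dict] = []

    def record_action(agent: str, action: str, details: dict) -> None:
        """Append one agent action, chaining a hash of the prior entry."""
        prev_hash = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else "genesis"
        entry = {
            "ts": time.time(),
            "agent": agent,
            "action": action,
            "details": details,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        AUDIT_LOG.append(entry)

    record_action("adviser_agent", "send_nudge",
                  {"student_id": "s001", "approved_by": "human_adviser"})
    print(AUDIT_LOG[-1]["hash"][:12])
    ```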

    Implementation Across the University Through Fall 2027

    There will be many personnel and staffing topics to address. By the summer of 2027, we should be well on the way to refining employees’ roles and position descriptions. The emphasis should be on efficient, enhanced redesign of roles rather than staffing cuts. Some cuts will come from normal turnover as staff find more attractive opportunities or retire. In most cases, employees will become much more productive, handing off their redundant, lower-level work to agents. For example, Gemini Pro envisions:

    • The admissions counselor who used to answer 500 identical emails now manages a team of AI agents that handle the routine questions, freeing the counselor to spend one-on-one time with high-priority applicants.
    • The IT help desk technician no longer resets passwords. The technicians now train the AI agent on how to troubleshoot new software and directly handle only the most complex, level-three issues.
    • The human adviser now manages a caseload of 500 students (not 150), because the AI assistant handles 90 percent of the administrative churn, allowing the adviser to focus on high-impact mentoring.

    Gemini Pro suggests that this approach can result in a higher-quality, more efficient university that will be able to compete in the years ahead. The final step is the most critical and is the job of everyone, from the president and board on down. We must champion a culture where AI agents are seen as collaborators, not replacements. This is a human-AI “co-bot” workforce.

    The institutions that win in 2027 will be those that have successfully trained their managers to lead mixed teams of human and AI employees. This is the single greatest competitive advantage one can build.

    This framework will position the university not just to survive the agentic AI wave but to lead it, creating an institution that is both more efficient and, critically, more human-centered.

  • Agentic AI Invading the LMS and Other Things We Should Know

    Over the past 18 months, I’ve been spending the majority of my time writing and speaking about how I think we can and should continue to teach writing even as we have this technology that is capable of generating synthetic text. While my values regarding this issue are unshakable, the world undeniably changes around me, which requires an ongoing vigilance regarding the capabilities of this technology.

    But like most people, I don’t have unlimited time to stay on top of these things. One of my recommendations in More Than Words for navigating these challenges is to “find your guides,” the people who are keeping an eye on aspects of the issue that you can trust.

    One of my guides for the entirety of this period is Marc Watkins, someone who stays on top of how the technology, and the ways students use it, are evolving.

    I thought it might be helpful to others to share the questions I wanted to ask Marc for my own edification.

    Marc Watkins directs the AI Institute for Teachers and is an assistant director of academic innovation at the University of Mississippi, where he is a lecturer in writing and rhetoric. When training faculty in applied artificial intelligence, he believes educators should be equally supported if they choose to work with AI or include friction to curb AI’s influence on student learning. He regularly writes about AI and education on his Substack, Rhetorica.

    Q: One of the things I most appreciate about the work you’re doing in thinking about the intersection of education and generative AI is that you actively engage with the technology, asking what a particular tool may mean for students and classes. I appreciate it because my personal interest in using these things, beyond staying generally familiar with them, is limited, and I know that we share similar values at the core of the work of reading and writing. So, my first question is for those of us who aren’t putting these things through their paces: What’s the state of things? What, specifically, should instructors know about the capacities of gen AI tools?

    A: Thanks, John! I think we’re of the same mind when it comes to values and AI. By that, I mean we both see human agency and will as key moving forward in education and in society. Part of my life right now is talking to lots of different groups about AI updates. I visit with faculty, administration, researchers, even quite a few folks outside of academia. It’s exhausting just to keep up and nearly impossible to take stock.

    We now have agentic AI that completes tasks using your computer for you; multimodal AI that can see and interact with you using a computer voice; machine reasoning models that take simple prompts and run them in loops repeatedly to guess what a sophisticated response might look like; browser-based AI that can scan any webpage and perform tasks for you. I’m not sure students are aware of any of what AI can do beyond interfaces like ChatGPT. The best thing any instructor can do is have a conversation with students to ask them if they are using AI and gauge how it is impacting their learning.

    Q: I want to dig into the AI “agents” a bit more. You had a recent post on this, as did Anna Mills, and I think it’s important for folks to know that these companies are purposefully developing and selling technology that can go into a Canvas course and start doing “work.” What are we to make of this in terms of how we think about designing courses?

    A: I think online assessment is generally broken at this point and won’t be saved. But online learning still has a chance and is something we should fight for. For all of its many flaws, online education has given people a valid pathway to a version of college education that they might not have been able to afford otherwise. There are too many issues with equity and access to completely remove online learning from higher education, but that doesn’t mean we cannot radically rethink what it means to learn in online spaces. For instance, you can assign your students a process notebook in an online course that involves them writing by hand with pen and paper, then taking a photograph or scan of it and uploading it. The [optical character recognition] function within many of the foundation models will be able to transcribe most handwriting into legible text. We can and should look for ways to give our students embodied experiences within disembodied spaces.
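
    As a sketch of the workflow Marc describes, a photographed notebook page could be transcribed with a vision-capable model. The example below assumes the OpenAI Python client; the model name and prompt wording are placeholders, and handwriting accuracy will vary by page.

    ```python
    # Hedged sketch: transcribe a photographed notebook page with a
    # vision-capable model via the OpenAI Python client. The model name and
    # prompt are placeholders; handwriting accuracy will vary.

    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def transcribe_page(image_path: str) -> str:
        """Send one scanned page and ask for a plain-text transcription."""
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        response = client.chat.completions.create(
            model="gpt-4o",  # any vision-capable model; an assumption here
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Transcribe this handwritten page into plain text."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    # print(transcribe_page("process_notebook_page.jpg"))
    ```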

    Q: In her newsletter, Anna Mills calls on AI companies to collaborate on keeping students from deploying these agents in service of doing all their work for them. I’m skeptical that there’s any chance of this happening. I see an industry that seems happy to steamroll instructors, institutions and even students. Am I too cynical? Is there space for collaboration?

    A: There’s space for collaboration for sure, and limiting some of the more egregious use cases, but we also have to be realistic about what’s happening here. AI developers are moving fast and breaking things with each deployment or update, and we should be deeply skeptical when they come around to offer to sweep up the pieces, lest we forget how they became broken in the first place.

    Q: I’m curious if the development of the technology tracks what you would have figured a year or even longer, 18 months ago. How fast do you think this stuff is moving in terms of its capacities as they relate to school and learning? What do you see on the horizon?

    A: The problem we’re seeing is one of uncritical adoption, hype and acceleration. AI labs create a new feature or use case and deploy it within a few days for free or at low cost, and industry suddenly adopts it to bring the latest AI features to enterprise products. What this means is that the non-AI applications we’ve used for years suddenly get AI integrated into them, or, if they already have an AI feature, see it rapidly updated.

    Most of these AI updates aren’t tested enough to be trusted outside of human-in-the-loop assistance. Doing otherwise makes us all beta testers. It’s creating “workslop,” where employees use AI uncritically to save time and end up producing error-laden work that then takes time and resources to address. Compounding things even more, it increasingly looks like the venture capital feeding AI development is one of the prime reasons our economy isn’t slipping into recession. Students and faculty find themselves at ground zero for most of this, as education looks like one of the major industries being impacted by AI.

    Q: One of the questions I often get when I’m working with faculty on campuses is what I think AI “literacy” looks like, and while I have my share of thoughts, I tend to pivot back to my core message, which is that I’m more worried about helping students develop their human capacities than teaching them how to work with AI. But let me ask you, what does AI literacy look like?

    A: I think AI literacy really isn’t about using AI. For me, I define AI literacy as learning how the technology works and understanding its impact on society. Using that definition, I think we can and should integrate aspects of AI literacy throughout our teaching. The working-with-AI-responsibly part, what I’d call AI fluency, has its place in certain classes and disciplines but needs to go hand in hand with AI literacy; otherwise, you risk uncritically adopting a technology with little understanding of it, rather than demystifying AI and helping students understand its impact on our world.

    Q: Whenever I make a campus visit, I try to have a chance to talk to students about their AI use, and for the most part I see a lot of critical thinking about it, where students recognize many of the risks of outsourcing all of their work, but also share that within the system they’re operating in, it sometimes makes sense to use it. This has made me think that ultimately, our only response can be to treat the demand side of the equation. We’re not going to be able to police this stuff. The tech companies aren’t going to help. It’s on the students to make the choices that are most beneficial to their own lives. Of course, this has always been the case with our growth and development. What do you think we should be focused on in managing these challenges?

    A: My current thinking is we should teach students discernment when it comes to AI tools and likely ourselves, too. There’s no rule book or priors for us to call upon when we deal with a machine that mimics human intelligence. My approach is radical honesty with students and faculty. By that I mean the following: I cannot police your behavior here and no one else is going to do that, either. It is up to all of us to form a social contract and find common agreement about where this technology belongs in our lives and create clear boundaries where it does not.

  • Why agentic AI matters now more than ever

    For years now, the promise of AI in education has centered on efficiency: grading faster, recommending better content, or predicting where a student might struggle.

    But at a moment when learners face disconnection, systems are strained, and expectations for personalization are growing, task automation feels…insufficient.

    What if we started thinking less about what AI can do and more about how it can relate?

    That’s where agentic AI comes in. These systems don’t just answer questions. They recognize emotion, learn from context, and respond in ways that feel more thoughtful than transactional. Less machine, more mentor.

    So, what’s the problem with what we have now?

    It’s not that existing AI tools are bad. They’re just incomplete.

    Here’s where traditional AI systems tend to fall short:

    • NLP fine-tuning: Improves the form of communication but doesn’t understand intent or depth.
    • Feedback loops: Built to correct errors, not guide growth.
    • Static knowledge bases: Easy to search but often outdated or contextually off.
    • Ethics and accessibility policies: Written down but rarely embedded in daily workflows.
    • Multilingual expansion: Translates words, not nuance or meaning across cultures.

    These systems might help learners stay afloat. They don’t help them go deeper.

    What would a more intelligent system look like?

    It wouldn’t just deliver facts or correct mistakes. A truly intelligent learning system would:

    • Understand when a student is confused or disengaged
    • Ask guiding questions instead of giving quick answers
    • Retrieve current, relevant knowledge instead of relying on a static script
    • Honor a learner’s pace, background, and context
    • Operate with ethical boundaries and accessibility in mind, not as an add-on but as a foundation

    In short, it would feel less like a tool and more like a companion. That may sound idealistic, but maybe idealism is what we need.

    The tools that might get us there

    There’s no shortage of frameworks being built right now: some for developers, others for educators and designers. They’re not perfect. But they’re good places to start.

    • LangChain (code): Modular agent workflows, RAG pipelines
    • Auto-GPT (code): Task execution with memory and recursion
    • CrewAI (code): Multi-agent orchestration
    • Spade (code): Agent messaging and task scheduling
    • Zapier + OpenAI (no-code): Automated workflows with language models
    • Flowise AI (no-code): Visual builder for agent chains
    • Power Automate AI (low-code): AI in business process automation
    • Bubble + OpenAI (no-code): Build custom web apps with LLMs
    These tools are modular, experimental, and still evolving. But they open a door to building systems that learn and adjust, without needing a PhD in AI to use them.
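
    To show the kind of pattern these frameworks wrap, here is a framework-free sketch of the retrieve-then-generate (RAG) loop that tools like LangChain and Flowise package up. The word-overlap scoring and the stubbed generate() call are deliberate simplifications, not any library’s API.

    ```python
    # Framework-free sketch of the retrieve-then-generate (RAG) pattern.
    # Scoring is naive word overlap; generate() stubs where a model call goes.

    DOCS = [
        "Office hours for College Algebra are Tuesdays at 3 p.m.",
        "Tutoring for Gen Chem is available in the learning center.",
        "The writing center takes walk-ins on weekday mornings.",
    ]

    def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
        """Rank documents by word overlap with the query; return the top k."""
        q = set(query.lower().split())
        ranked = sorted(docs,
                        key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    def generate(query: str, context: list[str]) -> str:
        """Stub: a model would answer the query grounded in the context."""
        return f"Q: {query}\nGrounding: {context[0]}"

    query = "Where can I get Gen Chem tutoring?"
    print(generate(query, retrieve(query, DOCS)))
    ```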

    A better system starts with a better architecture

    Here’s one way to think about an intelligent system’s structure:

    Learning experience layer

    • Where students interact, ask questions, get feedback
    • Ideally supports multilingual input, emotional cues, and accessible design

    Agentic AI core

    • The “thinking” layer that plans, remembers, retrieves, and reasons
    • Coordinates multiple agents (e.g., retrieval, planning, feedback, sentiment)

    Enterprise systems layer

    • Connects with existing infrastructure: SIS, LMS, content repositories, analytics systems

    This isn’t futuristic. It’s already possible to prototype parts of this model with today’s tools, especially in contained or pilot environments.
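
    As a prototype-scale illustration of those three layers, the sketch below wires a stubbed experience layer to an agent core that reads from stubbed enterprise connectors. Every class and method name here is an assumption made for the sketch.

    ```python
    # Toy wiring of the three layers above; every name is illustrative.

    class EnterpriseSystems:
        """Enterprise layer: stubbed connector to SIS/LMS-style records."""
        def lms_logins_this_week(self, student_id: str) -> int:
            return {"s001": 1, "s002": 6}.get(student_id, 0)

    class AgentCore:
        """Agentic core: retrieves from enterprise systems and reasons."""
        def __init__(self, systems: EnterpriseSystems):
            self.systems = systems
        def advise(self, student_id: str, question: str) -> str:
            logins = self.systems.lms_logins_this_week(student_id)
            pace = ("let's slow down and review together"
                    if logins < 3 else "you're on pace")
            return f"Re: {question!r}: {pace} (logins this week: {logins})"

    class LearningExperience:
        """Experience layer: where students interact and get feedback."""
        def __init__(self, core: AgentCore):
            self.core = core
        def ask(self, student_id: str, question: str) -> None:
            print(self.core.advise(student_id, question))

    app = LearningExperience(AgentCore(EnterpriseSystems()))
    app.ask("s001", "Am I falling behind?")
    ```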

    So, what would it actually do for people?

    For students:

    • Offer guidance in moments of uncertainty
    • Help pace learning, not just accelerate it
    • Present relevant content, not just more content

    For teachers:

    • Offer insight into where learners are emotionally and cognitively
    • Surface patterns or blind spots without extra grading load

    For administrators:

    • Enable guardrails around AI behavior
    • Support personalization at scale without losing oversight

    None of this replaces people. It just gives them better support systems.

    Final thoughts: Less control panel, more compass

    There’s something timely about rethinking what we mean by intelligence in our learning systems.

    It’s not just about logic or retrieval speed. It’s about how systems make learners feel, and whether those systems help learners grow, question, and persist.

    Agentic AI is one way to design with those goals in mind. It’s not the only way. But it’s a start.

    And right now, a thoughtful start might be exactly what we need.
