Tag: Winning

  • Transparency, collaboration, and culture are key to winning public trust in research


    The higher education sector is focussing too much on inward-facing debates on research culture and is missing out on a major opportunity to expose our culture to the public as a way to truly connect research with society.

    The Research Excellence Framework (REF) can underpin this outward turn, providing mechanisms not only for incentivising good culture, but also for opening up conversations about who we are and how we work to contribute to society.

    This outward turn matters. Research and Development (R&D) delivers enormous economic and societal value, yet universities struggle to earn public trust or support for what they do. Recent nationwide public opinion research by the Campaign for Science and Engineering (CaSE) has shown that while 88 per cent of people say it is important for the Government to invest in R&D, just 18 per cent can immediately think of lots of ways R&D benefits them and their family. In public focus groups discussing R&D, universities were rarely front of mind and were seen primarily as education institutions where students or lecturers might do R&D as an ancillary activity.

    If the university sector is to sustain legitimacy – and by extension, the political and financial foundations of UK research – we must find new ways to make our work visible, relatable, and trusted. Focusing on the culture that shapes how research is done may be the most powerful way to do this.

    Why culture matters

    Public opinion is not background noise. Public awareness, appetite and trust all shape political choices about funding, regulation, and the role of universities in national life. While CaSE’s work shows that 72 per cent of people trust universities to be honest about how much the UK government should invest in R&D, the lack of awareness about what universities do and how they do it leaves legitimacy fragile.

    This fragility is starkly illustrated by recent polling from More in Common: when asked which government budgets they would most like to see cut, the public didn’t want funding cuts for R&D, yet placed universities third on the list of budgets they would be happy to see cut (alongside foreign aid and funding for the arts).

    Current approaches to improving public opinion of research in our sector have had limited success. The sector’s instinct has been to showcase outputs – discoveries, patents, and impact case studies – to boost public awareness and build support for research in universities. But CaSE polling evidence suggests that this approach isn’t cutting through: 74 per cent of the public said they knew nothing or hardly anything about R&D in their area. This lack of connection does not indicate a lack of interest: a similar proportion (70 per cent) would like to hear more about local R&D.

    Transparency

    Evidence from other sectors shows that opening up processes builds trust. In healthcare, for example, the NHS has found that when patients are meaningfully involved in decisions about their care and how services are designed, trust and satisfaction increase – not just because of outcomes, but because people can see and influence how decisions are made.

    Research from business and engineering contexts shows that people are more likely to trust companies that are open about how they operate, not just what they deliver. Together, these lessons reinforce that we should not rely on showcasing outputs alone: legitimacy comes from making visible the processes, people and cultures that underpin research.

    Universities don’t just generate knowledge; they develop the individuals who carry skills and values into the wider economy. Researchers, technicians, professional services staff and others who enable research in higher education bring curiosity, collaboration and critical thinking into every sector, both through direct collaboration and when they move beyond academia. These skills fuel innovation and problem-solving across the economy and public services, but they can only develop and thrive in supportive, inclusive research cultures. Without attention to culture, the talent pipeline that government and industry rely on is put at risk.

    Research culture makes these processes and people visible. Culture is about how research is done: the integrity of methods, the openness of data, the inclusivity of teams, the collaborations – including with the public – that make discoveries possible. These are the very things the public are keen to understand better. By opening up the black box of research and showing the culture that underpins it, we can make university research more relatable, trustworthy, and visible in everyday life.

    The role of REF in shifting the conversation

    The expansion of the old Environment element of REF to encompass broader aspects of research culture offers an opportunity to help shift from an inward to a more outward-looking narrative and public conversation. The visibility and accountability that REF submissions require matter beyond academia: they give the sector a platform to showcase the values and processes that underpin research. In doing so, REF can help our sector build trust and legitimacy by making research culture part of the national conversation about R&D.

    Openness, integrity, inclusivity, and collaboration – core components of research culture – are values which the public already recognise and expect. By framing research culture as part of the story we tell – explaining not just what our universities produce but how they produce it – we can build a stronger connection with the public. Culture is the bridge between the abstract notion of investing in R&D and a lived understanding of what universities actually do in society.

    Public support for research is strong, but support for universities is increasingly fragile. Whatever the REF looks like when we unpause, we need to avoid retreating to ‘business as usual’ and closing down this opportunity to open up a more meaningful conversation about the role universities play in UK R&D and in the progress of society.


  • How Marketers are Winning With AI-Powered Search


    Search Has Changed. Has Your Strategy?

    Paid search marketing has always played a central role in how students find and engage with colleges and universities. But how students search and what they expect from the experience has fundamentally changed. Today’s Modern Learners are digital-first and highly discerning, which raises the stakes for any higher education marketing strategy, especially when it comes to search visibility. Modern Learners are not just typing in keywords; they’re asking complex questions and increasingly expect fast, relevant answers that feel tailored to their individual goals.

    In this new reality, search is no longer just a tool; it is your institution’s reputational front door. For many students, the first impression comes from your search presence—whether your institution appears at all, and what shows up when it does. This moment shapes how they perceive your brand and can influence their decision to engage further.

    With advancements such as Google’s AI Overview and AI Mode, the line between paid and organic results is disappearing. These features pull from multiple sources to deliver a single, curated response designed to satisfy intent rather than merely match keywords. This means your search strategy can no longer operate in silos. Paid and organic efforts must work in tandem, and both need to be structured around how students actually search, not how institutions are used to marketing.

    Yet, many institutions still rely on legacy paid search strategies that are fragmented and overly focused on isolated keywords. These outdated tactics often miss the nuance of modern search behavior, leading to underperformance and missed opportunities.

    This is especially critical during a time when marketing budgets are under pressure and visibility is harder to earn. To remain competitive, higher ed marketers need to reimagine paid search not as a list of bid terms or ad placements, but as a strategic channel that influences both enrollment outcomes and institutional reputation. What’s at stake isn’t just performance. It’s how your brand is perceived in the channels that matter most.

    Intent Is the New Currency of Paid Search 

    Paid search has long been valued for its ability to deliver results quickly and cost-effectively. But in today’s environment, true efficiency means more than driving volume by targeting the right keywords. Today, successful campaigns are built around understanding and aligning with the why behind a student’s search, not just the what.

    That’s where intent becomes essential. Intent reveals what a prospective student is trying to accomplish, what stage of the decision process they’re in and what they expect from their educational experiences. With today’s AI-powered platforms, marketers can now interpret and respond to this intent with greater precision than ever before.

    Modern tools like Performance Max—Google’s fully automated, goal-based ad campaign—and Broad Match—its flexible keyword matching option—draw from a range of real-time signals like device type, browsing behavior, location, and time of day. These platforms use that context to determine not just who to reach, but how and when to deliver the most relevant message.

    This shift is especially important when engaging adult and online learners. These prospective students often search in short, focused bursts across devices and platforms. Intent-based targeting helps ensure your message appears at the right moment, when a prospective student is most open to taking the next step.

    The benefit goes beyond smarter targeting. Institutions that embrace intent-based strategies often see improved efficiency, stronger lead quality and a higher return on investment. More importantly, they’re creating a search experience that meets students where they are.

    For higher education marketers, this requires a mindset shift. Paid search is no longer about chasing keywords or building lengthy lists of terms. It’s about reading behavior, responding with context and building relevance. Those who adapt to this new model will be better positioned to influence outcomes and build lasting brand reputations.

    Why Over-Segmentation Hurts AI Performance 

    Aligning with student intent requires more than new tools—it requires rethinking how campaigns are structured. That’s where over-segmentation becomes a critical barrier. Not long ago, higher education marketing professionals found success by keeping campaigns tightly focused. You’d build detailed audience segments, carefully tailor your messaging and control every aspect of targeting. It worked well in a time when more control often meant better results.

    That playbook doesn’t hold up in today’s AI-driven paid media environment. In fact, over-segmentation actively holds your campaigns back.

    AI performs best when it’s given space to learn and optimize. It needs strong signals, such as first-party data, clear conversion goals and smart bidding strategies, to work effectively. Overly narrow targeting and rigid parameters create inefficiencies and limit performance.

    That’s why marketers should focus less on segmentation and more on supplying clear, meaningful data that helps AI reach the right students and drive outcomes like increased inquiries and stronger application intent. 

    At the same time, student journeys have changed. Modern Learners aren’t moving through the funnel in linear paths. Their research process is fast-paced and shaped by real-life pressures like work schedules, finances and family responsibilities.

    Prospective students don’t just want more content—they want information that’s relevant to their needs and arrives when it matters most. Modern paid media strategies must move beyond simple demographics to focus on behaviors, intent and how students search. 

    Transforming Strategy Into Results

    As search evolves, so too must the role of the higher ed marketer. In today’s AI-driven landscape, students are exploring their options in more nuanced ways. To keep pace, marketing strategies must shift from keyword-first thinking to approaches that prioritize context, content and the student journey. Here’s how forward-thinking teams are putting that into action:

    Smarter, Simpler Campaign Structures for Effective Paid Search Strategy

    AI works best when it has strong signals to learn from. That means it’s often more effective to group campaigns by intent rather than breaking them up by individual programs or markets. For example, grouping similar programs together can help your budget go further by focusing on where there’s actual search demand, even if it means less control over specific program-level results.

    Content That Works Harder

    When you’re working in a keywordless environment, your content does the targeting. Search platforms rely on your landing pages, headlines and descriptions to understand what you offer and who you want to reach. That’s why clear, relevant content is critical.  The schools seeing the best results are the ones creating content that aligns with what students are actually searching for. 

    Making the Most of First-Party Data 

    Performance Max campaigns are especially powerful when they’re fueled by high-quality first-party data. Feeding in enrollment signals, audience segments and behavioral insights allows AI to deliver more personalized outreach across platforms. This enhances reach and efficiency without compromising targeting precision.

    Scaling with AI Max and Broad Match 

    New tools like AI Max are opening doors to even more automation. AI Max combines broad match, keywordless targeting and AI-generated creative to help schools reach students in AI-driven placements. Paired with the right paid search strategy, Broad Match helps your content appear in the natural, conversational queries students actually use. 

    Aligning Paid and Organic Strategies  

    The strongest higher education marketing strategies bring paid search marketing and organic search marketing under one roof. When teams align on landing pages, keywords and messaging, both channels amplify each other—driving more qualified traffic, improving conversions and boosting visibility across search results. This gives AI clearer context and helps create a smoother experience for students. 

    Continuous Testing and Learning 

    AI doesn’t mean putting things on autopilot. The best results come when marketers stay involved—testing creative, improving landing pages and updating their audience signals. All of that helps the AI learn and get better over time. 

    When campaigns are built around clear intent and fueled by rich data and relevant content, AI moves beyond automation—it becomes a strategic partner. This empowers institutions to reach the right students with precision, reduce wasted spend and create meaningful connections that drive enrollment success. 

    Harness AI to Amplify Your Team’s Impact 

    AI isn’t here to replace your marketing team. Instead, it helps them work smarter and focus on what really matters. AI tools take care of the routine tasks like adjusting bids, testing creative and targeting audiences in real time. This gives your marketers more time to concentrate on strategy, keeping your brand consistent, understanding student journeys and improving conversions.

    This partnership between marketers and AI is the future of higher ed marketing. Adapting your strategy to today’s search landscape helps strengthen both your enrollment pipeline and your brand foundation.

    At EducationDynamics, we think differently about AI’s potential to power higher education marketing teams by combining creativity, data-driven insight and technology to drive meaningful growth.

    This is more than just a new way to run campaigns. It’s a shift toward meeting students more effectively—aligning enrollment and brand goals in a way that builds trust, boosts visibility and drives lasting success.


  • An Oklahoma Teacher Took a Leap of Faith. She Ended Up Winning State Teacher of the Year – The 74


    OKLAHOMA CITY — Those who knew Melissa Evon the best “laughed really hard” at the thought of her teaching family and consumer sciences, formerly known as home economics.

    By her own admission, the Elgin High School teacher is not the best cook. Her first attempt to sew ended with a broken sewing machine and her mother declaring, “You can buy your clothes from now on.”

    Still, Evon’s work in family and consumer sciences won her the 2025 Oklahoma Teacher of the Year award on Friday. Yes, her students practice cooking and sewing, but they also learn how to open a bank account, file taxes, apply for scholarships, register to vote and change a tire — lessons she said “get kids ready to be adults.”

    “Even though most of my career was (teaching) history, government and geography, the opportunity to teach those real life skills has just been a phenomenal experience,” Evon told Oklahoma Voice.

    After graduating from Mustang High School and Southwestern Oklahoma State University, Evon started her teaching career in 1992 at Elgin Public Schools just north of Lawton. She’s now entering her 27th year in education, a career that included stints in other states while her husband served in the Air Force and a break after her son was born.

    No matter the state, the grade level or the subject, “I’m convinced I teach the world’s greatest kids,” she said.

    Her family later returned to Oklahoma where Evon said she received a great education in public schools and was confident her son would, too.

    Over the course of her career, before and after leaving the state, she won Elgin Teacher of the Year three times, district Superintendent Nathaniel Meraz said.

    So, Meraz said he was “ecstatic” but not shocked that Evon won the award at the state level.

    “There would be nobody better than her,” Meraz said. “They may be as good as her. They may be up there with her. But she is in that company of the top teachers.”

    Oklahoma Teacher of the Year Melissa Evon has won her district’s top teacher award three times. (Photo provided by the Oklahoma State Department of Education)

    Like all winners of Oklahoma Teacher of the Year, Evon will spend a year out of the classroom to travel the state as an ambassador of the teaching profession. She said her focus will be encouraging teachers to stay in education at a time when Oklahoma struggles to keep experienced educators in the classroom.

    Evon herself at times questioned whether to continue teaching, she said. In those moments, she drew upon mantras that are now the core of her Teacher of the Year platform: “See the light” by looking for the good in every day and “be the light for your kids.”

    She also told herself to “get out of the boat,” another way of saying “take a leap of faith.”

    Two years ago, she realized she needed a change if she were to stay in education. She wanted to return to the high-school level after years of teaching seventh-grade social studies.

    The only opening at the high school, though, was family and consumer sciences. Accepting the job was a “get out of the boat and take a leap of faith moment,” she said.

    “I think teachers have to be willing to do that when we get stuck,” Evon said. “Get out of the boat. Sometimes that’s changing your curriculum. Sometimes it might be more like what I did, changing what you teach. Maybe it’s changing grade levels, changing subjects, changing something you’ve always done, tweaking that idea.”

    Since then, she’s taught classes focused on interpersonal communication, parenting, financial literacy and career opportunities. She said her students are preparing to become adults, lead families and grow into productive citizens.

    And, sure, they learn cooking and sewing along the way.

    “I’m getting to teach those things, and I know that what I do matters,” Evon said. “They come back and tell me that.”

    Oklahoma Voice is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Oklahoma Voice maintains editorial independence. Contact Editor Janelle Stecklein for questions: [email protected].




  • How Three Institutions Built Winning Retention Programs – CUPA-HR


    by Julie Burrell | May 21, 2024

    New CUPA-HR data show some improvement in turnover in the higher ed workforce, but staffing hasn’t fully bounced back to pre-pandemic levels. Managers still face challenges filling positions and maintaining morale, while employees are seeking jobs where their satisfaction and well-being are prioritized.

    CUPA-HR’s recent webinar offers solutions that may move the needle on employee retention. Retaining Talent: Effective Employee Retention Strategies From Three Institutions brings together HR pros who showcase their high-impact, cost-effective approaches to increasing satisfaction and well-being, including:

    • Professional development programs driven by employees’ interests
    • Effective supervisor-employee communication, including stay interviews
    • Actionable campus climate surveys using liaisons
    • Mentoring programs and leadership pipelines
    • Recognition programs and community-building events
    • Employee Resource Groups to enhance belonging

    Here are some of the highlights from their programs.

    Stay Interviews at Drake University

    A stay interview is a structured but informal conversation between an employee and a trained supervisor — and can be key to retaining top talent. Maureen De Armond, executive director of human resources at Drake, considers stay interviews to be a critical tool that nevertheless goes underused in higher ed. Overall, only 8% of employees stated that they participated in a stay interview in the past year, according to The CUPA-HR 2023 Higher Education Employee Retention Survey.

    De Armond stresses that stay interviews can build trust, increase communication, and show that you care about employees as people, not just their job performance. If you’re looking to get started, De Armond recommends checking out the Stay Interviews Toolkit.

    Actionable Climate Surveys at the University of Texas Rio Grande Valley

    What’s worse than not conducting a climate survey? Not doing anything with the answers employees have taken the time to provide, says Nicole Englitsch, organizational development manager at UTRGV. To make surveys actionable, they’ve enlisted campus climate liaisons.

    These liaisons, who are mostly HR professionals, are assigned to specific departments. The liaisons have been trained by their external survey partners to help their departments understand the results and engage in action planning, guided by a three-year timeline. This network of partners helps ensure that UTRGV’s goals of making survey results both transparent and actionable are achieved.

    For more on their employee engagement and retention efforts, see Building Leaders From Within: UT Rio Grande Valley Blends Leadership Development With a Master’s in Higher Ed Administration.

    Recognition and Community-Building at Rollins College

    How can institutions create a culture of belonging and valuing employees? David Zajchowski, director of human resources at Rollins, explains how their high-impact recognition and community-building programs range from informal coffee-and-doughnuts gatherings to special awards ceremonies for employees.

    Probably the most popular way of valuing employees while increasing connection is Rollins’s annual Fox Day. On a random day in spring, the president surprises employees and students with a day off from work and class to participate in community-building college traditions.

    Despite the effectiveness of employee recognition, many employers may be leaving this low-cost retention incentive on the table, as only 59% of higher ed employees said they received regular verbal recognition for their work in the Employee Retention Survey. Wondering how your employee recognition program stacks up? See a comparison of recognition programs and take a self-assessment here.




  • Who Is Winning the Generative AI Race? Nobody (yet).


    This is a post for folks who want to learn how recent AI developments may affect them as people interested in EdTech who are not necessarily technologists. The tagline of e-Literate is “Present is Prologue.” I try to extrapolate from today’s developments only as far as the evidence takes me with confidence.

    Generative AI is the kind of topic that’s a good fit for e-Literate because the conversations about it are fragmented. The academic and technical literature is boiling over with developments on practically a daily basis but is hard for non-technical folks to sift through and follow. The grand syntheses about the future of…well…everything are often written by incredibly smart people who have to make a lot of guesses at a moment of great uncertainty. The business press has important data wrapped in a lot of WHEEEE!

    Let’s see if we can run this maze, shall we?

    Is bigger better?

    OpenAI and ChatGPT set many assumptions and expectations about generative AI, starting with the idea that these models must be huge and expensive. Which, in turn, means that only a few tech giants can afford to play.

    Right now there are five widely known giants. (Well, six, really, but we’ll get to the surprise contender in a bit.) OpenAI’s ChatGPT and Anthropic’s Claude are pure plays created by start-ups. OpenAI started the whole generative AI craze by showing the world how much anyone who can write English can accomplish with ChatGPT. Anthropic has made a bet on “ethical AI” with more protections from harmful output and a few differentiating features that are important for certain applications but that I’m not going to go into here.

    Then there are the big three SaaS hosting giants. Microsoft has been tied very tightly to OpenAI, in which it owns a 49% stake. Google, which has been a pioneering leader in AI technologies but has been a mess with its platforms and products (as usual), has until recently focused on promoting several of its own models. Amazon, which has been late out of the gate, has its own Titan generative AI model that almost nobody has seen yet. But Amazon seems to be betting on a strategy that emphasizes hosting an ecosystem of platforms, including Anthropic and others.

    About that ecosystem thing. A while back, an internal paper called “We Have No Moat, and OpenAI Doesn’t Either” leaked from Google. It made the argument that so much innovation was happening so quickly in open-source generative AI that the war chests and proprietary technologies of these big companies wouldn’t give them an advantage over the rapid innovation of a large open-source community.

    I could easily write a whole long post about the nature of that innovation. For now, I’ll focus on a few key points that should be accessible to everyone. First, it turns out that the big companies with oodles of money and computing power—surprise!—decided to rely on strategies that required oodles of money and computing power. They didn’t spend a lot of time thinking about how to make their models smaller and more efficient. Open-source teams with far more limited budgets quickly demonstrated that they could make huge gains in algorithmic efficiency. The barrier to entry for building a better LLM—money—is dropping fast.

    Complementing this first strategy, some open-source teams worked particularly hard to improve data quality, which requires more hard human work and less brute computing force. It turns out that the old adage holds: garbage in, garbage out. Even smaller systems trained on more carefully curated data are less likely to hallucinate and more likely to give high-quality answers.

    And third, it turns out that we don’t need giant all-purpose models all the time. Writing software code is a good example of a specialized generative AI task that can be accomplished well with a much smaller, cheaper model using the techniques described above.

    The internal Google memo concluded by arguing that “OpenAI doesn’t matter” and that cooperating with open source is vital.

    That missive was leaked in May. Guess what’s happened since then?

    The swarm

    Meta had already announced in February that it was releasing an open-source-ish model called Llama. It was only open-source-ish because its license limited it to research use. That restriction was quickly hacked and abused. The academic teams and smaller startups, which were already innovating like crazy, took advantage of the oodles of money and computing power that Meta was able to put into Llama. Unlike the other giants, Meta doesn’t make money by hosting software. They make money from content, and commoditizing generative AI will lead to much more content being generated. Perhaps seeing an opportunity, when Meta released Llama 2 in July, the only unusual restrictions they placed on the open-source license were to prevent big hosting companies like Amazon, Microsoft, and Google from making money off Llama without paying Meta. Anyone smaller than that can use the Llama models for a variety of purposes, including commercial applications. Importantly, Llama 2 is available in a variety of sizes, including one small enough to run on a newer personal computer.

    To be clear, OpenAI, Microsoft, Google, Anthropic, and Amazon are all continuing to develop their proprietary models. That isn’t going away. But at the same time…

    • Microsoft, despite their expensive continuing love affair with OpenAI, announced support for Llama 2 and has a license (but no announced products that I can find yet) for Databricks’ open-source Dolly 2.0.
    • Google Cloud is adding both Llama 2 and Anthropic’s Claude 2 to the list of 100 LLMs they support, including their own open-source Flan-T5 and their PaLM LLMs.
    • Amazon now supports a growing range of LLMs, including open-source models from Stability AI and Llama 2.
    • IBM—’member them?—is back in the AI game, trying to rehabilitate its image after the much-hyped and mostly underwhelming Watson products. The company is trotting out watsonx (with the very now, very wow lower-case “w” at the beginning of the name and “x” at the end) integrated with Hugging Face, which you can think of as being a little bit like the GitHub for open-source generative AI.

    It seems that the Google memo about no moats, which was largely shrugged off publicly way back in May, was taken seriously privately by the major players. All the big companies have been hedging their bets and increasingly investing in making the use of any given LLM easier rather than betting that they can build the One LLM to Rule Them All.

    Meanwhile, new specialized and generalized LLMs pop up weekly. For personal use, I bounce between ChatGPT, BingChat, Bard, and Claude, each for different types of tasks (and sometimes a couple at once to compare results). I use DALL-E and Stable Diffusion for image generation. (Midjourney seems great but trying to use it through Discord makes my eyes bleed.) I’ll try the largest Llama 2 model and others when I have easy access to them (which I predict will be soon). I want to put a smaller coding LLM on my laptop, not to have it write programs for me but to have it teach me how to read them.

    The most obvious possible end result of this rapid, sprawling growth of supported models is that, far from being the singular Big Tech miracle that ChatGPT sold us on with its sudden and bold entrance onto the world stage, generative AI is going to become just one more part of the IT stack, albeit a very important one. There will be competition. There will be specialization. The big cloud hosting companies may end up distinguishing themselves not so much by being the first to build Skynet as by their ability to make it easier for technologists to integrate this new and strange toolkit into their development and operations. Meanwhile, a parallel world of alternatives for startups and small or specialized uses will spring up.

    We have not reached the singularity yet

    Meanwhile, that welter of weekly announcements about AI advancements I mentioned before has not included massive breakthroughs in super-intelligent machines. Instead, many of them have been about supporting more models and making them easier to use for real-world development. For example, OpenAI is making a big deal out of how much better ChatGPT Enterprise is at keeping the things you tell it private.

    Oh. That would be nice.

    I don’t mean to mock the OpenAI folks. This is new tech. Years of effort will need to be invested in making this technology easy and reliable for the uses it’s being put to now. As an enterprise application, ChatGPT has largely been a very impressive demo, while ChatGPT Enterprise is exactly what it sounds like: an effort to make ChatGPT usable in the enterprise.

    The folks I talk to who are undertaking ambitious generative AI projects, including ones whose technical expertise I trust a great deal, are telling me they are struggling. The tech is unpredictable. That’s not surprising; generative AI is probabilistic. The same function that enables it to produce novel content also enables it to make up facts. Try QA testing an application and avoiding regressions—i.e., bugs you thought you had fixed but that come back in the next version—with technology like that. Meanwhile, the toolchain around developing, testing, and maintaining generative AI-based software is still very immature.

    These problems will be solved. But if the past six months have taught us anything, it’s that our ability to predict the twists and turns ahead is very limited at the moment. Last September, I wrote a piece called “The Miracle, the Grind, and the Wall.” It’s easy to produce miraculous-seeming one-off results with generative AI but often very hard to achieve them reliably at scale. And sometimes we hit walls that prevent us from reaching goals for reasons that we don’t see coming. For example, what happens when you run a data set with some very subtle problems in it through a probabilistic model with half a trillion computing units, each potentially doing something with the data that is affected by those problems and passing the modified, problematic data on to other parts of the system? How do you trace and fix those “bugs” (if you can even call them that)?

    It’s fun to think about where all of this AI stuff could go. And it’s important to try. But personally, I find the here-and-now to be fun and useful to think about. I can make some reasonable guesses about what might happen in the next 12 months. I can see major changes and improvements AI can contribute to education today that minimize the risk of the grind and the wall. And I can see how to build a curriculum of real-world projects that teaches me and others about the evolving landscape even as we make useful improvements today.

    What I’m watching for

    Given all that, what am I paying attention to?

    • Continued frantic scrambling among the big tech players: If you’re not able to read and make sense of the weekly announcements, papers, and new open-source projects, pay attention to Microsoft, Amazon, Google, IBM, OpenAI, Anthropic, and HuggingFace. The four traditional giants in particular seem to be thrashing a bit. They’re all tracking the developments that you and I can’t and are trying to keep up. I’m watching these companies with a critical eye. They’re not leading (yet). They’re running for their lives. They’re in a race. But they don’t know what kind of race it is or which direction to go to reach the finish line. Since these are obviously extremely smart people trying very hard to compete, the cracks and changes in their strategies tell us as much as the strategies themselves.
    • Practical, short-term implementations in EdTech: I’m not tracking grand AI EdTech moonshot announcements closely. It’s not that they’re unimportant. It’s that I can’t tell from a distance whose work is interesting and don’t have time to chase every project down. Some of them will pan out. Most won’t. And a lot of them are way too far out over their skis. I’ll wait to see who actually gets traction. And by “traction,” I don’t mean grant money or press. I mean real-world accomplishments and adoptions.

      On the other hand, people who are deploying AI projects now are learning. I don’t worry too much about what they’re building, since a lot of what they do will be either wrong, uninteresting, or both. Clay Shirky once said the purpose of the first version of software isn’t to find out if you got it right; it’s to learn what you got wrong. (I’m paraphrasing since I can’t find the original quote.) I want to see what people are learning. The short-term projects that are interesting to me are the experiments that can teach us something useful.

    • The tech being used along with LLMs: ChatGPT did us a disservice by convincing us that it could soon become an all-knowing, hyper-intelligent being. It’s hard to become the all-powerful AI if you can’t reliably perform arithmetic, are prone to hallucinations, can’t remember anything from one conversation to the next, and start to space out if a conversation runs too long. We are being given the impression that the models will eventually get good enough that all these problems will go away. Maybe. For the foreseeable future, we’re better off thinking about them as interfaces with other kinds of software that are better at math, remembering, and so on. “AI” isn’t a monolith. One of the reasons I want to watch short-term projects is that I want to see what other pieces are needed to realize particular goals. For example, start listening for the term “vector database.” The larger tech ecosystem will help define the possibility space.
    • Intellectual property questions: What happens if The New York Times successfully sues OpenAI for copyright infringement? It’s not like OpenAI can just go into ChatGPT and delete all of those articles. If intellectual property law forces changes to AI training, then the existing models will have big problems (though some have been more careful than others). A chorus of AI cheerleaders tell us, “No, that won’t happen. It’s covered by fair use.” That’s plausible. But are we sure? Are we sure it’s covered in Europe as well as the US? How much should one bet on it? Many subtle legal questions will need to be sorted over the coming several years. The outcomes of various cases will also shape the landscape.
    • Microchip shortages: This is a weird thing for me to find myself thinking about, but these large generative AI applications—especially training them—run on giant, expensive GPUs. One company, Nvidia, has far and away the best processors for this work. So much so that there is a major race on to acquire as many Nvidia processors as possible due to limited supply and unlimited demand. And unlike software, a challenger company can’t shock the world overnight with a new microprocessor. Designing and fabricating new chips at scale takes years. More than two. Nvidia will be the leader for a long time. Therefore, the ability of AI to grow will be, in some respects, constrained by the company’s production capacity. Don’t believe me? Check out their five-year stock price and note the point when generative AI hype really took off.
    • AI on my laptop: On the other end of the scale, remember that open-source has been shrinking the size of effective LLMs. For example, Apple has already optimized a version of Stable Diffusion for their operating system and released an open-source one-click installer for easier consumer use. The next step one can imagine is for them to optimize their computer chip—either the soon-to-be-released M3 or the M4 after it. (As I said, computer chips take time.) But one can easily imagine image generation, software code generation, and a chatbot that understands and can talk about the documents you have on your hard drive. All running locally and privately. In the meantime, I’ll be running a few experiments with AI on my laptop. I’ll let you know how it goes.
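
    Since I mentioned laptop experiments, here is roughly what the simplest of them looks like. This is a minimal sketch under stated assumptions, not a recipe I’m endorsing: it assumes you have installed the open-source llama-cpp-python package and downloaded one of the small quantized Llama 2 chat models, and the file name below is just a placeholder for whichever model file you actually grab.

        # Minimal local-LLM sketch (assumes: pip install llama-cpp-python,
        # plus a small quantized Llama 2 model file saved under ./models/).
        from llama_cpp import Llama

        # Load the model entirely on the local machine -- no API calls,
        # nothing leaves the laptop.
        llm = Llama(
            model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder file name
            n_ctx=2048,  # context window size
        )

        # Ask a question and print the generated completion.
        response = llm(
            "Q: In one sentence, what is a vector database? A:",
            max_tokens=128,
            stop=["Q:"],  # stop before the model invents a follow-up question
        )
        print(response["choices"][0]["text"].strip())

    Swap in a different model file and the same few lines run a different open-source model of similar size, which is exactly the kind of commoditization the no-moat memo was describing.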

    Present is prologue

    Particularly at this moment of great uncertainty and rapid change, it pays to keep your eyes on where you’re walking. A lot of institutions I talk to either are engaged in 57 different AI projects, some of which are incredibly ambitious, or are looking longingly for one thing they can try. I’ll have an announcement on the latter possibility very shortly (which will still work for folks in the former situation). Think about these early efforts as CBE for the future of work. The thing about the future is that there’s always more of it. Whatever the future of work is today will be the present of work tomorrow. But there will still be a future of work tomorrow. So we need to build a continuous curriculum of project-based learning with our AI efforts. And we need to watch what’s happening now.

    Every day is a surprise. Isn’t that refreshing after decades in EdTech?
