Tag: Future

  • The Future of Online Learning Brands


    Embracing a “One School” Approach for a Better Student Experience

    Let’s draw a line in the sand. On one side, we have a university campus and its on-ground offerings. On the other side, we have the digital higher education space and the online programs that live within it. 

    Traditionally, this line has been stark and rigid, with universities treating the two modalities as separate entities with dedicated teams, technology, systems, budgets, and strategies. 

    The initial separation was, in part, driven by the perception of online education as a lesser counterpart to its on-ground equivalent. This view may have held some truth in the early stages of digital learning. But the division has come at a cost: institutions have had to do double the work.

    We can all see that significant changes are underway. Traditional educational boundaries are fading, with online learning gaining respect and sophistication. There are online programs that outpace their on-ground counterparts in quality and rigor. We’re looking at a future where traditional, hybrid, and online modalities are integrated, balancing both quality and accessibility. 

    As we leave the comfort of land and head out to sea, embracing a holistic approach is the way forward for universities.

    Separation Comes at a Cost 

    The traditional division between on-ground and online learning modalities increases costs and complicates operations for institutions, weakening their ability to present a unified, powerful brand to prospective students. Here are a few of the pain points: 

    Fragmented Systems

    Multiple Platforms: Utilizing different customer relationship management (CRM) systems, student information systems (SIS), and learning management systems (LMS) introduces inefficiencies. Each platform requires its own set of training, maintenance, and integration protocols. Those protocols often don’t integrate well, either.

    Increased Costs: The need to support various tech stacks and administrative systems significantly drives up operational costs, as resources are duplicated across the board.

    Conflicting Marketing Strategies

    Brand Fragmentation: With separate marketing teams for its on-ground and online programs, an institution risks sending mixed messages to potential students. This can lead to brand dilution and confusion about what the university stands for.

    Measurement Challenges: Disparate strategies make it difficult to track and analyze the effectiveness of marketing efforts, which in turn makes it hard to decide where marketing dollars are best invested.

    Diluted Resources

    Split Focus: Dividing an institution’s time, talent, and budget between its on-ground and online initiatives means neither receives the full investment needed to thrive. This can result in underperforming programs that fail to meet their potential.

    By managing resources under one unified strategy, universities can maximize the impact of their educational offerings, ensuring that both online and on-ground programs benefit from full institutional support and cohesion.              

    Advances in Online Learning Have Closed the Quality Gap 

    Technology is rapidly advancing, and higher ed is keeping pace with the changes. As institutions have become more skilled at applying learning technologies, a significant shift has occurred:

    Today, online courses match on-ground courses in their rigor and depth and offer the flexibility and accessibility that modern students demand. It’s a win-win. The shift isn’t just about maintaining academic standards; it’s about enhancing them to make education more inclusive and adaptable to students’ varied lifestyles.             

    The Case for a “One School” Strategy 

    As the distinction between online and on-ground academic quality becomes murkier, more universities are beginning to embrace a “one school” strategy. This holistic approach integrates online and on-ground modalities into a single, unified brand, ensuring a seamless and coherent student experience. 

    It’s kind of like how my son doesn’t see the athletics department, student advising, and his faculty members as being on different teams with different budget sources. They all make up one thing — his university and the way it feels to be a student. 

    By operating under a single brand, universities can streamline their processes, unify their messaging, and bolster their identity, enhancing their appeal in a competitive educational market. The unified brand experience provides students with a consistent set of resources and support mechanisms, which proves crucial in building trust and satisfaction.

    The shift toward a one school strategy also aligns with the evolving preferences and expectations of students, particularly their growing desire for flexible learning environments. Modern students increasingly favor hybrid experiences — asynchronous learning modules combined with synchronous meetings. This allows them to manage their schedules while benefiting from real-time interactions. 

    Adopting this approach not only improves the overall experience for students but also positions institutions to more effectively manage their resources, enhance their operational efficiency, and strengthen their academic offerings across the board, redefining the educational experience to be more inclusive and adaptable to today’s learners. 

    Adopting a one school approach helps universities accomplish goals such as the following:

    1. Establish a Unified Systems and Technology Stack

    Currently, the existence of different application systems for different modalities often leads to disparate experiences and management challenges, increasing the risk of students falling through the cracks. A unified technology stack can address these issues, fostering a more integrated and seamless educational environment.

    Using the same CRM and SIS systems across an organization can significantly streamline operations in all areas, from marketing through student retention. This unification not only reduces operational costs but also consolidates institutional data, enabling more effective tracking and support of student activities. 

    2. Create an Integrated Marketing Strategy

    Universities often work with multiple marketing agencies that compete against each other using similar keywords but with slightly different visuals and landing pages. Bad idea. This not only dilutes the marketing efforts but also creates confusion for students who are comparing programs. 

    An integrated approach helps streamline these efforts, ensuring a cohesive, clear marketing message that effectively attracts and retains students.

    3. Align Academic and Enrollment Calendars 

    A particularly troubling symptom of separate identities within a university is differing enrollment calendars for online and on-ground offerings. Online programs typically offer more start dates throughout the year. 

    With a single enrollment calendar, however, universities can eliminate this confusion and simplify the experience for students who might engage in both modalities. Additionally, as faculty members frequently teach in both online and on-ground formats, a unified calendar ensures that all students have equal access to faculty resources, regardless of the learning format. 

    A Note on Organizational … Resistance 

    While the theoretical benefits of integrating online and on-ground educational modalities are clear, the practical implementation can face organizational resistance. This stems from the “this is the way we’ve always done it” mindset, presenting real challenges in terms of system integration and cultural adaptation. 

    Addressing these challenges requires a strategic approach and readiness to tackle potential roadblocks. Here are a few things to keep in mind:

    You Don’t Have to Implement the One School Model Alone

    Starting the journey toward overhauling the outdated model and creating a unified experience can be complex and challenging, but you don’t have to navigate it alone. 

    Archer Education is equipped to empower your institution at every step with our growth enablement approach, offering expert guidance in storytelling, technology, audience insights, and data analytics to support a seamless transition to the one school model. Then, once things are up and running, you’ll have the internal knowledge and capacities you need to cast us out to sea. 

    Contact us to learn more about how we can help you integrate your educational offerings and maximize the potential of your institution.


  • the future of learning design. – Sijen


    There is a looming skills deficit across all disciplines currently being taught in universities today. The vast majority of degree programmes are, at best, gradual evolutions of what has gone before. At their worst, they are static bodies of knowledge transmission awaiting a young, vibrant new member of faculty to reignite them. Internal reviews are too often perfunctory exercises, seldom challenging the future direction of graduates as long as pass rates are sustained. That is, until it is too late and failure rates point to a ‘problem’ at a fundamental level in a degree’s design.

    We, collectively, are at the dawn of a new knowledge-skills-cognition revolution. The future of the professions has been discussed for some years now. It will be a creeping, quiet revolution (Susskind and Susskind, 2017). Although we occasionally hear about some fast food business firing all of its front-of-house staff in favour of robotic manufacturing processes and A.I. ordering services, the reality is that in the majority of contexts the intelligent deployment of A.I. to enhance business operations requires humans to describe how these systems operate with other humans. This is because at present none of these systems score highly on any markers of Emotional Intelligence (EQ).

    Image generated by Windows Copilot

    Arguably, it has become increasingly important to ensure that graduates from any and all disciplines have been educated in how to describe what they do and why they do it. They need to develop a higher degree of comfort with articulating each thought process and action taken. To do this, we desperately need course and programme designers to desist from describing (and therefore assessing) purely cognitive (intellectual) skills as described by Bloom et al., and to limit themselves to one or two learning outcomes using those formulations. Instead, they need to elevate psychomotor skills in particular, alongside an increasing emphasis on interpersonal ones.

    Anyone who has experimented with prompting any large language model (LLM) will tell you that the language used falls squarely under the psychomotor domain. At the lowest levels one might ask a system to match, copy, or imitate; at mid-levels of skill deployment one might prompt it to organise, calibrate, compete or show; rising to the highest order of psychomotor skills, one might ask A.I. systems to define, specify, even imagine. This progression, typical of any such taxonomy, allows for appropriate calibration of input and output. The ability to use language, to articulate, is an essential skill. There are some instructive (and entertaining) YouTube videos of parents supporting their children to write instructions (here’s a great example), a skill that is seldom further developed as young people progress into tertiary studies.

    Being able to assess this skill is also challenging. When one was assessing text-based comprehension, or even textual analysis, one could get away with setting an essay question and having a semi-automated process for marking against a rudimentary rubric. Writing instructions, or explanations, of the task carried out is not the same as verbally describing the same task. Do we imagine that speech recognition technology won’t become an increasingly important part of many productive job roles? Not only do courses and programmes need to be designed around a broader range of outcomes; we also need to be continuously revising our assessment opportunities for those outcomes.

    References

    Susskind, R., & Susskind, D. (2017). The Future of the Professions: How Technology Will Transform the Work of Human Experts (Reprint edition). OUP Oxford.


  • Equipping the Future – New Mexico Education


    In the heart of Albuquerque’s west side, a new beacon of hope for elementary education is set to rise: Equip Academy of New Mexico.

    Spearheaded by Mercy Herrera, a Yale graduate with deep New Mexico roots, the school is designed to empower Kindergarten through 5th grade students through a unique blend of high academic expectations and culturally responsive teaching. With a personal history marked by overcoming educational challenges, Herrera is bringing her passion and vision to Equip Academy, aiming to equip every child with the tools to live out their greatness.

    On August 21, Equip Academy received unanimous approval from the Public Education Commission to open as a charter school.

    The school is set to open on Albuquerque’s west side in August 2025, with a focus on improving student achievement and supporting the academic success of all students. This focus comes from experience, as Herrera’s own academic journey was anything but straightforward.

    Raised in a family that moved frequently due to financial instability and personal challenges, Herrera attended multiple elementary schools, making it difficult to establish a strong academic foundation. “College seemed super out-of-reach,” she recalled, but her determination led her to Central New Mexico College (CNM), where she began to rebuild her academic confidence.

    After transferring to the University of New Mexico (UNM) and excelling in a Sign Language Interpreting Program, Herrera’s educational path took her to Harvard, where she presented research on translating scriptural metaphors from English to American Sign Language (ASL). This experience eventually led her to Yale University, where she earned her master’s degree in Disability Studies and Biblical Literature.

    When applying to Yale, Herrera didn’t tell a soul. She almost didn’t believe that someone like her, who had struggled in school, could rise to such a college. And yet, Herrera got in.

    Despite her achievements, Herrera never forgot her New Mexico roots or the struggles she faced growing up.

    Reflecting on the 2018 Yazzie-Martinez decision, which highlighted the state’s failure to provide an adequate education to many of its students, Herrera acknowledged that she would have been classified as a Yazzie-Martinez student.

    “My story isn’t unique,” Herrera said, “it’s common.”

    With support from mentors who believed in her, Herrera came to understand the importance of quality education in shifting the narrative for students from backgrounds like hers. With that support, she made it to CNM, graduated from UNM, attended an Ivy League university, and earned a second master’s in the Science of Teaching from New York City’s Pace University. It is this experience, and the belief that New Mexico’s students deserve to succeed, that drives the vision and mission of Equip Academy.

    “Every child has the opportunity to live out their greatness, and our commitment is to equip them to do so,” Herrera said, quoting the school’s vision.

    Equip Academy aims to provide a joyful and engaging environment with high expectations that prioritizes measurable academic learning while celebrating student curiosity and community, regardless of that student’s background.

    A key aspect of Equip Academy’s approach is its commitment to culturally responsive education. Understanding the diverse cultural landscape of New Mexico, Herrera has integrated culturally respectful education efforts into the school’s curriculum. “New Mexico has so much richness and beauty, and I think it took me leaving to understand that,” she said.

    To ensure the school is responsive to students from all walks of life, Herrera is working closely with the Hispanic Cultural Center, the National Institute of Flamenco, and the Indian Pueblo Cultural Center, and is drawing on resources from the Native American Community Academy (NACA), so that the school’s curriculum respects and reflects the cultural heritage of its students.

    To support students academically, Equip Academy will implement a two-teacher model for kindergarten and first grade, allowing for more individualized attention. Herrera has also worked as an instructional coach for teachers and has made teacher support a key to the school’s success.

    The school will also use cross-grade, flexible guided reading groups to ensure that students receive instruction at their individual “just right” level, helping them progress academically. Herrera emphasizes the importance of data-driven instruction and teacher excellence, which will be central to the school’s success.

    Herrera’s return to New Mexico came after years of working in high-performing charter schools in New York City and was driven by a desire to bring the same level of educational excellence to her home state. That experience shaped her vision for Equip Academy, prompting her to say, “I don’t know how, and I don’t know when, but I want to start a charter school in New Mexico.”

    Now, that vision is becoming a reality.

    Equip Academy plans to open with two kindergarten classes and one first-grade class, eventually growing to serve 450 students from kindergarten through fifth grade. The school will operate on a slow-growth model, adding one grade level each year to ensure that students receive a consistent and high-quality education throughout their elementary years.

    As Herrera prepares for Equip Academy’s opening, she remains focused on the bigger picture: equipping students with the knowledge and skills they need to dream audaciously, engage deeply, and pursue lives of purpose. Her journey from a struggling student to an educational leader is proof that, with the right support and opportunities, New Mexico’s students can achieve greatness.

    Herrera’s words of hope for Equip Academy’s incoming students: “Believe in yourself, know what you want to do, and pursue it with everything you’ve got. With the right support, anything is possible.”

    Equip Academy is now accepting interest forms for future teachers and students. For more information, visit the Equip Academy LinkedIn page.


  • Embracing the Future of HR: Your AI Questions Answered – CUPA-HR


    by Julie Burrell | April 16, 2024

    In his recent webinar for CUPA-HR, Rahul Thadani, senior executive director of HR information systems at the University of Alabama at Birmingham, answered some of the most frequently raised questions about AI in HR. He also spoke to the most prevalent worries, including concerns about data privacy and whether AI will compete with humans for jobs.

    In addition to covering the basics on AI and how it works, Thadani addressed questions about the risks and rewards of using AI in HR, including:

    • How can AI speed up productivity now?
    • What AI tools should HR be using?
    • How well is AI integrated into enterprise software?
    • What are the risks and downsides of using AI?
    • What role will AI play in the future of HR?

    Thadani also put to rest a common fear about AI: that it will replace human jobs. He believes that HR is too complex, too fundamentally human a role to be automated. AI only simulates human intelligence; it can’t make human decisions. Thadani reminded HR pros, “you all know how complex humans are, how complex decision-making is for humans.” AI can’t understand “the many components that go into hiring somebody,” for example, or how to measure employee engagement.

    AI won’t replace skilled HR professionals, but HR can’t afford to ignore AI. Thadani and other AI leaders stress that HR has a critical role to play in how AI is used on campuses. As the people experts, HR must have a seat at the table in AI discussions, partnering with IT and leadership on decisions such as how employees’ data are used and which AI software to test and purchase.

    Take the First Step

    Most people are just getting started on their AI journey. As a first step for those new to AI, Thadani recommends signing up for a ChatGPT account or another chatbot, like Google’s Gemini. He suggests using your private email account in case you need to sign a privacy agreement that doesn’t align with your institution’s policies. Test out what these chatbots are capable of by using this quick guide to chatbots.

    For leaders and supervisors, Thadani proposes having ongoing conversations within your department, on your campus and with your leadership. Some questions to consider in these conversations: Does your campus have an AI governance council? If so, is HR taking part? Do you have internal AI guidelines in place to protect data and privacy, in your department or for your campus? If not, do you have a plan to develop them? (As a leader in the AI space, the University of Michigan has AI guidelines that provide a good model, and are broken down into staff, faculty and student guidance categories.) Have you identified thought leaders in AI in your office or on your campus who can spur discussions and recommend best practices?

    In HR, “there’s definitely an eagerness to be ready and be ahead of the curve” when it comes to AI, Thadani noted. AI will undoubtedly be central to the future of work, and it’s up to HR to proactively guide how AI can be leveraged in ethical and responsible ways.

    HR-Specific Resources on AI




  • Dr. Jennifer T. Edwards: A Texas Professor Focused on Artificial Intelligence, Health, and Education: Preparing Our Higher Education Institutions for the Future


    As we prepare for the upcoming year, I have to stop and think about the future of higher education. The pandemic changed our students, faculty, staff, and our campus as a whole. The Education Advisory Board (EAB) provides colleges and universities across the country with resources and ideas to help the students of the future.

    I confess, I have been a complete fan of EAB and their resources for the past ten years. Their resources are at the forefront of higher education innovation.

    🏛 – Dining Halls and Food Spaces

    🏛 – Modern Student Housing

    🏛 – Hybrid and Flexible Office Spaces

    🏛 – Tech-Enabled Classrooms

    🏛 – Libraries and Learning Commons

    🏛 – Interdisciplinary Research Facilities


    Higher education institutions should focus on faculty and staff as well. When I ask most of my peers if they are comfortable with the numerous changes happening across their institution, most of them are uncomfortable. We need to prepare our teams for the future of higher education.

    Here Are the Millennial Professor’s Call to Action Statements for the Higher Education Industry

    🌎 – Higher Education Conferences and Summits Need to Provide Trainings Focused on Artificial Intelligence (AI) for Their Attendees

    🌎 – Higher Education Institutions Need to Include Faculty and Staff as Part of Their Planning Process (an Important Part)

    🌎 – Higher Education Institutions Need to Provide Wellness and Holistic Support for Faculty and Staff Who Are Having Problems With Change (You Need Us and We Need Help)

    🌎 – Higher Education Institutions Need to Be Comfortable with Uncommon Spaces (Flexible Office Spaces)

    🌎 – Faculty Need to Embrace Collaboration Opportunities with Faculty at Their Institutions and Other Institutions


    Higher education will continue to transition in an effort to meet the needs of our current and incoming students. 

    For our particular university, we are striving to modify all of these items simultaneously. It is a challenge, but the changes are well worth the journey.

    Here’s the challenge for this post: “In your opinion, which one of the items on the list is MOST important for your institution?”

    ***

    Check out my book – Retaining College Students Using Technology: A Guidebook for Student Affairs and Academic Affairs Professionals.

    Remember to order copies for your team as well!


    Thanks for visiting! 


    Sincerely,


    Dr. Jennifer T. Edwards
    Professor of Communication

    Executive Director of the Texas Social Media Research Institute & Rural Communication Institute


  • Generative AI and the Near Future of Work: An EdTech Example –


    A friend recently asked me for advice on a problem he was wrestling with involving a 1EdTech interoperability standard. It was the same old problem of a standard not quite delivering true interoperability because people implement it differently. I suggested he try using a generative AI tool to fix his problem. (I’ll explain how shortly.)

    I don’t know if my idea will work yet—he promised to let me know once he tries it—but the idea got me thinking. Generative AI probably will change EdTech integration, interoperability, and the impact that interoperability standards can have on learning design. These changes, in turn, impact the roles of developers, standards bodies, and learning designers.

    In this post, I’ll provide a series of increasingly ambitious use cases related to the EdTech interoperability work of 1EdTech (formerly known as IMS Global). In each case, I’ll explore how generative AI could impact similar work going forward, how it changes the purpose of interoperability standards-making, and how it impacts the jobs and skills of various people whose work is touched by the standards in one way or another.

    Generative AI as duct tape: fixing QTI

    1EdTech’s Question and Test Interoperability (QTI) standard is one of its oldest standards that’s still widely used. The earliest version on the 1EdTech website dates back to 2002, while the most recent version was released in 2022. You can guess from the name what it’s supposed to do. If you have a test, or a test question bank, in one LMS, QTI is supposed to let you migrate it into another without copying and pasting. It’s an import/export standard.

    It never worked well. Everybody has their own interpretation of the standard, which means that importing somebody else’s QTI export is never seamless. When speaking recently about QTI to a friend at an LMS company, I commented that it only works about 80% of the time. My friend replied, “I think you’re being generous. It probably only works about 40% of the time.” 1EdTech has learned many lessons about achieving consistent interoperability in the decades since QTI was created. But it’s hard to fix a complex legacy standard like this one.

    Meanwhile, the friend I mentioned at the top of the post asked me recently for practical advice about dealing with this state of affairs. His organization imports a lot of QTI question banks from multiple sources. So his team spends a lot of time debugging those imports. Is there an easier way?

    I thought about it.

    “Your developers probably have many examples that they’ve fixed by hand by now. They know the patterns. Take a handful of before-and-after examples. Embed them into a prompt in a generative AI that’s good at software code, like Hugging Chat.” [As I was drafting this post, OpenAI announced that ChatGPT now has a code interpreter.] “Then give the generative AI a novel input and see if it produces the correct output.”

    Generative AI is good at pattern matching. The differences in QTI implementations are likely to have patterns that an LLM can detect, even if those differences change over time (because, for example, one vendor’s QTI implementation changed over time).

    In fact, pattern matching on this scale could work very well with a smaller generative AI model. We’re used to talking about ChatGPT, Google Bard, and other big-name systems built from hundreds of billions of parameters. Think of parameters as computing legos. One major reason that ChatGPT is so impressive is that it uses a lot of computing legos. Which makes it expensive, slow, and computationally intensive. But if your goal is to match patterns against a set of relatively well-structured texts such as QTI files, you could probably train a much smaller model than ChatGPT to reliably translate between implementations for you. The smallest models, like Vicuña LLM, have only 7 billion parameters. That may sound like a lot, but it’s small enough to run on a personal computer (or possibly even a mobile phone). Think about it this way: the QTI task we’re trying to solve for is roughly equivalent in complexity to the spell-checking and one-word type-ahead functions that you have on your phone today. A generative AI model for fixing QTI imports could probably be trained for a few hundred dollars and run for pennies.

    This use case has some other desirable characteristics. First, it doesn’t have to work at high volume in real time. It can be a batch process. Throw the dirty dishes in the dishwasher, turn it on, and take out the clean dishes when the machine shuts off. Second, the task has no significant security risks and wouldn’t expose any personally identifiable information. Third, nothing terrible happens if the thing gets a conversion wrong every now and then. Maybe the organization would have to fix 5% of the conversions rather than 100%. And overall, it should be relatively cheap. Maybe not as cheap as running an old-fashioned deterministic program that’s optimized for efficiency. But maybe cheap enough to be worth it. Particularly if the organization has to keep adding new and different QTI implementation imports. It might be easier and faster to adjust the model with fine-tuning or prompting than it would be to revise a set of if/then statements in a traditional program.
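
    To make the idea concrete, here’s a minimal sketch of what such a batch converter might look like. The call_llm() function is a hypothetical stand-in for whichever code-capable chat model you use, and the QTI snippets and file paths are invented placeholders, not real vendor exports.

    ```python
    # Few-shot QTI conversion, run as a batch job. call_llm() is a
    # hypothetical hook; the example pairs are placeholders for real
    # before/after files your developers have already fixed by hand.
    from pathlib import Path

    FIXED_EXAMPLES = [
        # (vendor's broken QTI export, the version your importer accepts)
        ("<assessmentItem>...vendor A quirks...</assessmentItem>",
         "<assessmentItem>...hand-fixed version...</assessmentItem>"),
        ("<assessmentItem>...vendor B quirks...</assessmentItem>",
         "<assessmentItem>...hand-fixed version...</assessmentItem>"),
    ]

    def build_prompt(broken_qti: str) -> str:
        """Embed the before/after pairs, then the novel input to convert."""
        parts = ["You convert nonstandard QTI exports into QTI that our "
                 "importer accepts. Follow the patterns in these examples."]
        for before, after in FIXED_EXAMPLES:
            parts.append(f"### Input\n{before}\n### Output\n{after}")
        parts.append(f"### Input\n{broken_qti}\n### Output")
        return "\n\n".join(parts)

    def call_llm(prompt: str) -> str:
        """Hypothetical: send the prompt to your chat model, return its reply."""
        raise NotImplementedError

    if __name__ == "__main__":
        # Batch mode: load the dishwasher, run it, unload the clean dishes.
        for path in Path("imports").glob("*.xml"):
            fixed = call_llm(build_prompt(path.read_text()))
            Path("converted", path.name).write_text(fixed)
    ```

    If a new vendor’s quirks show up, adding one or two fixed examples to FIXED_EXAMPLES is often easier than revising a thicket of if/then rules.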

    How would the need for skilled programmers change? Somebody would still need to understand how the QTI mappings work well enough to keep the generative AI humming along. And somebody would have to know how to take care of the AI itself (although that process is getting easier every day, especially for this kind of a use case). The repetitive work they are doing now would be replaced by the software over time, freeing up human brains for the things they are particularly good at. In other words, you can’t get rid of your programmer, but you can have that person engage in more challenging, high-value work than import-bug whack-a-mole.

    How does it change the standards-making process? In the short term, I’d argue that 1EdTech should absolutely try to build an open-source generative AI of the type I’m describing rather than trying to fix QTI, a task at which it has not succeeded in over 20 years. This strikes me as a far shorter path to achieving the original purpose for which QTI was intended, which is to move question banks from one system to another.

    This conclusion, in turn, leads to a larger question: Do we need interoperability standards bodies in the age of AI?

    My answer is a resounding “yes.”

    Going a step further: software integration

    QTI provides data portability but not integration. It’s an import/export format. The fact that Google Docs can open up a document exported from Microsoft Word doesn’t mean that the two programs are integrated in any meaningful way.

    So let’s consider Learning Tools Interoperability (LTI). LTI was quietly revolutionary. Before it existed, any company building a specialized educational tool would have to write separate integrations for every LMS.

    The nature of education is that it’s filled with what folks in the software industry would disparagingly call “point solutions.” If you’re teaching students how to program in Python, you need a Python programming environment simulator. But that tool won’t help a chemistry professor who really needs virtual labs and molecular modeling tools. And none of these tools are helpful for somebody teaching English composition. There simply isn’t a single generic learning environment that will work well for teaching all subjects. None of these tools will ever sell enough to make anybody rich.

    Therefore, the companies that make these necessary niche teaching tools will tend to be small. In the early days of the LMS, they couldn’t afford to write a separate integration for every LMS. Which meant that not many specialized learning tools were created. As small as these companies’ target markets already were, many of them couldn’t afford to limit themselves to the subset of, say, chemistry professors whose universities happened to use Blackboard. It didn’t make economic sense.

    LTI changed all that. Any learning tool provider could write an integration once and have their product work with every LMS. Today, 1EdTech lists 240 products that are officially certified as supporting the LTI standard. Many more support the standard but are not certified.

    Would LTI have been created in a world in which generative AI existed? Maybe not. The most straightforward analogy is Zapier, which connects different software systems via their APIs. ChatGPT and its ilk could act as an instant Zapier. A programmer could feed the API documentation of both systems to a generative AI, ask it to write an integration for a particular purpose, and then ask the same AI for help with any debugging.
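
    As a rough sketch of that workflow, under the same assumption of a hypothetical call_llm() hook and invented file names:

    ```python
    # Sketch of the "instant Zapier" idea: hand the model both systems' API
    # documentation plus a goal, and get back draft glue code for a human
    # engineer to review, test, and harden.
    from pathlib import Path

    def draft_integration(goal: str, docs_a: str, docs_b: str) -> str:
        """Ask the model for first-draft integration code."""
        prompt = (
            f"Goal: {goal}\n\n"
            f"API documentation for system A:\n{docs_a}\n\n"
            f"API documentation for system B:\n{docs_b}\n\n"
            "Write a small, well-commented integration that accomplishes the goal."
        )
        return call_llm(prompt)

    def call_llm(prompt: str) -> str:
        """Hypothetical hook for whichever code-capable chat model you use."""
        raise NotImplementedError

    if __name__ == "__main__":
        # Invented file names; in practice these are the two systems' real docs.
        draft = draft_integration(
            goal="When a final grade is posted in the LMS, push it to the SIS.",
            docs_a=Path("lms_api.md").read_text(),
            docs_b=Path("sis_api.md").read_text(),
        )
        print(draft)  # an engineer reviews and debugs this, with the AI's help
    ```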

    Again, notice that one still needs a programmer. Somebody needs to be able to read the APIs, understand the goals, think about the trade-offs, give the AI clear instructions, and check the finished program. The engineering skills are still necessary. But the work of actually writing the code is greatly reduced. Maybe by enough that generative AI would have made LTI unnecessary.

    But probably not. LTI connections pass sensitive student identity and grade information back and forth. It has to be secure and reliable. The IT department has legal obligations, not to mention user expectations, that a well-tested standard helps alleviate (though not eliminate). On top of that, it’s just a bad idea to have bits of glue code spread here, there, and everywhere, regardless of whether a human or a machine writes it. Somebody—an architect—needs to look at the big picture. They need to think about maintainability, performance, security, data management, and a host of other concerns. There is value in having a single integration standard that has been widely vetted and follows a pattern of practices that IT managers can handle the same way across a wide range of product integrations.

    At some point, if a software integration fails to pass student grades to the registrar or leaks personal data, a human is responsible. We’re not close to the point where we can turn over ethical or even intellectual responsibility for those challenges to a machine. If we’re not careful, generative AI will simply write spaghetti code much faster than we did in the old days.

    The social element of knowledge work

    More broadly, there are two major value components to the technical interoperability standards process. The first is obvious: technical interoperability. It’s the software. The second is where the deeper value lies. It’s in the conversation that leads to the software. I’ve participated in a 1EdTech specification working group. When the process went well, we learned from each other. Each person at that table brought a different set of experiences to an unsolved problem. In my case, the specification we were working on sent grade rosters from the SIS to the LMS and final grades back from the LMS to the SIS. It sounds simple. It isn’t. We each brought different experiences and lessons learned regarding many aspects of the problem, from how names are represented in different cultures to how SIS and LMS users think differently in ways that impact interoperability. In the short term, a standard is always a compromise. Each creator of a software system has to make adjustments that accommodate the many ways in which others thought differently when they built their own systems. But if the process works right, everybody goes home thinking a little differently about how their systems could be built better for everybody’s benefit. In the longer term, the systems we continue to build over time reflect the lessons we learn from each other.

    Generative AI could make software integration easier. But without the conversation of the standards-making process, we would lose the opportunity to learn from each other. And if AI can reduce the time and cost of the former, then maybe participants in the standards-making effort will spend more time and energy on the latter. The process would have to be rejiggered somewhat. But at least in some cases, participants wouldn’t have to wait until the standard was finalized before they started working on implementing it. When the cost of implementation is low enough and the speed is fast enough, the process can become more of an iterative hackathon. Participants can build working prototypes more quickly. They would still have to go back to their respective organizations and do the hard work of thinking through the implications, finding problems or trade-offs and, eventually, hardening the code. But at least in some cases, parts of the standards-making process could be more fluid and rapidly iterative than they have been. We could learn from each other faster.

    This same principle could apply inside any organization or partnership in which different groups are building different software components that need to work together. Actual knowledge of the code will still be important to check and improve the work of the AI in some cases and write code in others. Generative AI is not ready to replace high-quality engineers yet. But even as it improves, humans will still be needed.

    Anthropologist John Seely Brown famously traced a drop in Xerox copier repair quality to a change in the company’s lunch schedule for its repair technicians. It turns out that technicians learn a lot from solving real problems in the field and then sharing war stories with each other. When the company changed the schedule so that technicians had less time together, repair effectiveness dropped noticeably. I don’t know if a software program was used to optimize the scheduling, but one could easily imagine that being the case. Algorithms are good at concrete problems like optimizing complex schedules. On the other hand, they have no visibility into what happens at lunch or around the coffee pot. Nobody writes those stories down. They can’t be ingested and processed by a large language model. Nor can they be put together in novel ways by quirky human minds to come up with new insights.

    That’s true in the craft of copier repair and definitely true in the craft of software engineering. I can tell you from direct experience that interoperability standards-making is much the same. We couldn’t solve the seemingly simple problem of getting the SIS to talk to the LMS until we realized that registrars and academics think differently about what a “class” or a “course” is. We figured that out by talking with each other and with our customers.

    At its heart, standards-making is a social process. It’s a group of people who have been working separately on solving similar problems coming together to develop a common solution. They do this because they’ve decided that the cost/benefit ratio of working together is better than the ratio they’ve achieved when working separately. AI lowers the costs of some work. But it doesn’t yet provide an alternative to that social interaction. If anything, it potentially lowers some of the costs of collaboration by making experimentation and iteration cheaper—if and only if the standards-making participants embrace and deliberately experiment with that change.

    That’s especially true the more 1EdTech tries to have a direct role in what it refers to as “learning impact.”

    The knowledge that’s not reflected in our words

    In 2019, I was invited to give a talk at a 1EdTech summit, which I published a version of under the title “Pedagogical Intent and Designing for Inquiry.” Generative AI was nowhere on the scene at the time. But machine learning was. At the same time, long-running disappointment and disillusionment with learning analytics—analytics that actually measure students’ progress as they are learning—was palpable.

    I opened my talk by speculating about how machine learning could have helped with SIS/LMS integration, much as I speculated earlier in the post about how generative AI might help with QTI:

    Now, today, we would have a different possible way of solving that particular interoperability problem than the one we came up with over a decade ago. We could take a large data set of roster information exported from the SIS, both before and after the IT professionals massaged it for import into the LMS, and aim a machine learning algorithm at it. We then could use that algorithm as a translator. Could we solve such an interoperability problem this way? I think that we probably could. I would have been a weaker product manager had we done it that way, because I wouldn’t have gone through the learning experience that resulted from the conversations we had to develop the specification. As a general principle, I think we need to be wary of machine learning applications in which the machines are the only ones doing the learning. That said, we could have probably solved such a problem this way and might have been able to do it in a lot less time than it took for the humans to work it out.

    I will argue that today’s EdTech interoperability challenges are different. That if we want to design interoperability for the purposes of insight into the teaching and learning process, then we cannot simply use clever algorithms to magically draw insights from the data, like a dehumidifier extracting water from thin air. Because the water isn’t there to be extracted. The insights we seek will not be anywhere in the data unless we make a conscious effort to put them there through design of our applications. In order to get real teaching and learning insights, we need to understand the intent of the students. And in order to understand that, we need insight into the learning design. We need to understand pedagogical intent.

    That new need, in turn, will require new approaches in interoperability standards-making. As hard as the challenges of the last decade have been, the challenges of the next one are much harder. They will require different people at the table having different conversations.

    Pedagogical Intent and Designing for Inquiry

    The core problem is that the key element for interpreting both student progress and the effectiveness of digital learning experiences—pedagogical intent—is not encoded in most systems. No matter how big your data set is, it doesn’t help you if the data you need aren’t in it. For this reason, I argued, fancy machine learning tricks aren’t going to give us shortcuts.

    That problem is the same, and perhaps even worse in some ways, with generative AI. All ChatGPT knows is what it’s read on the internet. And while it’s made progress in specific areas at reading between the lines, the fact is that important knowledge, including knowledge about applied learning design, simply is extremely scarce in the data it can access and even in the data living in our learning systems that it can’t access.

    The point of my talk was that interoperability standards could help by supplying critical metadata—context—if only the standards makers set that as their purpose, rather than simply making sure that quiz questions end up in the right place when migrating from one LMS to another.

    I chose to open the talk by highlighting the ambiguity of language that enables us to make art. I chose this passage from Shakespeare’s final masterpiece, The Tempest:

    O wonder!
    How many goodly creatures are there here!
    How beauteous mankind is! O brave new world
    That has such people in’t!

    William Shakespeare, The Tempest

    It’s only four lines. And yet it is packed with double entendres and the ambiguity that gives actors room to make art:

    Here’s the scene: Miranda, the speaker, is a young woman who has lived her entire life on an island with nobody but her father and a strange creature whom she may think of as a brother, a friend, or a pet. One day, a ship becomes grounded on the shore of the island. And out of it comes, literally, a handsome prince, followed by a collection of strange (and presumably virile) sailors. It is this sight that prompts Miranda’s exclamation.

    As with much of Shakespeare, there are multiple possible interpretations of her words, at least one of which is off-color. Miranda could be commenting on the hunka hunka manhood walking toward her.

    “How beauteous mankind is!”

    Or. She could be commenting on how her entire world has just shifted on its axis. Until that moment, she knew of only two other people in all of existence, each of whom she had known her entire life and with each of whom she had a relationship that she understood so well that she took it for granted. Suddenly, there was literally a whole world of possible people and possible relationships that she had never considered before that moment.

    “O brave new world / That has such people in’t”

    So what is on Miranda’s mind when she speaks these lines? Is it lust? Wonder? Some combination of the two? Something else?

    The text alone cannot tell us. The meaning is underdetermined by the data. Only with the metadata supplied by the actor (or the reader) can we arrive at a useful interpretation. That generative ambiguity is one of the aspects of Shakespeare’s work that makes it art.

    But Miranda is a fictional character. There is no fact of the matter about what she is thinking. When we are trying to understand the mental state of a real-life human learner, then making up our own answer because the data are not dispositive is not OK. As educators, we have a moral responsibility to understand a real-life Miranda having a real-life learning experience so that we can support her on her journey.

    Pedagogical Intent and Designing for Inquiry

    Generative AI like ChatGPT can answer questions about different ways to interpret Miranda’s lines in the play because humans have written about this question and made their answers available on the internet. If you give the chatbot an unpublished piece of poetry and ask it for an interpretation, its answers are not likely to be reliably sophisticated. While larger models are getting better at reading between the lines—a topic for a future blog post—they are not remotely as good as humans are at this yet.

    Making the implicit explicit

    This limitation of language interpretation is central to the challenge of applying generative AI to learning design. ChatGPT has reignited fantasies about robot tutors in the sky. Unfortunately, we’re not giving the AI the critical information it needs to design effective learning experiences:

    The challenge that we face as educators is that learning, which happens completely inside the heads of the learners, is invisible. We can not observe it directly. Accordingly, there are no direct constructs that represent it in the data. This isn’t a data science problem. It’s an education problem. The learning that is or isn’t happening in the students’ heads is invisible even in a face-to-face classroom. And the indirect traces we see of it are often highly ambiguous. Did the student correctly solve the physics problem because she understands the forces involved? Because she memorized a formula and recognized a situation in which it should be applied? Because she guessed right? The instructor can’t know the answer to this question unless she has designed a series of assessments that can disambiguate the student’s internal mental state.

    In turn, if we want to find traces of the student’s learning (or lack thereof) in the data, we must understand the instructor’s pedagogical intent that motivates her learning design. What competency is the assessment question that the student answered incorrectly intended to assess? Is the question intended to be a formative assessment? Or summative? If it’s formative, is it a pre-test, where the instructor is trying to discover what the student knows before the lesson begins? Is it a check for understanding? A learn-by-doing exercise? Or maybe something that’s a little more complex to define because it’s embedded in a simulation? The answers to these questions can radically change the meaning we assign to a student’s incorrect answer to the assessment question. We can’t fully and confidently interpret what her answer means in terms of her learning progress without understanding the pedagogical intent of the assessment design.

    But it’s very easy to pretend that we understand what the students’ answers mean. I could have chosen any one of many Shakespeare quotes to open this section, but the one I picked happens to be the very one from which Aldous Huxley derived the title of his dystopian novel Brave New World. In that story, intent was flattened through drugs, peer pressure, and conditioning. It was reduced to a small set of possible reactions that were useful in running the machine of society. Miranda’s words appear in the book in a bitterly ironic fashion from the mouth of the character John, a “savage” who has grown up outside of societal conditioning.

    We can easily develop “analytics” that tell us whether students consistently answer assessment questions correctly. And we can pretend that “correct answer analytics” are equivalent to “learning analytics.” But they are not. If our educational technology is going to enable rich and authentic vision of learning rather than a dystopian reductivist parody of it, then our learning analytics must capture the nuances of pedagogical intent rather than flattening it.

    This is hard.

    Pedagogical Intent and Designing for Inquiry

    Consider the following example:

    A professor knows that her students tend to develop a common misconception that causes them to make practical mistakes when applying their knowledge. She very carefully crafts her course to address this misconception. She writes the content to address it. In her tests, she provides wrong answer choices—a.k.a. “distractors”—that students would choose if they had the misconception. She can tell, both individually and collectively, whether her students are getting stuck on the misconception by how often they pick the particular distractor that fits with their mistaken understanding. Then she writes feedback that the students see when they choose that particular wrong answer. She crafts it so that it doesn’t give away the correct answer but does encourage students to rethink their mistakes.

    Imagine if all this information were encoded in the software. The hierarchy would look something like this:

    • Here is learning objective (or competency) 1
      • Here is content about learning objective 1
        • Here is assessment question A about learning objective 1.
          • Here is distractor c in assessment question A. Distractor c addresses misconception alpha.
            • Here is feedback to distractor c. It is written specifically to help students rethink misconception alpha without giving away the answer to question A. This is critical because if we simply tell the student the answer to question A then we can’t get good data about the likelihood that the student has mastered learning objective 1.

    All of that information is in the learning designer’s head and, somehow, implicitly embedded in the content in subtle details of the writing. But good luck teasing it out by just reading the textbook if you aren’t an experienced teacher of the subject yourself.

    What if these relationships were explicit in the digital text? For individual students, we could tell which ones were getting stuck on a specific misconception. For whole courses, we could identify the spots that are causing significant numbers of students to get stuck on a learning objective or competency. And if that particular sticking point causes students to be more likely to fail either that course or a later course that relies on a correct understanding of a concept, then we could help more students persist, pass, stay in school, and graduate.

    That’s how learning analytics can work if learning designers (or learning engineers) have tools that explicitly encode pedagogical intent into a machine-readable format. They can use machine learning to help them identify and smooth over tough spots where students tend to get stuck and fall behind. They can find the clues that help them identify hidden sticking points and adjust the learning experience to help students navigate those rough spots. We know this can work because, as I wrote about in 2012, Carnegie Mellon University (among others) has been refining this science and craft for decades.
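
    To make “machine-readable” concrete, here is a minimal sketch of how the hierarchy from the example above might be encoded. The schema and field names are invented for illustration; this is not any actual 1EdTech specification.

    ```python
    # A hypothetical machine-readable encoding of pedagogical intent.
    # The schema is invented for illustration, not a 1EdTech specification.
    from dataclasses import dataclass, field

    @dataclass
    class Distractor:
        text: str
        misconception: str   # which misconception choosing this answer signals
        feedback: str        # nudges rethinking without giving the answer away

    @dataclass
    class AssessmentQuestion:
        prompt: str
        purpose: str         # e.g., "pre-test", "check-for-understanding", "summative"
        correct_answer: str
        distractors: list[Distractor] = field(default_factory=list)

    @dataclass
    class LearningObjective:
        objective_id: str
        description: str
        content: list[str] = field(default_factory=list)
        questions: list[AssessmentQuestion] = field(default_factory=list)

    # The hierarchy from the earlier example, encoded explicitly.
    lo1 = LearningObjective(
        objective_id="LO-1",
        description="Learning objective 1",
        content=["Content about learning objective 1"],
        questions=[
            AssessmentQuestion(
                prompt="Assessment question A about learning objective 1",
                purpose="check-for-understanding",
                correct_answer="(the correct answer)",
                distractors=[
                    Distractor(
                        text="Distractor c",
                        misconception="misconception alpha",
                        feedback=("Helps students rethink misconception alpha "
                                  "without giving away the answer to question A."),
                    ),
                ],
            ),
        ],
    )
    ```

    Encoded this way, “how many students chose distractor c?” stops being a mere correct-answer statistic and becomes a learning question: how many students are stuck on misconception alpha?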

    Generative AI adds an interesting twist. The challenge with all this encoding of pedagogical intent is that it’s labor-intensive. Learning designers often don’t have time to focus on the work required to identify and improve small but high-value changes because they’re too busy getting the basics done. But generative AI that creates learning experiences modeled after the pedagogical metadata in the educational content it is trained on could provide a leg up. It could substantially speed up the work of writing the first-draft content so that designers can focus on the high-value improvements that humans are still better at than machines.

    Realistically, for example, generative AI is not likely to know the particular common misconceptions that block students from mastering a competency, or how to probe for and remediate those misconceptions. But if it were trained on the right models, it could generate good first-draft content through a standards-based metadata format that could be imported into a learning platform. The format would have explicit placeholders for those critical probes and hints. Human experts, supported by machine learning, could focus their time on finding and remediating these sticking points in the learning process. Their improvements would be encoded with metadata, providing the AI with better examples of what effective educational content looks like. Which would enable the AI to generate better first-draft content.

    1EdTech could help bring about such a world through standards-making. But they’d have to think about the purpose of interoperability differently, bring different people to the table, and run a different kind of process.

    O brave new world that has such skilled people in’t

    I spoke recently to the head of product development for an AI-related infrastructure company. His product could enable me to eliminate hallucinations while maintaining references and links to original source materials, both of which would be important in generating educational content. I explained a more elaborate version of the basic idea in the previous section of this post.

    “That’s a great idea,” he said. “I can think of a huge number of applications. My last job was at Google. The training was terrible.”

    Google. The company that’s promoting the heck out of their free AI classes. The one that’s going to “disrupt the college degree” with their certificate programs. The one that everybody holds up as leading the way past traditional education and toward skills-based education.

    Their training is “terrible.”

    Yes. Of course it is. Because everybody’s training is terrible. Their learning designers have the same problem I described academic learning designers as having in the previous section. Too much to develop, too little time. Only much, much worse. Because they have far fewer course design experts (if you count faculty as course design experts). Those people are the first to get cut. And EdTech in the corporate space is generally even worse than academic EdTech. Worst of all? Nobody knows what anybody knows or what anybody needs to know.

    Academia, including 1EdTech and several other standards bodies funded by corporate foundations, is pouring incredible amounts of time, energy, and money into building a data pipeline for tracking skills. Skill taxonomies move from repositories to learning environments, where evidence of student mastery is attached to those skills in the form of badges or comprehensive learner records, which are then sent off to repositories and wallets.
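
    Schematically, that pipeline looks something like the sketch below. The function names and record shapes are stand-ins of my own, not the actual Open Badges or Comprehensive Learner Record specifications.

        # Schematic of the skills data pipeline described above.
        # Names and record shapes are illustrative, not any actual spec.
        skill_taxonomy = {"sql-joins": "Combine tables with JOIN clauses"}

        def attach_evidence(learner_id, skill_id, evidence_url):
            """Attach evidence of mastery to a skill drawn from the
            taxonomy, producing a badge-like credential."""
            assert skill_id in skill_taxonomy, "skill must come from the repository"
            return {"learner": learner_id, "skill": skill_id,
                    "evidence": evidence_url, "type": "badge"}

        def send_to_wallet(wallet, credential):
            """Forward the credential to the learner's record or wallet."""
            wallet.setdefault(credential["learner"], []).append(credential)

        wallet = {}
        send_to_wallet(wallet, attach_evidence(
            "learner-42", "sql-joins", "https://example.edu/evidence/1"))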

    The problem is, pipelines are supposed to connect endpoints. They move something valuable from the place where it is found to the place where it is needed. Many valuable skills are not well documented, if they are documented at all. They appear quickly and change all the time. The field of knowledge management has largely failed to capture this information in a timely and useful way, despite decades of trying. And “knowledge” management has tended to focus on facts, which are easier to track than skills.

    In other words, the biggest challenge facing folks interested in job skills is not an ocean of well-understood skill information that needs to be organized but rather a problem of non-consumption. There isn’t enough real-world, real-time skill information flowing into the pipeline, and there are few people with real uses for it on the other side. Almost nobody in any company turns to their L&D department to solve the kinds of skills problems that help people become more productive and advance in their careers. Certainly not at scale.

    But the raw materials for solving this problem exist. As a CEO of HP once famously noted, the company knows a lot. It just doesn’t know what it knows.

    Knowledge workers do record new and important work-related information, even if it’s in the form of notes and rough documents. Increasingly, we have meeting transcripts, thanks to videoconferencing and AI speech-to-text capabilities. These artifacts could be used to train a large language model on skills as they emerge and are needed. If we could dramatically lower the cost and time required to create just-in-time, just-enough skills training, then the pipeline of skills taxonomies and skill tracking would become a lot more useful. And we’d learn a lot about how it needs to be designed, because we’d have many more real-world applications.
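
    As a sketch of what that could look like: the code below asks an LLM to mine each transcript for skills and then counts which skills recur. The prompt and the injected llm_call function are placeholders of mine, not a real API; any chat-style model client could stand in for it.

        from collections import Counter

        # Sketch of mining transcripts for emerging skills. The prompt and
        # the injected llm_call are placeholder assumptions, not a real API.
        def extract_skills(transcript, llm_call):
            """Ask an LLM to list the job skills discussed or demonstrated
            in a transcript, one skill per line."""
            prompt = ("List the specific job skills discussed or demonstrated "
                      "in this meeting transcript, one per line:\n\n" + transcript)
            return [line.strip() for line in llm_call(prompt).splitlines()
                    if line.strip()]

        def emerging_skills(transcripts, llm_call, min_count=2):
            """Skills that recur across transcripts become candidates for
            just-in-time training content and for the skills pipeline."""
            counts = Counter(skill.lower()
                             for transcript in transcripts
                             for skill in extract_skills(transcript, llm_call))
            return [skill for skill, n in counts.items() if n >= min_count]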

    The first pipeline we need is from skill discovery to learning content production. It’s a huge one, we’ve known about it for many decades, and we’ve made very little progress on it. Groups like 1EdTech could help us to finally make progress. But they’d have to rethink the role of interoperability standards in terms of the purpose and value of data, particularly in an AI-fueled world. This, in turn, would not only help match worker skills with labor market needs more quickly and efficiently but also create a huge industry of AI-aided learning engineers.

    Summing it up

    So where does this leave us? I see a few lessons:

    • In general, lowering the cost of coding through generative AI doesn’t eliminate the need for technical interoperability standards groups like 1EdTech. But it could narrow the value proposition for their work as currently applied in the market.
    • Software engineers, learning designers, and other skilled humans have important skills and tacit knowledge that don’t show up in text and therefore can’t be hoovered up by a generative AI that swallows the internet. These skilled individuals will still be needed for some time to come.
    • We often gain access to tacit knowledge and valuable skills when skilled individuals talk to each other. The value of collaborative work, including standards work, is still high in a world of generative AI.
    • We can capture some of that tacit knowledge and those skills in machine-readable format if we set that as a goal. While doing so is not likely to lead to machines replacing humans in the near future (at least in the areas I’ve described in this post), it could lead to software that helps humans get more work done and spend more of their time working on hard problems that quirky, social human brains are good at solving.
    • 1EdTech and its constituents have more to gain than to lose by embracing generative AI thoughtfully. While I won’t draw any grand generalizations from this, I invite you to apply the thought process of this blog post to your own worlds and see what you discover.


  • More Than Half of College and University Employees Say They Are Likely to Look for Other Employment in the Near Future – CUPA-HR


    by CUPA-HR | July 21, 2022

    New research from CUPA-HR shows that higher education institutions are in the midst of a talent crisis, as many staff, professionals and administrators are considering other employment opportunities due to dissatisfaction with their pay, their opportunities for advancement, their institutions’ remote and flex work policies, and more.

    The newly published research report, The CUPA-HR 2022 Higher Education Employee Retention Survey: Initial Results, provides an overview of what proportion of the higher ed workforce is at risk for leaving, why they’re considering leaving employment, and with which policies, work arrangements and benefits employees are satisfied or dissatisfied. The report includes several recommendations for addressing these issues.

    Data from 3,815 higher ed employees across 949 institutions and representing 15 departments/functional areas were analyzed for this report.

    Findings

    Higher ed employees are looking for other jobs, mostly because they desire a pay increase. More than half (57%) of the higher ed workforce is at least somewhat likely to look for other employment opportunities in the next 12 months. The most common reason for seeking other employment (provided by three-fourths of those likely to look for another job) is an increase in pay. Other reasons are that they desire more remote work opportunities, a more flexible schedule, and a promotion or more responsibility.

    Higher ed institutions are not providing the remote work opportunities that employees want. Nearly three-fourths (71%) of employees report that most of their duties can be performed remotely, and 69% would prefer at least a partially remote work arrangement, yet 63% are working mostly or completely on-site.

    Higher ed employees are working longer and harder than ever. Two-thirds (67%) of full-time staff typically work more hours each week than what is considered full-time. Nearly two-thirds (63%) have taken on additional responsibilities of other staff who have recently left, and nearly three-fourths (73%) have taken on additional responsibilities as a direct result of the pandemic.

    Higher ed employees have clear areas of satisfaction and dissatisfaction. Areas of satisfaction include benefits, relationship with supervisor, job duties, and feeling a sense of belonging. Areas of dissatisfaction include investment in career development, opportunities for advancement, fair pay, remote work policies and parental leave.

    Read the full report.




  • UBC Future Forward – GlobalHigherEd


    This entry is also available in Inside Higher Ed.

    ~~~~~~

    As I outlined back on 9 August 2015 in Inside Higher Ed, the unexpected leadership transition at the University of British Columbia (UBC) in summer 2015 had all the ingredients to become a major crisis. And a ‘barn-burner’ of a crisis has certainly emerged, sad to say. As a concerned alum, I do hope my alma mater can move forward. From my perspective, nearly seven months later (amid a possible vote of non-confidence in the Board of Governors and an ongoing presidential search), it’s worth flagging two key problems and then three correctional action suggestions.

    On Problems

    First, if mistakes were made in the processes by which Professor Gupta was hired, institutionally supported in his first year on the job, and/or resigned, they need to be analyzed and openly communicated. No institution or key leader is perfect – that’s life. World-class universities sing their praises and own their mistakes. Moving on is more difficult if key stakeholders with power adopt a consistently defensive posture and if important mistakes are not publicly owned. Fortunately, process factors are not typically entangled in non-disclosure agreements (NDAs).

    Second, UBC, including the Board of Governors, needs to publicly commit to becoming a more transparent organization, as this seems to be a core theme of the various conflicts. If and when the principle of enhanced transparency is committed to, detailed changes need to be devised and outlined in a systematic way. Discourse about transparency is not enough – a strategic plan with deliverables and deadlines is needed. For example, many boards of governors (or equivalent) live stream and then archive all regularly scheduled meetings. Live streaming and archiving important committee meetings is also possible. Err on the side of transparency. And in doing so, use transparency, as many of the world’s best universities do, as a mechanism to enhance engagement with key stakeholders within the organization. Why? Because engagement in a shared governance context improves information flows in all directions, as well as the quality of decisions and associated outcomes. Finally, if a major crisis were to emerge in the near future, take into account this higher education crisis expert’s view: “Our first line for every client is, ‘Tell the truth, tell it all, tell it first.’”

    On Correctional Actions

    I’ll preface my three correctional action suggestions with a statement that UBC is a very fortunate university – it’s a high-quality and respected institution with a relatively stable financial footing. And we’re also fortunate that a respected leader like president emeritus Martha Piper is acting as Interim President and Vice-Chancellor. This interim role is critically important to moving forward. Only an experienced and broadly trusted interim president can play the unique university-wide role of helping to repair broken communications and create psychic healing measures. This is a nebulous but vital role for any president to focus on, from start to finish, amid a governance/leadership crisis.

    In terms of correctional actions, it’s first worth noting that many universities and higher education systems are revisiting their governance structures. A formal independent governance review is worth considering. At a minimum, it’s worth commissioning one or more independent studies of UBC’s governance in comparative perspective, with attention to higher education structures and systems, variations in autonomy and transparency, and the changing context for provincial/higher education relationships. A condition of ‘legacy governance’ exists right now, in that our governance systems and procedures reflect earlier eras of revenue streams, very different political and technological contexts, and now-dated understandings of the roles of universities in the development of economy and society. Students, in particular, are underrepresented in governance systems vis-à-vis their majority role (in many contexts) in providing the revenue streams that sustain universities. It’s also worth noting that crises at several of UBC’s peer universities have been associated with a lack of awareness, at the governing board level, of how shared governance works, including what roles various formal and informal governance bodies play and how these governance systems interconnect. Conversely, many faculty, staff, and students associated with shared governance bodies do not understand what roles boards of governors (or equivalent) are required to play. In short, an open and transparent examination of governance structures and practices could, if done well, enhance levels of knowledge while reducing mistrust and erroneous assumptions.

    Second, and as noted here, unexpected leadership transitions generate enormous attention to the cultural, economic, and political forces reshaping universities, as well as to the associated lines of power that bring these forces to life. A crisis is a wonderful teaching and learning moment. But do this in a systematic way! For example, launch a UBC Futures seminar series; provide modest funding to spur on some unique courses and workshops on related issues in the 2016-17 academic year; work with the BC Open Textbook Project or UBC Press to develop a ‘living’ open text on the tumultuous times UBC has been going through, so everyone can learn, down the line, what went well and what did not; enable ethnographic research by social scientists in key governance bodies; etc. There is so much more that could be done to turn all the intellectual power at UBC in on itself so as to learn in a systematic rather than haphazard way. In short, grasp the moment and identify rigorous and intellectually stimulating mechanisms (though not ones associated with decision-making) to generate sustained and valuable learning-oriented experiences.

    Third, take the medium-term view regarding the ongoing presidential search and shape the search process to rebuild community. Some universities can hire within one year after an unexpected leadership transition, while others take 2-3 years of ‘bridging’ leadership. The University of Wisconsin-Madison, for example, dealt with an unexpected leadership crisis in 2011 by bringing back David Ward, our former chancellor (president equivalent) and president emeritus of the American Council on Education (which has 1,700+ member institutions). Interim Chancellor Ward did a wonderful job from 2011-2013 in what he wittily defined as his “Chancellor Encore” role. Ward helped to repair broken communications, including by framing and bringing to life psychic healing measures (e.g., new modes of vertical and lateral communications inside and outside of UW-Madison). His sturdy two-year leadership bridge led to the successful 2013 hiring of our current chancellor, Rebecca Blank, formerly President Obama’s acting secretary of commerce.

    As Martha Piper aptly put it in her statement about UBC’s Centennial celebrations:

    Looking forward, our future is unwritten. What we learn, discover, and contribute together will depend on the strength of our connections – to all of our communities, local and global.

    On this note, it’s critically important to ensure that the presidential search process helps build connections in the UBC community while creating a positive pathway forward. If presidential search process troubles and tensions exist, recognize them openly and do something about them. For example, put the presidential search process on pause for one week (or even a long weekend), coordinate a facilitated off-campus retreat with an objective and independent governance/leadership expert, and dig in to honestly explore causes and realistic solutions on a face-to-face basis. Evidence elsewhere suggests that all university presidential searches distill and condense what is working well with respect to governance, what is functioning adequately, and what is problematic. Presidential searches are lenses into the heart of the governance of the university, inevitably exposing both buried and surface tensions, as well as uneven power geometries. In short, there are opportunities and risks associated with all presidential searches. Given this, it’s important to always take the medium-term view. UBC is a wonderful university, and it will have many options: take time to make the right presidential search process choices, and in so doing strengthen the entire community (alums included!).

    Kris Olds

    Photos courtesy of @TrishJewison, Eye in the Sky Traffic Reporter for @GlobalBC and @AM730Traffic.



