Tag: Program

  • New Program Strategy: Go Deep, Not Wide

    New Program Strategy: Go Deep, Not Wide

    How to Strategically Expand Your Online Adult Degree Programs

    So you’ve built a successful online adult degree program. No small feat. Now you need to keep your foot on the gas to keep the momentum going. 

    Your first instinct might be to “go wide” with your program expansion strategy by launching a variety of new, unrelated programs to pair with your successful offering. While this diversification strategy might reap great rewards for consumer packaged goods giants like Unilever and Procter & Gamble, higher education is different. Your institution is different.  

    I find myself making the following recommendation over and over again when it comes to expanding online degree programs: Go deep, not wide. 

    This means building upon the success of your existing program by developing specialized offerings within the same field. The “go deep” method might not be the most popular, but in my experience, it’s often the most effective. Let’s break it down further — or should I say, dig deeper — to see if this approach is right for your school. 

    What Does Going Wide Mean for Your Online Adult Degree Programs?

    Let’s start with a hypothetical example: You have established a successful online Master of Business Administration (MBA) program with a positive reputation in the region. 

    Recently, you’ve heard cybersecurity and nursing degree programs are experiencing industry growth, so you decide to pursue programs in those areas next to build out a wider range of offerings. 

Unfortunately, this strategic path is often a mistake. Launching unrelated programs means starting over each time: siloed marketing, inefficient use of your institution’s limited resources, and none of the brand equity your MBA program has earned.

Expanding within the existing framework of business administration, by contrast, allows you to amplify that established brand equity rather than starting from scratch with each new offering.

    Why Going Deep Is More Effective

In higher education, the smart, strategic allocation of resources is crucial. You could put your institution’s limited resources toward a whole new program, such as a Bachelor of Science in Nursing (BSN) program or a Master of Science in Cybersecurity program. Or, you could attach a new or adjacent offering to your successful online MBA program and channel your resources into an area where you’re already established.

    Forget efficacy for a moment. Which strategy sounds more efficient? 

    The good news is that going deep in one area of program offerings is often more effective and efficient. Instead of developing an entirely new adult degree program from scratch, you can simply add value to your existing online business program. 

    This might come in the form of added concentration options, such as MBA concentrations in entrepreneurship, accounting, finance, marketing, management, or strategic communications. 

    It could also involve adding another relevant degree program within the same area of study. For example, since you’re seeing a lot of success with your MBA program, you could add a finance or accounting degree program to build on the success and reputation of the established program.

    Key Benefits of Going Deep With Your Online Adult Degree Programs

    I’ve had experiences both ways: some institutions go wide, others go deep. For those that go wide, I’ve often seen siloed marketing efforts, inefficient allocations of resources, and sporadic and unpredictable enrollment. For those that go deep, I see the following benefits: 

Increased Student Appeal

    Broadened appeal for students already interested in the primary program: By offering more concentrations within a well-established program, or adjacent degrees within the same field, your institution can appeal to a broader range of interests and career goals within your current student audience base.

    More options for prospective students due to increased specialization: Specialized degrees and concentrations allow students to tailor their education to their specific interests and career paths, making the program more attractive to applicants seeking focused expertise.

    Increased Marketing Efficiency

    Ability to leverage existing web pages and SEO for the main program: Concentration pages can be added as subpages to the main program’s page, which likely already has a strong search engine optimization (SEO) presence. This setup benefits from the existing search engine rankings and requires less effort than starting marketing from scratch for a new program.

    Faster path to high search rankings for new concentrations, creating a marketing loop: The SEO efforts for the main program boost the visibility of the new concentrations, which in turn contribute to the overall authority and ranking of the main program’s page. This synergy creates a self-reinforcing cycle that enhances the visibility of all offerings.

    Enhanced paid marketing efficiencies: Adding concentrations in areas where significant traffic already exists for broad terms — like “MBA,” “business degree,” or “finance degree” for an MBA program — allows institutions to more effectively utilize their paid advertising budgets. Expanding the program options for your existing traffic allows you to improve your click-to-lead conversion rates, increase your number of leads, and enhance your downstream successes in areas such as enrollments and completions. This approach allows for a more efficient use of marketing investments, providing more options for prospective students within the same budget.

    Faster Accreditation Process

    Streamlined accreditation process by expanding within an already accredited program: Adding concentrations within an existing program simplifies the accreditation process. Because the core program is already accredited, expanding it with concentrations requires fewer approvals and less bureaucracy than launching an entirely new program.

    Ready to Go Deep With One of Your Online Adult Degree Programs?

    If you’ve seen success with an online adult degree program offering, you’ve already taken a momentous step toward growth — which is something to be proud of. It also creates massive opportunity, and Archer Education is poised to help you capitalize on it. 

    Archer is different from other agencies. We work as your online growth enablement partner, helping you to foster self-sufficiency over the long haul through collaboration, storytelling, and cutting-edge student engagement technology. 

We’ve helped dozens of institutions increase enrollment and retention through a go-deep approach, and your institution could be next. And once you’ve solidified the reputation and success of your core online offering by going deep, we’ll be ready to help you pivot to a wider approach to expand your position in online learning.

    Contact us today to learn more about what Archer can do for you. 


    Source link

  • AI in Practice: Using ChatGPT to Create a Training Program

    AI in Practice: Using ChatGPT to Create a Training Program

    by Julie Burrell | September 24, 2024

Like many HR professionals, Colorado Community College System’s Jennifer Parker was grappling with an increase in incivility on campus. She set about creating a civility training program that would be convenient and interactive. However, she faced considerable hurdles in creating a virtual training program from scratch, solo. Parker’s creative answer to one of those hurdles — writing scripts for her under-10-minute videos — was to put ChatGPT to work for her.

    How did she do it? This excerpt from her article, A Kinder Campus: Building an AI-Powered, Repeatable and Fun Civility Training Program, offers several tips.

    Using ChatGPT for Training and Professional Development

    I love using ChatGPT. It is such a great tool. Let me say that again: it’s such a great tool. I look at ChatGPT as a brainstorming partner. I don’t use it to write my scripts, but I do use it to get me started or to fix what I’ve written. I ask questions that I already know the answer to. I’m not using it for technical guidance in any way.

    What should you consider when you use ChatGPT for scriptwriting and training sessions?

1. Make ChatGPT an expert. In my prompts, I often use the phrase, “Act like a subject matter expert on [a topic].” This helps define both the need and the audience for the information. If I’m looking for a list of reasons why people are uncivil on college campuses, I might prompt with, “Act like an HR director of a college campus and give me a list of ways employees are acting uncivil in the workplace.” Using this phrase sets parameters on the types of answers ChatGPT will offer and shapes the perspective of those answers so they are for and about higher ed HR. (A minimal code sketch of this kind of prompt appears after this list.)
    2. Be specific about what you’re looking for. “I’m creating a training on active listening. This is for employees on a college campus. Create three scenarios in a classroom or office setting of employees acting unkind to each other. Also provide two solutions to those scenarios using active listening. Then, create a list of action steps I can use to teach employees how to actively listen based on these scenarios.” Being as specific as possible can help get you where you want to go. Once I get answers from ChatGPT, I can then decide if I need to change direction, start over or just get more ideas. There is no wrong step. It’s just you and your partner figuring things out.
    3. Sometimes ChatGPT can get stuck in a rut. It will start giving you the same or similar answers no matter how you reword things. My solution is to start a new conversation. I also change the prompt. Don’t be afraid to play around, to ask a million questions, or even tell ChatGPT it’s wrong. I often type something like, “That’s not what I’m looking for. You gave me a list of______, but what I need is ______. Please try again.” This helps the system to reset.
    4. Once I get close to what I want, I paste it all in another document, rewrite, and cite my sources. I use this document as an outline to rewrite it all in my own voice. I make sure it sounds like how I talk and write. This is key. No one wants to listen to ChatGPT’s voice. And I guarantee that people will know if you’re using its voice — it has a very conspicuous style. Once I’ve honed my script, I ensure that I find relevant sources to back the information up and cite the sources at the end of my documents, just in case I need to refer to them.
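For readers who want to move this workflow from the ChatGPT web interface into a script (say, to draft several scenario variations at once), here is a minimal sketch that assumes the OpenAI Python SDK and an API key in the environment. The model name is a placeholder, the prompts simply restate Parker’s examples above, and the output still needs the same rewriting in your own voice and the same fact-checking she describes.

```python
# Minimal sketch: sending a "subject matter expert" style prompt through the
# OpenAI Python SDK (v1.x). Assumes OPENAI_API_KEY is set in the environment;
# the model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "Act like an HR director at a college campus. "
    "You are helping design a civility training program for employees."
)
user_prompt = (
    "Create three scenarios in a classroom or office setting of employees "
    "acting unkind to each other. For each scenario, provide two solutions "
    "that use active listening, then list action steps I can use to teach "
    "employees how to actively listen."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

# The draft still gets rewritten in your own voice and fact-checked,
# exactly as described in the steps above.
print(response.choices[0].message.content)
```

As with the web interface, if the results head in the wrong direction, adjust the prompts and run it again.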

    What you’ll see here is an example of how I used ChatGPT to help me write the scripts for the micro-session on conflict. It’s an iterative but replicable process. I knew what the session would cover, but I wanted to brainstorm with ChatGPT.

    Once I’ve had multiple conversations with the chatbot, I go back through the entire script and pick out what I want to use. I make sure it’s in my own voice and then I’m ready to record. I also used ChatGPT to help with creating the activities and discussion questions in the rest of the micro-session.

    I know using ChatGPT can feel overwhelming but rest assured that you can’t really make a mistake. (And if you’re worried the machines are going to take over, throw in a “Thank you!” or “You’re awesome!” occasionally for appeasement’s sake.)

    About the author: Jennifer Parker is assistant director of HR operations at the Colorado Community College System.

    More Resources

    • Read Parker’s full article on creating a civility training program with help from AI.
    • Learn more about ChatGPT and other chatbots.
    • Explore CUPA-HR’s Civility in the Workplace Toolkit.



    Source link

  • Getting Organic Engagement in a Mental Health Awareness Program – CUPA-HR

    Getting Organic Engagement in a Mental Health Awareness Program – CUPA-HR

    by Julie Burrell | July 15, 2024

    Employers have enormous sway over employee health. That’s one of the major takeaways from the CUPA-HR webinar An Integrated Approach to Fostering Workplace Well-Being, led by Mikel LaPorte and Laura Gottlieb of the University of Texas Health Science Center at San Antonio. They collected eye-opening data that helped them make the case to leadership for a mental health awareness campaign. In a Workforce Institute report they cited, employees say that managers have a greater impact on their mental health than their doctors or therapists — roughly the same impact as their spouse!

    In the webinar, LaPorte and Gottlieb discussed how their robust, research-driven suite of content is helping to normalize discussions of mental health on campus. They’re even being asked to present their well-being trainings at meetings, a sign that their push for mental health awareness is resonating organically.

    A One-Stop Shop for Mental Health

    The awareness campaign centers on their wellness website, which acts as a one-stop shop for campus mental health. (Right now, the site is internal-facing only, but the recorded webinar has rich details and example slides.) There, they organize their podcast episodes, articles and curated content, as well as marshal all the mental health resources currently available to staff, students and faculty.

    They’ve also found a way to make this initiative sustainable for HR in the long term by recruiting faculty subject matter experts to write on topics such as compassion fatigue. These experts are then interviewed on their quarterly podcast, Well-Being Wisdom. Tapping into faculty experts also ensures rigor in their sources, a significant step in getting buy-in from a population who requires well-vetted wellness practices.

    Getting Organic Engagement Starts With Leaders  

    LaPorte and Gottlieb have faced the typical challenge when rolling out a new campaign: engagement. Email fatigue means that sending messages through this channel isn’t always effective. But they’ve started to look at ways of increasing engagement through different communication channels, often in person.

    Direct outreach to team leaders is key. They regularly attend leadership meetings and ask different schools and departments to invite them in for facilitated mental health activities. (In the webinar, you can practice one of these, a brief guided meditation.) They’ve developed a leader guide and toolkit, including turnkey slides leaders can insert into decks to open or close discussions. Leaders are supplied with “can opener” discussion items, such as

    • “I made a difference yesterday when I…”
    • “Compassion is hardest when…”
    • “I show up every day because…”

Not only does this provide opportunities to normalize conversations around mental health, but it also strengthens relationship-building — a key metric in workplace well-being. As CUPA-HR has found, job satisfaction and well-being are the strongest predictors of retention by far for higher ed employees.

    Campus leaders are now reaching out to the learning and leadership development team to request mental health activities at meetings. Some of the workshops offered include living in the age of distraction, mindful breathing techniques, and the science of happiness. For more details on UT Health San Antonio’s well-being offerings, including ways they’re revamping their program this fiscal year (think: less is more), view the recorded webinar here.



    Source link

  • Toward a Sector-Wide AI Tutor R&D Program –

    Toward a Sector-Wide AI Tutor R&D Program –

EdTech seems to go through perpetual cycles of infatuation and disappointment with some new version of a personalized one-on-one tutor available to every learner everywhere. The recent strides in generative AI give me hope that the goal may finally be within reach this time. That said, I see the same sloppiness that marred so many earlier EdTech infatuation moments. The concrete is being poured on educational applications built atop a very powerful yet inherently unpredictable technology. If we get it wrong now, we will build on a faulty foundation.

    I’ve seen this happen countless times before, both with individual applications and with entire application categories. For example, one reason we don’t get a lot of good data from publisher courseware and homework platforms is that many of them were simply not designed with learning analytics in mind. As hard as that is to believe, the last question we seem to ask when building a new EdTech application is “How will we know if it works?” Having failed to consider that question when building the early versions of their applications, publishers have had a difficult time solving for it later.

    In this post, I propose a programmatic, sector-wide approach to the challenge of building a solid foundation for AI tutors, balancing needs for speed, scalability, and safety.

    The temptation

    Before we get to the details, it’s worth considering why the idea of an AI tutor can be so alluring. I have always believed that education is primal. It’s hard-wired into humans. Not just learning but teaching. Our species should have been called homo docens. In a recent keynote on AI and durable skills, I argued that our tendency to teach and learn from each other through communications and transportation technologies formed the engine of human civilization’s advancement. That’s why so many of us have a memory of a great teacher who had a huge impact on our lives. It’s why the best longitudinal study we have, conducted by Gallup and Purdue University, provides empirical evidence that having one college professor who made us excited about learning can improve our lives across a wide range of outcomes, from economic prosperity to physical and mental health to our social lives. And it’s probably why the Khans’ video gives me chills:

    Check your own emotions right now. Did you have a visceral reaction to the video? I did.

Unfortunately, one small demonstration does not prove we have reached the goal. The Khanmigo AI tutor pilot has uncovered a number of problems, including factual errors like incorrect math and flawed tutoring. (Kudos to Khan Academy for being open about their state of progress, by the way.)

    We have not yet achieved that magical robot tutor. How do we get there? And how will we know that we’ve arrived?

    Start with data scientists, but don’t stop there

As I read some of the early literature, I see an all-too-familiar pattern: technologists build the platforms, data scientists decide which data are important to capture, and learning designers and researchers are consulted only afterward. All too often, the research design clearly originates from a technologist’s perspective, showing relatively little knowledge of detailed learning science methods or findings. A good example of this mindset’s strengths and weaknesses is Google’s recent paper, “Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach”. It reads like a paper largely conceived by technologists who work on improving generative AI and sharpened up by educational research specialists who were consulted after the research project was already largely defined.

    The paper proposes evaluation rubrics for five dimensions of generative AI tutors:

    • Clarity and Accuracy of Responses: This dimension evaluates how well the AI tutor delivers clear, correct, and understandable responses. The focus is on ensuring that the information provided by the AI is accurate and easy for students to comprehend. High clarity and accuracy are critical for effective learning and avoiding the spread of misinformation.
    • Contextual Relevance and Adaptivity: This dimension assesses the AI’s ability to provide responses that are contextually appropriate and adapt to the specific needs of each student. It includes the AI’s capability to tailor its guidance based on the student’s current understanding and the specific learning context. Adaptive learning helps in personalizing the educational experience, making it more relevant and engaging for each learner.
    • Engagement and Motivation: This dimension measures how effectively the AI tutor can engage and motivate students. It looks at the AI’s ability to maintain students’ interest and encourage their participation in the learning process. Engaging and motivating students is essential for sustained learning and for fostering a positive educational environment.
    • Error Handling and Feedback Quality: This dimension evaluates how well the AI handles errors and provides feedback. It examines the AI’s ability to recognize when a student makes a mistake and to offer constructive feedback that helps the student understand and learn from their errors. High-quality error handling and feedback are crucial for effective learning, as they guide students towards the correct understanding and improvement.
    • Ethical Considerations and Bias Mitigation: This dimension focuses on the ethical implications of using AI in education and the measures taken to mitigate bias. It includes evaluating how the AI handles sensitive topics, ensures fairness, and respects student privacy. Addressing ethical considerations and mitigating bias are vital to ensure that the AI supports equitable learning opportunities for all students.

Of these, the paper provides clear rubrics for the first four and is a little less concrete on the fifth. Notice, though, that most of these are similar to the dimensions generative AI companies use to evaluate their products generically. That’s not bad. On the contrary, establishing standardized, education-specific rubrics with high inter-rater reliability across these five dimensions is the first component of the programmatic, sector-wide approach to AI tutors that we need. Notice also that these are all qualitative assessments. That’s fine as far as it goes, but for some dimensions, such as error handling in the form of hints and feedback, we do have quantitative data available (which I’ll delve into momentarily).
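To make the idea of standardized rubrics with high inter-rater reliability concrete, here is a minimal sketch of what shared rubric scoring and an agreement check might look like. The dimension keys, the 1-to-5 scale, and Cohen’s kappa as the agreement statistic are assumptions chosen for illustration; the Google paper defines its own rubrics and does not prescribe any implementation.

```python
# Hypothetical sketch: recording rubric scores for AI tutor responses and
# checking inter-rater agreement with Cohen's kappa. Dimension names and the
# 1-5 scale are illustrative assumptions, not the paper's specification.
from collections import Counter
from dataclasses import dataclass

DIMENSIONS = (
    "clarity_and_accuracy",
    "contextual_relevance_and_adaptivity",
    "engagement_and_motivation",
    "error_handling_and_feedback_quality",
    "ethics_and_bias_mitigation",
)

@dataclass(frozen=True)
class RubricScore:
    response_id: str   # which tutor response was rated
    rater_id: str      # which human rater produced the score
    dimension: str     # one of DIMENSIONS
    score: int         # 1 (poor) to 5 (excellent), an assumed scale

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Agreement between two raters scoring the same responses, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b)
    )
    return 1.0 if expected == 1.0 else (observed - expected) / (1 - expected)

# Example: two raters score the same five responses on one dimension.
print(cohens_kappa([5, 4, 4, 2, 3], [5, 4, 3, 2, 3]))  # ~0.74: substantial agreement
```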

    That said, the paper lacks many critical research components, particularly regarding the LearnLM-Tutor software the researchers were testing. Let’s start with the authors not providing outcomes data anywhere in the 50-page paper. Did LearnLM-Tutor improve student outcomes? Make them worse? Have no effect? Work better in some contexts than others? We don’t know.

    We also don’t know how LearnLM-Tutor incorporates learning science. For example, on the question of cognitive load, the authors write,

    We designed LearnLM Tutor to manage cognitive load by breaking down complex tasks into smaller, manageable components and providing scaffolded support through hints and feedback. The goal is to maintain an optimal balance between intrinsic, extraneous, and germane cognitive load.

Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach

How, specifically, did they do this? What measures did they take? What relevant behaviors were they able to elicit from their LLM-based tutor? How are those behaviors grounded in specific research findings about cognitive load? How closely do they reproduce the principles that produced the research findings they’re drawing from? And did it work?

    We don’t know.

    The authors are also vague about Intelligent Tutoring Systems (ITS) research. They write,

    Systematic reviews and meta-analyses have shown that intelligent tutoring systems (ITS) can significantly improve student learning outcomes. For example, Kulik and Fletcher’s meta-analytic review demonstrates that ITS can lead to substantial improvements in learning compared to traditional instructional methods.

Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach

    That body of research was conducted over a relatively small number of ITS implementations because a relatively small number of these systems exist and have published research behind them. Further, the research often cites specific characteristics of these tutoring systems that lead to positive outcomes, with supporting data. Which of these characteristics does LearnLM Tutor support? Why do we have reason to believe that Google’s system will achieve the same results?

    We don’t know.

    I’m being a little unfair to the authors by critiquing the paper for what it isn’t about. Its qualitative, AI-aligned assessments are real contributions. They are necessary for a programmatic, sector-wide approach to AI tutor development. They simply are not sufficient.

    ITS data sets for fine-tuning

    ITS research is a good place to start if we’re looking to anchor our AI tutor improvement and testing program in solid research with data sets and experimental protocols that we can re-use and adapt. The first step is to explore how we can utilize the existing body of work to improve AI tutors today. The end goal is to develop standards for integrating the ongoing ITS research (and other data-backed research streams) into continuous improvement of AI tutors.

    One key short-term opportunity is hints and feedback. If, for the moment, we stick with the notion of a “tutor” as software engaging in adaptive, turn-based coaching of students on solving homework problems, then hints and feedback are core to the tutor’s function. ITS research has produced high-quality, publicly available data sets with good findings on these elements. The sector should construct, test, and refine an LLM fine-tuning data set on hints and feedback. This work must include developing standards for data preprocessing, quality assurance, and ethical use. These are non-trivial but achievable goals.
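As a sketch of what such a fine-tuning data set could look like, here is one way to map an ITS hint record onto the chat-style JSONL layout that several LLM fine-tuning APIs accept. The record fields and the system prompt are assumptions for illustration; real preprocessing would also need the de-identification and rights checks discussed above.

```python
# Hypothetical sketch: converting ITS hint/feedback records into chat-style
# JSONL examples for LLM fine-tuning. Field names and the system prompt are
# illustrative; student identifiers are assumed to be removed upstream.
import json
from dataclasses import dataclass

@dataclass
class HintRecord:
    problem_step: str        # the problem or step the student is working on
    student_attempt: str     # what the student entered
    tutor_hint: str          # the hint or feedback the ITS delivered
    pedagogical_intent: str  # annotation, e.g. "surface sign-error misconception"

def to_chat_example(rec: HintRecord) -> dict:
    """One training example: context in, the ITS's vetted hint out."""
    return {
        "messages": [
            {
                "role": "system",
                "content": "You are a tutor. Respond with a hint that moves the "
                           f"student forward without giving the answer. Intent: {rec.pedagogical_intent}",
            },
            {
                "role": "user",
                "content": f"Problem: {rec.problem_step}\nStudent attempt: {rec.student_attempt}",
            },
            {"role": "assistant", "content": rec.tutor_hint},
        ]
    }

def write_jsonl(records: list[HintRecord], path: str = "hints_feedback.jsonl") -> None:
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(to_chat_example(rec), ensure_ascii=False) + "\n")
```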

    The hints and feedback work could form a beachhead. It would help us identify gaps in existing research, challenges in using ITS data this way, and the effectiveness of fine-tuning. For example, I’d be interested in seeing whether the experimental designs used in hints and feedback ITS research papers could be replicated with an LLM that has been fine-tuned using the research data. In the process, we want to adopt and standardize protocols for preserving student privacy, protecting author rights, and other concerns that are generally taken into account in high-quality IRB-approved studies. These practices should be baked into the technology itself when possible and supported by evaluation rubrics when it is not.

    While this foundational work is being undertaken, the ITS research community could review its other findings and data sets to see which additional research data sets could be harnessed to improve LLM tutors and develop a research agenda that strengthens the bridge being built between that research and LLM tutoring.

The larger limitations of this approach will likely spring from the uneven and relatively sparse coverage of course subjects, designs, and student populations. We can learn a lot about developing a strategy for using these sorts of data from ITS research. But to achieve the breadth and depth of data required, we’ll need to augment this body of work with another approach that can scale quickly.

    Expanding data sets through interoperability

Hints and feedback are a prime example of a missed opportunity. Virtually all LMSs, courseware, and homework platforms support feedback. Many also support hints. Combined, these systems represent a massive opportunity to gather data about the usage and effectiveness of hints and feedback across a wide range of subjects and contexts. We already know how the relevant data need to be represented for research purposes because we have examples from ITS implementations. Note that these data include both design elements—like the assessment question, the hints, the feedback, and annotations about the pedagogical intent—and student performance when students use the hints and feedback. So if, for example, we were looking at 1EdTech standards, we would need to expand both Common Cartridge and Caliper standards to incorporate these elements.
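To illustrate the two halves described above (design-time elements that travel with the content, Common Cartridge-style, and run-time telemetry about how students use them, Caliper-style), here is a hypothetical sketch. None of these field names belong to any current 1EdTech specification; they are placeholders for the kind of structure a standards extension would have to define.

```python
# Hypothetical sketch of the two data shapes a standards extension would need:
# design-time hint/feedback elements and run-time usage telemetry. Field names
# are illustrative placeholders, not part of Common Cartridge or Caliper today.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HintSpec:                       # travels with the content at export time
    order: int                        # hint 1, hint 2, ...
    text: str
    pedagogical_intent: Optional[str] = None  # author annotation, if provided

@dataclass
class AssessmentItemSpec:
    item_id: str
    stem: str
    hints: list[HintSpec] = field(default_factory=list)
    feedback_correct: str = ""
    feedback_incorrect: str = ""

@dataclass
class HintUsedEvent:                  # emitted by the LMS or courseware at run time
    item_id: str                      # links telemetry back to the design element
    learner_id: str                   # pseudonymized before any export
    hint_order: int
    event_time: str                   # ISO 8601 timestamp
    correct_on_next_attempt: Optional[bool] = None  # crude effectiveness signal
```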

This approach offers several benefits. First, we would gain access to massive cross-platform data sets that could be used to fine-tune AI models. Second, these standards would enable scaled platforms like LMSs to support proven methods for testing the quality of hints and feedback elements. Doing so would benefit students using today’s platforms while enabling improvement of the training data sets for AI tutors. The data would be extremely messy, especially at first. But the interoperability would enable a virtuous cycle of continuous improvement.

The influence of interoperability standards on shaping EdTech is often underestimated and misunderstood. 1EdTech was first created when publishers realized they needed a way to get their content into new teaching systems that were then called Instructional Management Systems (IMS). Common Cartridge was the first standard created by the organization now known as 1EdTech. Later, Common Cartridge export made migration from one LMS to another much more feasible, thus helping to break the product category out of what was then a virtual monopoly. And I would guess that perhaps 30% or more of the start-ups at the annual ASU+GSV conference would not exist if they could not integrate with the LMS via the Learning Tools Interoperability (LTI) standard. Interoperability is a vector for accelerating change. Creating interoperability around hints and feedback—including both the importing of them into learning systems and passing student performance impact data—could accelerate the adoption of effective interactive tutoring responses, whether they are delivered by AI or more traditional means.

    Again, hints and feedback are the beachhead, not the end game. Ultimately, we want to capture high-quality training data across a broad range of contexts on the full spectrum of pedagogical approaches.

    Capturing learning design

    If we widen the view beyond the narrow goal of good turn-taking tutorial responses, we really want our AI to understand the full scope of pedagogical intent and which pedagogical moves have the desired effect (to the degree the latter is measurable). Another simple example of a construct we often want to capture in relation to the full design is the learning objective. ChatGPT has a reasonably good native understanding of learning objectives, how to craft them, and how they relate to gross elements of a learning design like assessments. It could improve significantly if it were trained on annotated data. Further, developing annotations for a broad spectrum of course design elements could improve its tutoring output substantially. For example, well-designed incorrect answers to questions (or “distractors”) often test for misconceptions regarding a learning objective. If distractors in a training set were specifically tagged as such, the AI could better learn to identify and probe for misconceptions. This is a subtle and difficult skill even for human experts but it is also a critical capability for a tutor (whether human or otherwise).
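A minimal sketch of the kind of annotation described here, with learning objectives linked to items and distractors tagged with the misconception they probe, might look like the following. The structure and field names are mine, for illustration only.

```python
# Hypothetical sketch: tagging distractors with the misconception they test and
# linking the item to a learning objective, so the annotations can flow into a
# training set. Structure and names are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Distractor:
    text: str
    misconception: Optional[str] = None  # e.g. "treats correlation as causation"

@dataclass
class MultipleChoiceItem:
    stem: str
    correct_answer: str
    learning_objective: str                       # the objective this item assesses
    distractors: list[Distractor] = field(default_factory=list)

item = MultipleChoiceItem(
    stem="A study finds ice cream sales and drowning deaths rise together. What follows?",
    correct_answer="The two variables are associated; no causal claim is supported.",
    learning_objective="Distinguish correlation from causation.",
    distractors=[
        Distractor("Ice cream consumption causes drowning.",
                   misconception="treats correlation as causation"),
        Distractor("The finding must be a coincidence.",
                   misconception="assumes association implies error"),
    ],
)
```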

    This is one of several reasons why I believe focusing effort on developing AI learning design assistants supporting current-generation learning platforms is advantageous. We can capture a rich array of learning design moves at design time. Some of these we already know how to capture through decades of ITS design. Others are almost completely dark. We have very little data on design intent and even less on the impact of specific design elements on achieving the intended learning goals. I’m in the very early stages of exploring this problem now. Despite having decades of experience in the field, I am astonished at the variability in learning design approaches, much of which is motivated and little of which is tested (or even known within individual institutions).

    On the other side, at-scale platforms like LMSs have implemented many features in common that are not captured in today’s interoperability standards. For example, every LMS I know of implements learning objectives and has some means of linking them to activities. Implementation details may vary. But we are nowhere close to capturing even the least-common-denominator functionality. Importantly, many of these functions are not widely used because of the labor involved. While LMSs can link learning objectives to learning activities, many course builders don’t do it. If an AI could help capture these learning design relationships, and if it could export content to a learning platform in a standard format that preserves those elements, we would have the foundations for more useful learning analytics, including learning design efficacy analytics. Those analytics, in turn, could drive improvement of the course designs, creating a virtuous cycle. These data could then be exported for model training (with proper privacy controls and author permissions, of course). Meanwhile, less common features such as flagging a distractor as testing for a misconception could be included as optional elements, creating positive pressure to improve both the quality of the learning experiences delivered in current-generation systems and the quality of the data sets for training AI.

    Working at design time also puts a human in the loop. Let’s say our workflow follows these steps:

    1. The AI is prompted to conduct turn-taking design interviews of human experts, following a protocol intended to capture all the important design elements.
    2. The AI generates a draft of the learning design. Behind the scenes, the design elements are both shaped by and associated with the metadata schemas from the interoperability standards.
    3. The human experts edit the design. These edits are captured, along with annotations regarding the reasons for the edits. (Think Word or Google Docs with comments.) This becomes one data set that can be used to further fine-tune the model, either generally or for specific populations and contexts.
    4. The designs are exported using the interoperability standards into production learning platforms. The complementary learning efficacy analytics standards provide telemetry on the student behavior and performance within a given design. This becomes another data set that could potentially be used for improving the model.
    5. The human learning designers improve the course designs based on the standards-enabled telemetry. They test the revised course designs for efficacy. This becomes yet another potential data set. Given this final set in the chain, we can look at designer input into the model, the model’s output, the changes human designers made, and improved iterations of the original design—all either aggregated across populations and contexts or focused on a specific population and context.

This can be accomplished using the learning platforms that exist today, at scale. Humans would always supervise and revise the content before it reaches the students, and humans would decide which data they would share under what conditions for the purposes of model tuning. The use of the data and the pace of movement toward student-facing AI become policy-driven decisions rather than technology-driven ones. At each of the steps above, humans make decisions. The process allows for control of, and visibility into, the many ethical challenges involved in integrating AI into education. Among other things, this workflow creates a policy laboratory.

    This approach doesn’t rule out simultaneously testing and using student-facing AI immediately. Again, that becomes a question of policy.

    Conclusion

My intention here has been to outline a suite of “shovel-ready” initiatives that could be implemented relatively quickly at scale. The list is not comprehensive, nor does it even attempt to touch the rich range of critical research projects that are more investigational. On the contrary, the approach I outline here should open up a lot of new territory for both research and implementation while ensuring that the concrete already being poured results in a safe, reliable, science- and policy-driven foundation.

    We can’t just sit by and let AI happen to us and our students. Nor can we let technologists and corporations become the primary drivers of the direction we take. While I’ve seen many policy white papers and AI ethics rubrics being produced, our approach to understanding the potential and mitigating the risks of EdTech AI in general and EdTech tutors in particular is moving at a snail’s pace relative to product development and implementation. We have to implement a broad, coordinated response.

    Now.

    Source link

  • UT Dallas’s BRIGHT Leaders Program: An All-Access Approach to Leadership Training and Career Development

    UT Dallas’s BRIGHT Leaders Program: An All-Access Approach to Leadership Training and Career Development

    In 2020, the human resources team at the University of Texas at Dallas was set to launch its leadership and professional development program, the culmination of 18 months of dedicated work. As the pandemic took hold, the question confronting Colleen Dutton, chief human resources officer, and her team was, “Now what do we do?” In their recent webinar for CUPA-HR, Dutton and Jillian McNally, a talent development specialist, explained how their COVID-19 pivot was a blessing in disguise, helping them completely reconstruct leadership training from the ground up.

The resulting reimagined program — BRIGHT Leaders — received a 2023 CUPA-HR Innovation Award for groundbreaking thinking in higher ed HR. BRIGHT Leaders speaks to the needs of today’s employees, who want flexible professional development programs, and it encourages everyone on campus to lead from where they are.

    An All-Access Pass for Career Development

    UTD innovated by first addressing the needs of remote and hybrid employees. Recognizing that “our workforce was never going to be the same after COVID,” Dutton says, they transformed their original plan from an in-person, cohort model into an accessible, inclusive training program they call an “all-access pass.” Any employee can take any leadership training session at any time. No matter their position or leadership level, all staff and faculty (and even students) are welcome to attend, and there’s no selective process that limits participation.

    Their new, all-access approach inspired a mantra within HR: “Organizations that treat every employee as a leader create the best leaders and the best cultures.” This open-access philosophy means that parking attendants and vice presidents might be in the same leadership development session. Employees attend trainings on their own schedules, whether on their smart phones or at their home office. UTD also offers three self-paced pathways — Foundations, Leadership and Supervisor Essentials, and Administrative Support Essentials — that employees can complete to earn a digital badge. They’re also encouraged to leverage this training when applying to open positions on campus.

    Some of the Microsoft Teams-based programs UTD established in their first year include: Lessons from Leaders series, BRIGHT Leaders Book Club and Teaching Leadership Compassion (TLC). They also partner with e-learning companies to supplement their internal training materials.

    Dutton and McNally note that sessions don’t always have to be conducted by HR. Campus partners are encouraged to lead trainings that fall within the BRIGHT framework: Bold, Responsible, Inclusive, Growing, High Performing and Transformative. For example, an upcoming book club will be led by a team consisting of the dean of engineering and the athletic director.

    Making UTD an Employer of Choice

    In line with UTD’s commitment to workplace culture, the BRIGHT Leaders program speaks to the needs of a changing workforce. Early-career professionals don’t want to wait five years to be eligible for leadership training, Dutton stresses. “They want access to these leadership opportunities and trainings now.”

    UTD’s flexible professional development training approach helps confront a concerning trend: almost half of higher ed employees (44%) surveyed in The CUPA-HR 2023 Higher Education Employee Retention Survey disagree that they have opportunities for advancement, and one-third (34%) do not believe that their institution invests in their career development. Offering robust, flexible professional development and leadership opportunities is part of UTD’s commitment to be an employer of choice in North Texas.

    For more specifics on the BRIGHT Leaders program, view the recorded webinar. You’ll learn how HR built cross-campus partnerships, how they’ve measured their return on investment and how they’re building on their successes to train future leaders.


    Source link

  • Proposed Changes to the H-1B Visa Program – CUPA-HR

    Proposed Changes to the H-1B Visa Program – CUPA-HR

    by CUPA-HR | November 9, 2023

    On October 23, 2023, U.S. Citizenship and Immigration Services (USCIS) issued a proposed rule that aims to improve the H-1B program by simplifying the application process, increasing the program’s efficiency, offering more advantages and flexibilities to both petitioners and beneficiaries, and strengthening the program’s integrity measures.

    Background

    The H-1B visa program is pivotal for many sectors, particularly higher education. It permits U.S. employers to employ foreign professionals in specialty occupations requiring specialized knowledge and a bachelor’s degree or higher or its equivalent. The program is subject to an annual limit of 65,000 visas, with an additional allocation of 20,000 visas reserved for foreign nationals who have earned a U.S. master’s degree or higher. Certain workers are exempt from this cap, including those at higher education institutions or affiliated nonprofit entities and nonprofit or governmental research organizations.

    Highlights of the Proposed Rule

    Prompted by challenges with the H-1B visa lottery, USCIS has prioritized a proposed rule to address the system’s integrity. The move comes after a surge in demand for H-1B visas led to the adoption of a lottery for fair distribution. However, with the fiscal year 2024 seeing a historic 758,994 registrations and over half of the candidates being entered multiple times, there was concern over potential exploitation to skew selection chances. This proposed rule is a direct response to strengthen the registration process and prevent fraud.

    Beyond addressing lottery concerns, the proposal makes critical revisions to underlying H-1B regulations. It seeks to formalize policies currently in place through guidance and tweak specific regulatory aspects.

    Amending the Definition of a “Specialty Occupation.” At present, a “specialty occupation” is identified as a job that requires unique, specialized knowledge in fields like engineering, medicine, education, business specialties, the arts, etc., and it typically mandates a bachelor’s degree or higher in a specific area or its equivalent. USCIS is proposing to refine the definition of a “specialty occupation” to ensure that the required degree for such positions is directly related to the job duties. The proposal specifies that general degrees without specialized knowledge do not meet the criteria, and petitioners must prove the connection between the degree field(s) and the occupation’s duties. The rule would allow for different specific degrees to qualify for a position if each degree directly relates to the occupation’s responsibilities. For example, a bachelor’s degree in either education or chemistry could be suitable for a chemistry teacher’s position if both are relevant to the job. The changes emphasize that the mere possibility of qualifying for a position with an unrelated degree is insufficient, and specific degrees must impart highly specialized knowledge pertinent to the role.

    Amending the Criteria for Specialty Occupation Positions. USCIS is proposing updates to the criteria defining a “specialty occupation” under the Immigration and Nationality Act. This proposal includes a clarification of the term “normally,” which, in the context of a specialty occupation, indicates that a bachelor’s degree is typically, but not always, necessary for the profession. USCIS is aiming to standardize this term to reflect a type, standard, or regular pattern, reinforcing that the term “normally” does not equate to “always.”

    Extending F-1 Cap-Gap Protection. USCIS is proposing to revise the Cap-Gap provisions, which currently extend employment authorization for F-1 students awaiting H-1B visa approval until October 1 of the fiscal year for which H–1B visa classification has been requested. The Cap-Gap refers to the period between the end of an F-1 student’s Optional Practical Training (OPT) and the start of their H-1B status, which can lead to a gap in lawful status or employment authorization. The new proposal seeks to extend this period until April 1 of the fiscal year for which the H-1B visa is filed, or until the visa is approved, to better address processing delays and reduce the risk of employment authorization interruption. To be eligible, the H-1B petition must be legitimate and filed on time. This change is intended to support the U.S. in attracting and maintaining skilled international workers by providing a more reliable transition from student to professional status.

    Cap-Exempt Organizations. USCIS is redefining which employers are exempt from the H-1B visa cap. The proposed changes involve revising the definition of “nonprofit research organization” and “governmental research organization” from being “primarily engaged” in research to conducting research as a “fundamental activity.” This proposed change would enable organizations that might not focus primarily on research, but still fundamentally engage in such activities, to qualify for the exemption. Additionally, USCIS aims to accommodate beneficiaries not directly employed by a qualifying organization but who still perform essential, mission-critical work.

    Deference. USCIS is proposing to codify a policy of deference to prior adjudications of Form I-129 petitions, as delineated in the USCIS Policy Manual, mandating that officers give precedence to earlier decisions when the same parties and material facts recur. This proposal, however, includes stipulations that such deference is not required if there were material errors in the initial approval, if substantial changes in circumstances or eligibility have occurred, or if new and pertinent information emerges that could negatively influence the eligibility assessment.

    Next Steps

    While this summary captures key elements of the proposed changes, our members should be aware that the rule contains other important provisions that warrant careful review. These additional provisions could also significantly impact the H-1B visa program and its beneficiaries, and it is crucial for all interested parties to examine the proposed rule in its entirety to understand its full implications.

    USCIS is accepting public comment on its proposal through December 22, 2023. CUPA-HR is evaluating the proposed revisions and will be working with other higher education associations to submit comprehensive comments for the agency’s consideration. As USCIS moves towards finalizing the proposals within this rulemaking, potentially through one or more final rules depending on the availability of agency resources, CUPA-HR will keep its members informed of all significant updates and outcomes.



    Source link

  • DHS Announces Proposed Pilot Program for Non-E-Verify Employers to Use Remote I-9 Document Examination – CUPA-HR

    DHS Announces Proposed Pilot Program for Non-E-Verify Employers to Use Remote I-9 Document Examination – CUPA-HR

    by CUPA-HR | August 9, 2023

    On August 3, 2023, the Department of Homeland Security (DHS) published a notice in the Federal Register seeking comments on a potential pilot program to allow employers not enrolled in E-Verify to harness remote examination procedures for the Form I-9, Employment Eligibility Verification.

    Background

DHS’s recent actions build on a series of moves aimed at modernizing the employment verification process and making it more flexible. On July 25, 2023, DHS rolled out a final rule enabling the Secretary of Homeland Security to authorize optional alternative examination practices for employers when inspecting an individual’s identity and employment authorization documents, as mandated by the Form I-9. The rule creates a framework under which DHS may implement permanent flexibilities under specified conditions, start pilot procedures with respect to the examination of documents, or react to crises similar to the COVID-19 pandemic.

    Alongside the final rule, DHS published a notice in the Federal Register authorizing a remote document examination procedure for employers who are participants in good standing in E-Verify and announced it would be disclosing details in the near future about a pilot program to a broader category of businesses.

    Key Highlights of the Proposed Non-E-Verify Remote Document Examination Pilot 

    DHS’s proposal primarily revolves around the following points:

• Purpose: Immigration and Customs Enforcement (ICE) intends to gauge the security impact of remote verification compared to traditional in-person examination of the Form I-9. This involves evaluating potential consequences like error rates, fraud and discriminatory practices.
    • Pilot Procedure: The new pilot program would mirror the already authorized alternative method for E-Verify employers, including aspects such as remote document inspection, document retention and anti-discrimination measures.
    • Eligibility: The pilot program is open to most employers unless they have more than 500 employees. However, E-Verify employers are excluded since DHS has already greenlit an alternative for them.
    • Application Process: Interested employers must fill out the draft application form, which DHS has made available online. This form captures details like company information, terms of participation, participant obligations, and more.
    • Information Collection: Employers wishing to join the pilot would be required to complete the formal application linked above. ICE would periodically seek data from these employers, such as the number of new hires or how many employees asked for a physical inspection.
    • Documentation: Participating companies must electronically store clear copies of all supporting documents provided by individuals for the Form I-9. They might also be required to undertake mandatory trainings for detecting fraudulent documents and preventing discrimination.
    • Onsite/Hybrid Employees: Companies might face restrictions or a set timeframe for onsite or hybrid employees, dictating when they must physically check the Form I-9 after the initial remote assessment.
    • Audits and Investigations: All employers, including pilot participants, are liable for audits and evaluations. DHS plans to contrast data from these assessments to discern any systemic differences between the new method and the traditional one.

    What’s Next: Seeking Public Comments by October 2 

    DHS is actively seeking feedback from the public regarding the proposed pilot and the draft application form. The department encourages stakeholders to consider and provide insights on the following points:

    • Practical Utility: Assess if the proposed information requirement is vital for the agency’s proper functioning and whether the data collected will be practically useful.
    • Accuracy and Validity: Analyze the agency’s estimation of the information collection’s burden, ensuring the methods and assumptions are valid.
    • Enhance Information Quality: Offer suggestions to improve the clarity, utility and overall quality of the data collected.
    • Minimize Collection Burden: Propose ways to ease the data collection process for respondents, exploring technological solutions such as electronic submissions.

    In light of this, CUPA-HR plans to carefully evaluate the notice and associated application. Based on its review, CUPA-HR is considering submitting comments to provide valuable insights to DHS. CUPA-HR will keep members apprised of any updates regarding this proposed pilot program and other changes to Form I-9 alternative examination procedures.



    Source link

  • ALP 2023: Another Successful Association Leadership Program Is in the Books – CUPA-HR

    ALP 2023: Another Successful Association Leadership Program Is in the Books – CUPA-HR

    by CUPA-HR | July 26, 2023

    This blog post was contributed by Jennifer Addleman, member of CUPA-HR’s Southern Region board of directors and HR director at Rollins College.

    And that’s a wrap on CUPA-HR’s 2023 Association Leadership Program (ALP) in Omaha, Nebraska! On July 13-14, leaders from CUPA-HR’s national, regional and chapter boards, as well as CUPA-HR’s corporate partners, gathered to discuss higher ed HR challenges, share successes, make connections and build relationships. I was fortunate to attend as a representative from the Southern Region board, and my mind is still reeling from two full days of content and networking with talented HR leaders from across the country. Here are some of my takeaways:

    • Lead with positivity, start with a win, and end with gratitude.
    • So much is happening on the regulatory and legislative front that will affect higher ed and the labor and employment landscape, and CUPA-HR is serving as the voice of higher ed on these issues with lawmakers.
    • The CUPA-HR Knowledge Center continues to be a go-to resource for all things higher ed HR. In addition to HR toolkits that are constantly being updated or added, you’ll also find DEI resources, e-learning courses, a job description index, CUPA-HR’s Higher Ed HR Magazine and more. If you haven’t checked out the Knowledge Center lately, I encourage you to do so!
    • We in higher ed HR are doing important work — what we do matters, and we are impacting lives.
    • CUPA-HR continues to do valuable work in data collection and research — our data is the platinum standard! Learn more about CUPA-HR’s research in the Research Center (find the link in the menu on the CUPA-HR home page).
    • We must continue to make mental health a priority. As HR practitioners, we often prioritize taking care of others, but we should not be ashamed to take care of ourselves first! Find resources in the Mental Health and Health and Well-Being Knowledge Center toolkits.
    • You can walk to Iowa from Omaha! Who knew!

    Sharing some quality time with higher ed HR peers from across the country, commiserating about and discussing strategies to overcome our biggest challenges, and meeting new people and making new connections is what CUPA-HR’s Association Leadership Program is all about. If you’re considering exploring volunteer leadership opportunities within the association, do it! You won’t regret it — in fact, you’re guaranteed to learn and grow, and have a great time doing it!



    Source link

  • Three Elements of a Successful Onboarding Program – CUPA-HR

    Three Elements of a Successful Onboarding Program – CUPA-HR

    by CUPA-HR | September 14, 2022

    Onboarding programs consisting of a brief history lesson about the institution and instructions for how to get a parking pass aren’t likely to inspire new hires. Here are three elements of onboarding programs that go beyond the basics to create a deeper understanding of campus culture and a sense of belonging.

    Orient New Hires to Higher Education

    Learning industry-specific skills and knowledge is essential for employees to thrive in their workplaces. Higher education is no different. New hires must quickly get up to speed on how their departments function within the context of their institution and its mission. This can be overwhelming for anyone, especially someone new to higher education.

    To address this learning curve, CUPA-HR created Understanding Higher Ed Course 1 — An Overview of Higher Education for All Employees. The course is designed to help all higher ed employees understand different types of institutions, terminology, cultural hallmarks of the higher ed work environment, the basics of higher ed funding, and key soft skills that support success in the workplace.

    Create a Sense of Belonging

    A crucial aspect of the workplace that can’t be captured in a new-hire orientation video is the sense of belonging employees experience. And if staff members work remotely, opportunities to connect with coworkers and build community may be even more difficult to achieve.

    To overcome these challenges, the University of Florida’s Academic and Professional Assembly (APA), led by several HR employees, reconsidered their approach to onboarding. Through their Warm Welcome experience they helped create a campus culture that fosters a sense of belonging for new staff. The APA helps spark campus connections by hosting welcome events and small groups where new hires can interact with high-level leaders. During these events, leaders share personal stories and insights about leadership, diversity and inclusion and the value that staff bring in the pursuit of the university’s many goals. This storytelling approach draws out leaders’ personalities, camaraderie, sense of humor and transparency, and allows staff to see the “human” aspect of a large institution. Read more about UF’s Warm Welcome experience to learn how to design a warm welcome experience for your staff.

    Partner With Other Departments

    Onboarding shouldn’t fall solely on HR’s shoulders. Support from many areas of the institution is critical for a successful onboarding program. Additionally, shared responsibility for onboarding can positively affect organizational culture, departmental buy-in and employee retention. Presenters from the University of Colorado Boulder shared their strategic partner model in a 2019 CUPA-HR on-demand webinar “Onboarding: A Strategic Partner Model for Bringing About Cultural Change.” Watch the webinar recording to learn more about UC Boulder’s model to increase employee engagement, retention and productivity while keeping the focus on institutional goals.

    There are many reasons employees are drawn to work at an institution, and a successful onboarding program shows them why they should stay.



    Source link

  • The Wildfire Program Welcomes a New Cohort for 2022-23 – CUPA-HR

    The Wildfire Program Welcomes a New Cohort for 2022-23 – CUPA-HR

    by CUPA-HR | August 3, 2022

    For the higher ed HR community to thrive there must be a pipeline of early-career professionals waiting in the wings, and one way CUPA-HR equips early-career pros to grow in their role and take steps toward their career goals is through the Wildfire program.

    The program, sponsored in part by HigherEdJobs, is a 12-month immersive experience that connects a small, select group of early-career higher ed HR professionals with some of the top leaders in the profession, giving them a variety of learning opportunities.

    Rob Keel, a member of the 2019-20 Wildfire cohort and past president of the CUPA-HR Tennessee Chapter, had this to say about the program: “Wildfire helped open my eyes to the possibilities within higher education HR. The network I gained through my involvement with Wildfire has provided so much support as I navigate my career. If you want to develop relationships that have the power to transform, Wildfire has the power to do just that.”

    As a new year gets underway, we want to congratulate and welcome the Wildfire program participants for 2022-23:

    • TJ Bowie, Equal Opportunity and HR Compliance Manager, Elon University
    • Joy Brownridge, Training and Development Specialist, University of Illinois System
    • Amanda Burshtynsky, Employee Payroll and Insurance Clerk, Genesee Community College
    • Kelleebeth Cantu, HR and Employment Coordinator, Trinity University
    • Audrey Ettesvold, Human Resource Specialist, Idaho State Board of Education
    • Alexis Hanscel, Benefits Manager, Denison University
    • Kathleen Hermacinski, Human Resource Coordinator, Eureka College
    • Anshuma Jain, HR Administrator, Hudson County Community College
    • Jessica Ludwick, Human Resources Consultant, University of North Carolina Wilmington
    • Tracey Pritchard, HR Coordinator, University of Iowa
    • Trevon Smith, HR Generalist, Drake University
    • Christopher Williams, HR Partner, University of Maine System Office

    Interested in joining our 2023-24 cohort? Learn more about the Wildfire program.



    Source link