Tag: Program

  • Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)

    Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)


    This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.

    The plot contours are easy. A PhD student at the University of Minnesota was accused of using AI on a required pre-dissertation exam and removed from the program. He denies that allegation and has sued the school — and one of his professors — for due process violations and defamation respectively.

    Starting the case.

    The coverage reports that:

    all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software. 

Personally, when I see that four members of the faculty unanimously agreed that his work was not authentic, I am out. I trust teachers.

I know what a serious thing it is to accuse someone of cheating; I know teachers do not take such things lightly. When four go on the record to say so, I’m convinced. Barring some personal grievance or prejudice, which could happen, it’s hard for me to believe that all four subject-matter experts were just wrong here. Also, if there was bias or petty politics at play, it probably would have shown up before the student’s third year, not just before starting his dissertation.

    Moreover, at least as far as the coverage is concerned, the student does not allege bias or program politics. His complaint is based on due process and inaccuracy of the underlying accusation.

    Let me also say quickly that asking ChatGPT for answers you plan to compare to suspicious work may be interesting, but it’s far from convincing — in my opinion. ChatGPT makes stuff up. I’m not saying that answer comparison is a waste, I just would not build a case on it. Here, the university didn’t. It may have added to the case, but it was not the case. Adding also that the similarities between the faculty-created answers and the student’s — both are included in the article — are more compelling than I expected.

    Then you add detection software, which the article later shares showed high likelihood of AI text, and the case is pretty tight. Four professors, similar answers, AI detection flags — feels like a heavy case.

    Denied it.

    The article continues that Yang, the student:

    denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect. 

    Although it’s not specified, it is likely that Yang is referring to the research from Stanford that has been — or at least ought to be — entirely discredited (see Issue 216 and Issue 251). For the love of research integrity, the paper has invented citations — sources that go to papers or news coverage that are not at all related to what the paper says they are.

    Does anyone actually read those things?

    Back to Minnesota, Yang says that as a result of the findings against him and being removed from the program, he lost his American study visa. Yang called it “a death penalty.”

    With friends like these.

    Also interesting is that, according to the coverage:

    His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.” 

    That would be a fair point except that the next paragraph is:

    Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable. 

    That’s ridiculous. I’m sorry, it is. The dude who lets students use AI because he thinks AI is “impossible to prevent or detect,” the guy who has never used ChatGPT himself, and thinks that Google Scholar and auto-complete are “comparable” to AI — that’s the person speaking up for the guy who says he did not use AI. Wow.

    That guy says:

    “I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”

    Time out. Is it not at least possible that professor Dowd thinks student Yang is an excellent student because Yang was using AI all along, and our professor doesn’t care to ascertain the difference? Also, mind you, as far as we can learn from this news story, Dowd does not even say Yang is innocent. He says the punishment is “odd,” that the case is hard to establish, and that Yang was a good student who did not need to use AI. Although, again, I’m not sure how good professor Dowd would know.

    As further evidence of Yang’s scholastic ability, Dowd also points out that Yang has a paper under consideration at a top academic journal.

    You know what I am going to say.

    To me, that entire Dowd diversion is mostly funny.

    More evidence.

    Back on track, we get even more detail, such as that the exam in question was:

    an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. 

    Exam graders argued the AI use was obvious enough. Yang disagrees. 

    Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples. 

    So, the exam expressly banned AI. And we learn that, as part of the determination of the professors, they compared his exam answers with past writing.

    I say all the time, there is no substitute for knowing your students. If the initial four faculty who flagged Yang’s work had him in classes and compared suspicious work to past work, what more can we want? It does not get much better than that.

    Then there’s even more evidence:

    Yang also objects to professors using AI detection software to make their case at the November hearing.  

    He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. 

    “Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.  

    First, GPTZero is junk. It’s reliably among the worst available detection systems. Even so, 89% is a high number. And most importantly, the case against Yang is not built on AI detection software alone, as no case should ever be. It’s confirmation, not conviction. Also, Yang, who the paper says already has one PhD, knows exactly what an accuracy rate of 99% means. Be serious.

    A pattern.

    Then we get this, buried in the news coverage:

    Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored in his expulsion.  

    He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. 

    In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.”  She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.

    She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote. 

    I’m sorry — what now?

    Yang says he was accused of using AI in academic work in “at least three other instances.” For which he was, of course, not disciplined. In one of those cases, Yang literally turned in a paper with this:

    “re write it, make it more casual, like a foreign student write but no ai.” 

    He said he used ChatGPT to check his English and asked ChatGPT to shorten his writing. But he did not use AI. How does that work?

    For that one where he left in the prompts to ChatGPT:

    the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations. 

    Yang was warned, in writing.

    If you’re still here, we have four professors who agree that Yang’s exam likely used AI, in violation of exam rules. All four had Yang in classes previously and compared his exam work to past hand-written work. His exam answers had similarities with ChatGPT output. An AI detector said, in at least one place, his exam was 89% likely to be generated with AI. Yang was accused of using AI in academic work at least three other times, by a fifth professor, including one case in which it appears he may have left in his instructions to the AI bot.

    On the other hand, he did say he did not do it.

    Findings, review.

    Further:

    But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.

    Several professors and students agreed with the accusations. Yang appealed and the school upheld the decision. Yang was gone. The appeal officer wrote:

    “PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.” 

    Slow clap.

    And slow clap for the University of Minnesota. The process is hard. Doing the review, examining the evidence, making an accusation — they are all hard. Sticking by it is hard too.

    Seriously, integrity is not a statement. It is action. Integrity is making the hard choice.

    MPR, spare me.

    Minnesota Public Radio is a credible news organization. Which makes it difficult to understand why they chose — as so many news outlets do — to not interview one single expert on academic integrity for a story about academic integrity. It’s downright baffling.

    Worse, MPR, for no specific reason whatsoever, decides to take prolonged shots at AI detection systems such as:

    Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. 

    “As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades. 

    We covered the OpenAI deception — and it was deception — in Issue 241, and in other issues. We covered the non-native English thing. And the neurodivergent thing. And the Grammarly thing. All of which MPR wraps up in the passive and deflecting “reports suggest.” No analysis. No skepticism.

    That’s just bad journalism.

    And, of course — anxiety. Rege, who please note has studied machine learning and not academic integrity, is predictable, but not credible here. He says, for example:

    it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions. 

    Absolute joke.

    I am not sorry — if you use the word “balance” in conjunction with the word “integrity,” you should not be teaching. Especially if what you’re weighing against lying and fraud is the value of embracing innovation. And if you needed further evidence for his absurdity, we get the “personal reflections and project-based learnings” buffoonery (see Issue 323). But, again, the error here is MPR quoting a professor of machine learning about course design and integrity.

    MPR also quotes a student who says:

    she and many other students live in fear of AI detection software.  

    “AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said. 

Nope. Please, please tell me I don’t need to go through all the reasons that’s absurd. Find me one single case in which an AI detector alone sent a student home. One.

    Two final bits.

    The MPR story shares:

    In the 2023-24 school year, the University of Minnesota found 188 students responsible of scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus. 

    Just noteworthy. Also, it is interesting that 188 were “responsible.” Considering how rare it is to be caught, and for formal processes to be initiated and upheld, 188 feels like a real number. Again, good for U of M.

    The MPR article wraps up that Yang:

    found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career

    Stating the obvious, like the University of Minnesota, I could not bring myself to trust Yang’s data. And I do actually hope that being kicked out of a university for cheating would impact his academic career.

    And finally:

    “Probably I should think to do something, selling potatoes on the streets or something else,” he said. 

    Dude has a PhD in economics from Utah State University. Selling potatoes on the streets. Come on.

    Source link

  • 7 Trends to Inform Online Program Expansion in 2025

    7 Trends to Inform Online Program Expansion in 2025

As I reviewed the new IPEDS data release last week, I was looking for the data and intelligence that would be most helpful for online enrollment leaders to have in hand to inform this year’s success. These points, in combination with key trends that became clear in other sources I reviewed late last year, will enable online leaders to succeed this year as well as plan for the future.

    Note that I am not discussing changes that may emerge after January 20, but I will be doing so after a long talk I have scheduled with Cheryl Dowd from WCET who tracks online regulations and with whom I will be co-presenting at the RNL National Conference this summer.

    So, what do you need to know?

    1. Online and partially online enrollment continue to dominate growth.

Four years after the pandemic, more students each year are choosing either fully or partially online study. While every post-pandemic year has seen some “return to the classroom,” compared with pre-pandemic enrollment (2019), 2.3 million more undergraduates and 450k more graduate students are choosing fully or partially online study. Perhaps more important, 3.2 million fewer undergraduates and 288k fewer graduate students are choosing classroom-only programs. Institutions seeking to grow enrollment must develop processes to quickly determine the best online programs to offer and get them “to market” within 12 months.

    Chart showing the pandemic transformed student preferences as millions of additional students chose online and partially online study

    2. Institutions seeking to grow online enrollment are now competing with non-profit institutions.

As recently as five years ago, your strongest competition came from for-profit institutions. In some ways, these institutions were easy to beat (excepting their huge marketing budgets). They had taken a beating in the press to the extent that students knew about it, and they were far away and unknown. Today, institutions face no less of a competitive environment, but the institutions dominating the scene – and most likely a student’s search results – are national non-profits. These institutions are, of course, not local, so they aren’t well known, but they have not been through the scrutiny that eroded interest in the for-profits. Student search engine results are also now filled with ambitious public and private institutions seeking to “diversify their revenue streams.” As such, institutional marketers need to adjust their strategies to focus on successfully positioning their programs in a crowded market, knowing that they can “win” the student over the national online providers if they ensure that they rise to the top of search results.

Graph showing national non-profits have taken the lead from for-profit institutions.

    3. Online enrollment growth is being led by non-profit institutions.

Seventeen of the 20 institutions reporting the greatest growth in online enrollment over the last five years are nonprofit institutions—a mix of ambitious public and private institutions and national non-profits. What is more, the total growth among the institutions beyond the two behemoths, Southern New Hampshire University and Western Governors University, far exceeds the growth of those two. These nimble and dynamic institutions include a variety of institution types (with community colleges well represented) across the higher education sector. Institutions seeking to grow online enrollment should research what these institutions are offering and how they are positioning their programs in the market and emulate some of their best practices.

Chart showing that the greatest online growth is among non-profit colleges.

    4. New graduate program growth is dominated by online/partially online offerings.

In 2024, a research study by Robert Kelchen documented growth in the number of available master’s programs in the U.S. over the last 15 years. Kelchen documented not only a massive expansion in availability (over the 15-year period, institutions launched nearly 14,000 new master’s programs on a base of about 20,000), but also that the pace of launching online or hybrid programs dramatically outpaces that of classroom programs. This rise in available offerings far outpaces the rate of growth of the online student market, resulting in significantly higher levels of competition for each online student. Institutions seeking to grow their online footprint must ensure that they fully understand both the specific demand dynamics for each of their programs and the specifics of what online students want in their program. A mismatch on either factor will inhibit growth.

Graph showing online/hybrid programs are driving new program development.

    5. Online success is breeding scrutiny of outcomes.

We all know something of the power of social media today. This was reinforced for me recently by an Inside Higher Ed story that focused on the 8-year rates of degree completion among the biggest online providers. The story was triggered by a widely read LinkedIn post and followed up by numerous other stories, posts and comments across the platform. This is just the kind of exposure that is most likely to generate real scrutiny of the outcomes of online learning – scrutiny that was already taking shape over the last year or more. In fact, this focus on outcomes ended up as one of the unfulfilled priorities of the Biden Education Department. I have long said that institutions seeking to enter the online space have an opportunity to tackle some of the quality issues that first plagued the for-profits, now challenge the national online non-profits, and will confront others if not addressed soon.

Images showing online skeptics are raising concerns about completion rates among larger online providers.

    6. Key preferences for online study have been changed by the pandemic.

    RNL’s own 2024 online student survey surfaced dozens of important findings that online leaders should consider as they chart their course. Two findings stand out as reflecting profound changes in online student preferences, and both are likely the result of pandemic-era experiences. First, all but 11 percent of online students told us that they are open to at least some synchronous activities in their program, likely the result of hundreds of online meetings during the pandemic. Similarly, they told us that the ideal time to communicate with recruiters/counselors from online programs is now during business hours. This is also likely to be related to the pandemic period, in which millions of people working from home began to regularly contend with some personal business during their day. Institutions should assess both of these factors as they think through student engagement (to address point #5), and the intense competition of the online space (to address point #3).

Pie charts showing how pandemic experiences have shaped student preferences for synchronous/asynchronous classes and when to follow-up

    7. Contracting institutions are not focusing on online enrollment.

Finally, we return to the new IPEDS data to see that institutions that have experienced the greatest enrollment contraction over the last five years demonstrate almost no access to fully online study (dark blue lines in the chart below), and only limited access to programs in which students can enroll in both online and classroom courses (light blue lines). Even where there has been some online or partially online growth, these efforts have not received enough attention to counterbalance the contraction among students enrolled in classroom-only programs (green lines). These data again make it clear (as stated in point #1) that institutions facing classroom-only contraction must either amend their goals to account for reduced enrollment or determine which online or hybrid programs would be most attractive to students in their region and then ensure that such offerings are visible in a highly competitive higher education market.

Chart showing contracting institutions are not focusing on online.

    Explore more at our webinar

Webinar: 5 Enrollment Trends You Need to Know for 2025

    Join us for a deeper dive into trends during our webinar, 5 Enrollment Trends You Need to Know for 2025. This discussion with me and a number of my RNL expert colleagues will look at research and trends that should shape strategic and tactical planning over the next 12 months. Particularly, as we enter what has been identified as the first year of the “demographic cliff,” data-informed decision-making will be more important to enrollment health than ever before. Register now.

    Source link

  • New Program Strategy: Go Deep, Not Wide

    New Program Strategy: Go Deep, Not Wide

    How to Strategically Expand Your Online Adult Degree Programs

    So you’ve built a successful online adult degree program. No small feat. Now you need to keep your foot on the gas to keep the momentum going. 

    Your first instinct might be to “go wide” with your program expansion strategy by launching a variety of new, unrelated programs to pair with your successful offering. While this diversification strategy might reap great rewards for consumer packaged goods giants like Unilever and Procter & Gamble, higher education is different. Your institution is different.  

    I find myself making the following recommendation over and over again when it comes to expanding online degree programs: Go deep, not wide. 

    This means building upon the success of your existing program by developing specialized offerings within the same field. The “go deep” method might not be the most popular, but in my experience, it’s often the most effective. Let’s break it down further — or should I say, dig deeper — to see if this approach is right for your school. 

    What Does Going Wide Mean for Your Online Adult Degree Programs?

    Let’s start with a hypothetical example: You have established a successful online Master of Business Administration (MBA) program with a positive reputation in the region. 

    Recently, you’ve heard cybersecurity and nursing degree programs are experiencing industry growth, so you decide to pursue programs in those areas next to build out a wider range of offerings. 

Unfortunately, this strategic path can be a mistake. Launching programs in unrelated fields means starting from scratch: the brand equity your MBA program has built in the region does not carry over to nursing or cybersecurity, and your marketing and recruitment resources get spread across disconnected efforts.

Expanding within the existing framework of business administration, however, allows you to amplify that established brand equity rather than starting from scratch with each new offering.

    Why Going Deep Is More Effective

    In higher education, the smart, strategic allocation of resources is crucial. You could put your institution’s limited resources toward a whole new program, such as a Bachelor of Science in Nursing (BSN) program or a Master of Science in Cybersecurity program. Or, you could just attach a new or adjacent offering to your successful online MBA program to channel your resources into an established program realm. 

    Forget efficacy for a moment. Which strategy sounds more efficient? 

    The good news is that going deep in one area of program offerings is often more effective and efficient. Instead of developing an entirely new adult degree program from scratch, you can simply add value to your existing online business program. 

    This might come in the form of added concentration options, such as MBA concentrations in entrepreneurship, accounting, finance, marketing, management, or strategic communications. 

    It could also involve adding another relevant degree program within the same area of study. For example, since you’re seeing a lot of success with your MBA program, you could add a finance or accounting degree program to build on the success and reputation of the established program.

    Key Benefits of Going Deep With Your Online Adult Degree Programs

    I’ve had experiences both ways: some institutions go wide, others go deep. For those that go wide, I’ve often seen siloed marketing efforts, inefficient allocations of resources, and sporadic and unpredictable enrollment. For those that go deep, I see the following benefits: 

    More Students Attracted

    Broadened appeal for students already interested in the primary program: By offering more concentrations within a well-established program, or adjacent degrees within the same field, your institution can appeal to a broader range of interests and career goals within your current student audience base.

    More options for prospective students due to increased specialization: Specialized degrees and concentrations allow students to tailor their education to their specific interests and career paths, making the program more attractive to applicants seeking focused expertise.

    Increased Marketing Efficiency

    Ability to leverage existing web pages and SEO for the main program: Concentration pages can be added as subpages to the main program’s page, which likely already has a strong search engine optimization (SEO) presence. This setup benefits from the existing search engine rankings and requires less effort than starting marketing from scratch for a new program.

    Faster path to high search rankings for new concentrations, creating a marketing loop: The SEO efforts for the main program boost the visibility of the new concentrations, which in turn contribute to the overall authority and ranking of the main program’s page. This synergy creates a self-reinforcing cycle that enhances the visibility of all offerings.

    Enhanced paid marketing efficiencies: Adding concentrations in areas where significant traffic already exists for broad terms — like “MBA,” “business degree,” or “finance degree” for an MBA program — allows institutions to more effectively utilize their paid advertising budgets. Expanding the program options for your existing traffic allows you to improve your click-to-lead conversion rates, increase your number of leads, and enhance your downstream successes in areas such as enrollments and completions. This approach allows for a more efficient use of marketing investments, providing more options for prospective students within the same budget.

    Faster Accreditation Process

    Streamlined accreditation process by expanding within an already accredited program: Adding concentrations within an existing program simplifies the accreditation process. Because the core program is already accredited, expanding it with concentrations requires fewer approvals and less bureaucracy than launching an entirely new program.

    Ready to Go Deep With One of Your Online Adult Degree Programs?

    If you’ve seen success with an online adult degree program offering, you’ve already taken a momentous step toward growth — which is something to be proud of. It also creates massive opportunity, and Archer Education is poised to help you capitalize on it. 

    Archer is different from other agencies. We work as your online growth enablement partner, helping you to foster self-sufficiency over the long haul through collaboration, storytelling, and cutting-edge student engagement technology. 

We’ve helped dozens of institutions increase enrollment and retention through a “go deep” approach, and your institution could be next. And once you’ve solidified the reputation and success of your core online offering by going deep, we’ll be ready to help you pivot to a wider approach to expand your position in online learning.

    Contact us today to learn more about what Archer can do for you. 

    Subscribe to the Higher Ed Marketing Journal:

    Source link

  • AI in Practice: Using ChatGPT to Create a Training Program

    AI in Practice: Using ChatGPT to Create a Training Program

    by Julie Burrell | September 24, 2024

    Like many HR professionals, Colorado Community College System’s Jennifer Parker was grappling with an increase in incivility on campus. She set about creating a civility training program that would be convenient and interactive. However, she faced a considerable hurdle: the challenges of creating a virtual training program from scratch, solo. Parker’s creative answer to one of these challenges — writing scripts for her under-10-minute videos — was to put ChatGPT to work for her. 

    How did she do it? This excerpt from her article, A Kinder Campus: Building an AI-Powered, Repeatable and Fun Civility Training Program, offers several tips.

    Using ChatGPT for Training and Professional Development

    I love using ChatGPT. It is such a great tool. Let me say that again: it’s such a great tool. I look at ChatGPT as a brainstorming partner. I don’t use it to write my scripts, but I do use it to get me started or to fix what I’ve written. I ask questions that I already know the answer to. I’m not using it for technical guidance in any way.

    What should you consider when you use ChatGPT for scriptwriting and training sessions?

1. Make ChatGPT an expert. In my prompts, I often use the phrase, “Act like a subject matter expert on [a topic].” This helps define both the need and the audience for the information. If I’m looking for a list of reasons why people are uncivil on college campuses, I might prompt with, “Act like an HR director of a college campus and give me a list of ways employees are acting uncivil in the workplace.” Using the phrase above gives parameters on the types of answers ChatGPT will offer, as well as shapes the perspective of the answers so they are for and about higher ed HR.
    2. Be specific about what you’re looking for. “I’m creating a training on active listening. This is for employees on a college campus. Create three scenarios in a classroom or office setting of employees acting unkind to each other. Also provide two solutions to those scenarios using active listening. Then, create a list of action steps I can use to teach employees how to actively listen based on these scenarios.” Being as specific as possible can help get you where you want to go. Once I get answers from ChatGPT, I can then decide if I need to change direction, start over or just get more ideas. There is no wrong step. It’s just you and your partner figuring things out.
    3. Sometimes ChatGPT can get stuck in a rut. It will start giving you the same or similar answers no matter how you reword things. My solution is to start a new conversation. I also change the prompt. Don’t be afraid to play around, to ask a million questions, or even tell ChatGPT it’s wrong. I often type something like, “That’s not what I’m looking for. You gave me a list of______, but what I need is ______. Please try again.” This helps the system to reset.
    4. Once I get close to what I want, I paste it all in another document, rewrite, and cite my sources. I use this document as an outline to rewrite it all in my own voice. I make sure it sounds like how I talk and write. This is key. No one wants to listen to ChatGPT’s voice. And I guarantee that people will know if you’re using its voice — it has a very conspicuous style. Once I’ve honed my script, I ensure that I find relevant sources to back the information up and cite the sources at the end of my documents, just in case I need to refer to them.
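    For readers who want to reuse prompt patterns like these programmatically rather than retyping them in the chat window, here is a minimal sketch using the OpenAI Python client. The model name and exact prompt wording are illustrative assumptions, not part of Parker’s workflow, and the output still needs the rewrite-in-your-own-voice step she describes.

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The same "act like a subject matter expert" framing described above, sent via the API.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever model your institution licenses
        messages=[
            {"role": "system",
             "content": "Act like an HR director on a college campus and a subject matter "
                        "expert on workplace civility."},
            {"role": "user",
             "content": "I'm creating a training on active listening for employees on a "
                        "college campus. Create three short scenarios of employees acting "
                        "unkind to each other, then give two active-listening solutions "
                        "for each scenario."},
        ],
    )

    print(response.choices[0].message.content)  # a draft to rewrite in your own voice
    ```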

    What you’ll see here is an example of how I used ChatGPT to help me write the scripts for the micro-session on conflict. It’s an iterative but replicable process. I knew what the session would cover, but I wanted to brainstorm with ChatGPT.

    Once I’ve had multiple conversations with the chatbot, I go back through the entire script and pick out what I want to use. I make sure it’s in my own voice and then I’m ready to record. I also used ChatGPT to help with creating the activities and discussion questions in the rest of the micro-session.

    I know using ChatGPT can feel overwhelming but rest assured that you can’t really make a mistake. (And if you’re worried the machines are going to take over, throw in a “Thank you!” or “You’re awesome!” occasionally for appeasement’s sake.)

    About the author: Jennifer Parker is assistant director of HR operations at the Colorado Community College System.

    More Resources

    • Read Parker’s full article on creating a civility training program with help from AI.
    • Learn more about ChatGPT and other chatbots.
    • Explore CUPA-HR’s Civility in the Workplace Toolkit.



    Source link

  • Getting Organic Engagement in a Mental Health Awareness Program – CUPA-HR

    Getting Organic Engagement in a Mental Health Awareness Program – CUPA-HR

    by Julie Burrell | July 15, 2024

    Employers have enormous sway over employee health. That’s one of the major takeaways from the CUPA-HR webinar An Integrated Approach to Fostering Workplace Well-Being, led by Mikel LaPorte and Laura Gottlieb of the University of Texas Health Science Center at San Antonio. They collected eye-opening data that helped them make the case to leadership for a mental health awareness campaign. In a Workforce Institute report they cited, employees say that managers have a greater impact on their mental health than their doctors or therapists — roughly the same impact as their spouse!

    In the webinar, LaPorte and Gottlieb discussed how their robust, research-driven suite of content is helping to normalize discussions of mental health on campus. They’re even being asked to present their well-being trainings at meetings, a sign that their push for mental health awareness is resonating organically.

    A One-Stop Shop for Mental Health

    The awareness campaign centers on their wellness website, which acts as a one-stop shop for campus mental health. (Right now, the site is internal-facing only, but the recorded webinar has rich details and example slides.) There, they organize their podcast episodes, articles and curated content, as well as marshal all the mental health resources currently available to staff, students and faculty.

    They’ve also found a way to make this initiative sustainable for HR in the long term by recruiting faculty subject matter experts to write on topics such as compassion fatigue. These experts are then interviewed on their quarterly podcast, Well-Being Wisdom. Tapping into faculty experts also ensures rigor in their sources, a significant step in getting buy-in from a population who requires well-vetted wellness practices.

    Getting Organic Engagement Starts With Leaders  

    LaPorte and Gottlieb have faced the typical challenge when rolling out a new campaign: engagement. Email fatigue means that sending messages through this channel isn’t always effective. But they’ve started to look at ways of increasing engagement through different communication channels, often in person.

    Direct outreach to team leaders is key. They regularly attend leadership meetings and ask different schools and departments to invite them in for facilitated mental health activities. (In the webinar, you can practice one of these, a brief guided meditation.) They’ve developed a leader guide and toolkit, including turnkey slides leaders can insert into decks to open or close discussions. Leaders are supplied with “can opener” discussion items, such as

    • “I made a difference yesterday when I…”
    • “Compassion is hardest when…”
    • “I show up every day because…”

    Not only does this provide opportunities to normalize conversations around mental health, but it also strengthens relationship-building — a key metric in workplace well-being. As CUPA-HR has found, job satisfaction and well-being is the strongest predictor of retention by far for higher ed employees.

    Campus leaders are now reaching out to the learning and leadership development team to request mental health activities at meetings. Some of the workshops offered include living in the age of distraction, mindful breathing techniques, and the science of happiness. For more details on UT Health San Antonio’s well-being offerings, including ways they’re revamping their program this fiscal year (think: less is more), view the recorded webinar here.



    Source link

Toward a Sector-Wide AI Tutor R&D Program

Toward a Sector-Wide AI Tutor R&D Program

EdTech seems to go through perpetual cycles of infatuation and disappointment with some new version of a personalized one-on-one tutor available to every learner everywhere. The recent strides in generative AI give me hope that the goal may finally be within reach this time. That said, I see the same sloppiness that marred so many EdTech infatuation moments. The concrete is being poured on educational applications built on a very powerful yet inherently unpredictable technology. We will build on a faulty foundation if we get it wrong now.

    I’ve seen this happen countless times before, both with individual applications and with entire application categories. For example, one reason we don’t get a lot of good data from publisher courseware and homework platforms is that many of them were simply not designed with learning analytics in mind. As hard as that is to believe, the last question we seem to ask when building a new EdTech application is “How will we know if it works?” Having failed to consider that question when building the early versions of their applications, publishers have had a difficult time solving for it later.

    In this post, I propose a programmatic, sector-wide approach to the challenge of building a solid foundation for AI tutors, balancing needs for speed, scalability, and safety.

    The temptation

    Before we get to the details, it’s worth considering why the idea of an AI tutor can be so alluring. I have always believed that education is primal. It’s hard-wired into humans. Not just learning but teaching. Our species should have been called homo docens. In a recent keynote on AI and durable skills, I argued that our tendency to teach and learn from each other through communications and transportation technologies formed the engine of human civilization’s advancement. That’s why so many of us have a memory of a great teacher who had a huge impact on our lives. It’s why the best longitudinal study we have, conducted by Gallup and Purdue University, provides empirical evidence that having one college professor who made us excited about learning can improve our lives across a wide range of outcomes, from economic prosperity to physical and mental health to our social lives. And it’s probably why the Khans’ video gives me chills:

    Check your own emotions right now. Did you have a visceral reaction to the video? I did.

Unfortunately, one small demonstration does not prove we have reached the goal. The Khanmigo AI tutor pilot has uncovered a number of problems, including factual errors like incorrect math and flawed tutoring. (Kudos to Khan Academy for being open about their state of progress, by the way.)

    We have not yet achieved that magical robot tutor. How do we get there? And how will we know that we’ve arrived?

    Start with data scientists, but don’t stop there

As I read some of the early literature, I see an all-too-familiar pattern: technologists build the platforms, data scientists decide which data are important to capture, and then they consult learning designers and researchers. However, all too often, the research design clearly originates from a technologist’s perspective, showing relatively little knowledge of detailed learning science methods or findings. A good example of this mindset’s strengths and weaknesses is Google’s recent paper, “Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach”. It reads like a paper largely conceived by technologists who work on improving generative AI and sharpened up by educational research specialists they consulted after they already had the research project largely defined.

    The paper proposes evaluation rubrics for five dimensions of generative AI tutors:

    • Clarity and Accuracy of Responses: This dimension evaluates how well the AI tutor delivers clear, correct, and understandable responses. The focus is on ensuring that the information provided by the AI is accurate and easy for students to comprehend. High clarity and accuracy are critical for effective learning and avoiding the spread of misinformation.
    • Contextual Relevance and Adaptivity: This dimension assesses the AI’s ability to provide responses that are contextually appropriate and adapt to the specific needs of each student. It includes the AI’s capability to tailor its guidance based on the student’s current understanding and the specific learning context. Adaptive learning helps in personalizing the educational experience, making it more relevant and engaging for each learner.
    • Engagement and Motivation: This dimension measures how effectively the AI tutor can engage and motivate students. It looks at the AI’s ability to maintain students’ interest and encourage their participation in the learning process. Engaging and motivating students is essential for sustained learning and for fostering a positive educational environment.
    • Error Handling and Feedback Quality: This dimension evaluates how well the AI handles errors and provides feedback. It examines the AI’s ability to recognize when a student makes a mistake and to offer constructive feedback that helps the student understand and learn from their errors. High-quality error handling and feedback are crucial for effective learning, as they guide students towards the correct understanding and improvement.
    • Ethical Considerations and Bias Mitigation: This dimension focuses on the ethical implications of using AI in education and the measures taken to mitigate bias. It includes evaluating how the AI handles sensitive topics, ensures fairness, and respects student privacy. Addressing ethical considerations and mitigating bias are vital to ensure that the AI supports equitable learning opportunities for all students.

Of these, the paper provides clear rubrics for the first four and is a little less concrete on the fifth. Notice, though, that most of these are similar to the dimensions that generative AI companies use to evaluate their products generically. That’s not bad. On the contrary, establishing standardized, education-specific rubrics with high inter-rater reliability across these five dimensions is the first component of the programmatic, sector-wide approach to AI tutors that we need. Notice these are all qualitative assessments. That’s not bad either but, for example, we do have quantitative data available on error handling in the form of feedback and hints (which I’ll delve into momentarily).
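    To make “high inter-rater reliability” concrete: before a rubric like this can anchor a sector-wide program, multiple trained raters would score the same tutor responses on each dimension and their agreement would be measured. Here is a minimal sketch of that check using Cohen’s kappa for two raters; the 1-to-3 scale and the scores are illustrative assumptions, not data from the Google paper.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Agreement between two raters beyond chance: kappa = (p_o - p_e) / (1 - p_e)."""
        n = len(rater_a)
        # Observed agreement: fraction of items both raters scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement, from each rater's marginal score frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum((freq_a[s] / n) * (freq_b[s] / n) for s in set(rater_a) | set(rater_b))
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical 1-3 rubric scores from two raters on the same ten tutor responses,
    # scored on the "Error Handling and Feedback Quality" dimension.
    rater_1 = [3, 2, 3, 1, 2, 3, 3, 2, 1, 3]
    rater_2 = [3, 2, 2, 1, 2, 3, 3, 3, 1, 3]
    print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.68, substantial agreement
    ```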

    That said, the paper lacks many critical research components, particularly regarding the LearnLM-Tutor software the researchers were testing. Let’s start with the authors not providing outcomes data anywhere in the 50-page paper. Did LearnLM-Tutor improve student outcomes? Make them worse? Have no effect? Work better in some contexts than others? We don’t know.

    We also don’t know how LearnLM-Tutor incorporates learning science. For example, on the question of cognitive load, the authors write,

    We designed LearnLM Tutor to manage cognitive load by breaking down complex tasks into smaller, manageable components and providing scaffolded support through hints and feedback. The goal is to maintain an optimal balance between intrinsic, extraneous, and germane cognitive load.

Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach

How, specifically, did they do this? What measures did they take? What relevant behaviors were they able to elicit from their LLM-based tutor? How are those behaviors grounded in specific research findings about cognitive load? How closely do they reproduce the principles that produced the research findings they’re drawing from? And did it work?

    We don’t know.

    The authors are also vague about Intelligent Tutoring Systems (ITS) research. They write,

    Systematic reviews and meta-analyses have shown that intelligent tutoring systems (ITS) can significantly improve student learning outcomes. For example, Kulik and Fletcher’s meta-analytic review demonstrates that ITS can lead to substantial improvements in learning compared to traditional instructional methods.

Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach

    That body of research was conducted over a relatively small number of ITS implementations because a relatively small number of these systems exist and have published research behind them. Further, the research often cites specific characteristics of these tutoring systems that lead to positive outcomes, with supporting data. Which of these characteristics does LearnLM Tutor support? Why do we have reason to believe that Google’s system will achieve the same results?

    We don’t know.

    I’m being a little unfair to the authors by critiquing the paper for what it isn’t about. Its qualitative, AI-aligned assessments are real contributions. They are necessary for a programmatic, sector-wide approach to AI tutor development. They simply are not sufficient.

    ITS data sets for fine-tuning

    ITS research is a good place to start if we’re looking to anchor our AI tutor improvement and testing program in solid research with data sets and experimental protocols that we can re-use and adapt. The first step is to explore how we can utilize the existing body of work to improve AI tutors today. The end goal is to develop standards for integrating the ongoing ITS research (and other data-backed research streams) into continuous improvement of AI tutors.

    One key short-term opportunity is hints and feedback. If, for the moment, we stick with the notion of a “tutor” as software engaging in adaptive, turn-based coaching of students on solving homework problems, then hints and feedback are core to the tutor’s function. ITS research has produced high-quality, publicly available data sets with good findings on these elements. The sector should construct, test, and refine an LLM fine-tuning data set on hints and feedback. This work must include developing standards for data preprocessing, quality assurance, and ethical use. These are non-trivial but achievable goals.
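    To make the fine-tuning idea concrete, here is a minimal sketch of the mechanical step: converting hypothetical ITS hint-and-feedback records into the chat-style JSONL format that current fine-tuning APIs generally accept. The field names and the single record are invented for illustration, not drawn from any actual ITS data set, and the hard parts described above (preprocessing standards, de-identification, quality assurance) sit around this step rather than in it.

    ```python
    import json

    # Hypothetical ITS log records: a student attempt, the hint the system gave, and
    # whether the student's next attempt succeeded. Field names are illustrative only.
    its_records = [
        {
            "problem": "Solve for x: 2x + 6 = 14",
            "student_attempt": "x = 10",
            "hint_given": "Check your first step: did you add 6 or subtract it from both sides?",
            "next_attempt_correct": True,
        },
    ]

    def to_finetune_example(rec):
        """Map one ITS hint interaction to a chat-format fine-tuning example."""
        return {
            "messages": [
                {"role": "system",
                 "content": "You are a math tutor. Give one targeted hint; do not reveal the answer."},
                {"role": "user",
                 "content": f"Problem: {rec['problem']}\nMy answer: {rec['student_attempt']}"},
                {"role": "assistant", "content": rec["hint_given"]},
            ]
        }

    with open("hints_feedback.jsonl", "w") as f:
        for rec in its_records:
            # A real pipeline would filter and audit here, e.g. keep only hints followed
            # by a correct next attempt and strip anything that could identify a student.
            if rec["next_attempt_correct"]:
                f.write(json.dumps(to_finetune_example(rec)) + "\n")
    ```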

    The hints and feedback work could form a beachhead. It would help us identify gaps in existing research, challenges in using ITS data this way, and the effectiveness of fine-tuning. For example, I’d be interested in seeing whether the experimental designs used in hints and feedback ITS research papers could be replicated with an LLM that has been fine-tuned using the research data. In the process, we want to adopt and standardize protocols for preserving student privacy, protecting author rights, and other concerns that are generally taken into account in high-quality IRB-approved studies. These practices should be baked into the technology itself when possible and supported by evaluation rubrics when it is not.

    While this foundational work is being undertaken, the ITS research community could review its other findings and data sets to see which additional research data sets could be harnessed to improve LLM tutors and develop a research agenda that strengthens the bridge being built between that research and LLM tutoring.

The larger limitations of this approach will likely spring from the uneven and relatively sparse coverage of course subjects, designs, and student populations. We can learn a lot from ITS research about developing a strategy for using these sorts of data. But to achieve the breadth and depth of data required, we’ll need to augment this body of work with another approach that can scale quickly.

    Expanding data sets through interoperability

Hints and feedback are great examples of a massive missed opportunity. Virtually all LMSs, courseware, and homework platforms support feedback. Many also support hints. Combined, these systems represent a massive opportunity to gather data about usage and effectiveness of hints and feedback across a wide range of subjects and contexts. We already know how the relevant data need to be represented for research purposes because we have examples from ITS implementations. Note that these data include both design elements—like the assessment question, the hints, the feedback, and annotations about the pedagogical intent—and student performance when students use the hints and feedback. So if, for example, we were looking at 1EdTech standards, we would need to expand both Common Cartridge and Caliper standards to incorporate these elements.
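    As a purely illustrative sketch of what such an expansion might carry, here is a Caliper-style event extended with hint-design metadata and a performance signal. None of these extension fields exist in the actual 1EdTech Caliper specification; they only indicate the kind of data the standards would need to represent.

    ```python
    # Hypothetical, illustrative only: a Caliper-style event extended with hint-design
    # metadata. These fields are NOT part of the real 1EdTech Caliper specification.
    hint_used_event = {
        "type": "ToolUseEvent",
        "action": "Used",
        "actor": {"id": "urn:student:anon-4821", "type": "Person"},  # de-identified learner
        "object": {
            "id": "urn:course:econ101:quiz3:item7:hint2",
            "type": "Hint",                                           # assumed extension type
            "pedagogicalIntent": "address sign-error misconception",  # design-time annotation
            "learningObjective": "urn:objective:solve-linear-equations",
        },
        "generated": {
            "type": "HintOutcome",                                    # assumed extension type
            "nextAttemptCorrect": True,  # the performance signal that makes the data useful
        },
        "eventTime": "2025-01-15T14:32:00Z",
    }
    ```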

This approach offers several benefits. First, we would gain access to massive cross-platform data sets that could be used to fine-tune AI models. Second, these standards would enable scaled platforms like LMSs to support proven methods for testing the quality of hints and feedback elements. Doing so would provide benefit to students using today’s platforms while enabling improvement of the training data sets for AI tutors. The data would be extremely messy, especially at first. But the interoperability would enable a virtuous cycle of continuous improvement.

The influence of interoperability standards on shaping EdTech is often underestimated and misunderstood. 1EdTech was first created when publishers realized they needed a way to get their content into new teaching systems that were then called Instructional Management Systems (IMS). Common Cartridge was the first standard created by the organization now known as 1EdTech. Later, Common Cartridge export made migration from one LMS to another much more feasible, thus aiding in breaking the product category out of what was then a virtual monopoly. And I would guess that perhaps 30% or more of the start-ups at the annual ASU+GSV conference would not exist if they could not integrate with the LMS via the Learning Tools Interoperability (LTI) standard. Interoperability is a vector for accelerating change. Creating interoperability around hints and feedback—including both the importing of them into learning systems and passing student performance impact data—could accelerate the adoption of effective interactive tutoring responses, whether they are delivered by AI or more traditional means.

    Again, hints and feedback are the beachhead, not the end game. Ultimately, we want to capture high-quality training data across a broad range of contexts on the full spectrum of pedagogical approaches.

    Capturing learning design

    If we widen the view beyond the narrow goal of good turn-taking tutorial responses, we really want our AI to understand the full scope of pedagogical intent and which pedagogical moves have the desired effect (to the degree the latter is measurable). Another simple example of a construct we often want to capture in relation to the full design is the learning objective. ChatGPT has a reasonably good native understanding of learning objectives, how to craft them, and how they relate to gross elements of a learning design like assessments. It could improve significantly if it were trained on annotated data. Further, developing annotations for a broad spectrum of course design elements could improve its tutoring output substantially. For example, well-designed incorrect answers to questions (or “distractors”) often test for misconceptions regarding a learning objective. If distractors in a training set were specifically tagged as such, the AI could better learn to identify and probe for misconceptions. This is a subtle and difficult skill even for human experts but it is also a critical capability for a tutor (whether human or otherwise).
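    As a sketch of the kind of design-time annotation described here, a misconception-tagged assessment item might look something like the following. The schema is invented for illustration; no current interoperability standard defines these fields.

    ```python
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Distractor:
        text: str
        misconception: Optional[str] = None  # the specific misunderstanding this wrong answer probes

    @dataclass
    class AssessmentItem:
        stem: str
        learning_objective: str
        correct_answer: str
        distractors: list = field(default_factory=list)

    # Illustrative item: each distractor is tagged with the misconception it tests for,
    # which is exactly the annotation an AI tutor could learn to recognize and probe.
    item = AssessmentItem(
        stem="A coffee shop raises prices 10% and total revenue falls. Demand is likely:",
        learning_objective="Interpret price elasticity of demand",
        correct_answer="elastic",
        distractors=[
            Distractor("inelastic", misconception="reverses the direction of the revenue test"),
            Distractor("unit elastic", misconception="forgets unit elasticity leaves revenue unchanged"),
        ],
    )
    ```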

    This is one of several reasons why I believe focusing effort on developing AI learning design assistants supporting current-generation learning platforms is advantageous. We can capture a rich array of learning design moves at design time. Some of these we already know how to capture through decades of ITS design. Others are almost completely dark. We have very little data on design intent and even less on the impact of specific design elements on achieving the intended learning goals. I’m in the very early stages of exploring this problem now. Despite having decades of experience in the field, I am astonished at the variability in learning design approaches, much of which is motivated and little of which is tested (or even known within individual institutions).

    On the other side, at-scale platforms like LMSs have implemented many features in common that are not captured in today’s interoperability standards. For example, every LMS I know of implements learning objectives and has some means of linking them to activities. Implementation details may vary. But we are nowhere close to capturing even the least-common-denominator functionality. Importantly, many of these functions are not widely used because of the labor involved. While LMSs can link learning objectives to learning activities, many course builders don’t do it. If an AI could help capture these learning design relationships, and if it could export content to a learning platform in a standard format that preserves those elements, we would have the foundations for more useful learning analytics, including learning design efficacy analytics. Those analytics, in turn, could drive improvement of the course designs, creating a virtuous cycle. These data could then be exported for model training (with proper privacy controls and author permissions, of course). Meanwhile, less common features such as flagging a distractor as testing for a misconception could be included as optional elements, creating positive pressure to improve both the quality of the learning experiences delivered in current-generation systems and the quality of the data sets for training AI.
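
    A minimal sketch of that least-common-denominator linkage might look like the following. The record format and field names are assumptions for illustration; no current standard defines them.

        from dataclasses import dataclass, asdict
        import json
        from typing import Optional


        @dataclass
        class ObjectiveActivityLink:
            """Hypothetical export record linking a learning objective to an activity,
            with room for the efficacy signal that analytics could later fill in."""
            objective_id: str
            objective_text: str
            activity_id: str
            activity_type: str                     # e.g., "quiz", "assignment", "discussion"
            mastery_rate: Optional[float] = None   # populated from platform telemetry, if shared


        links = [
            ObjectiveActivityLink("LO-1", "Add fractions with unlike denominators",
                                  "quiz-07", "quiz", mastery_rate=0.58),
            ObjectiveActivityLink("LO-1", "Add fractions with unlike denominators",
                                  "practice-03", "assignment"),
        ]

        # Exported in a common format, records like these let analytics ask which
        # activities actually move mastery of LO-1, feed the answer back to course
        # designers, and (with permission) contribute to model training data.
        print(json.dumps([asdict(link) for link in links], indent=2))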

    Working at design time also puts a human in the loop. Let’s say our workflow follows these steps (a data sketch follows the list):

    1. The AI is prompted to conduct turn-taking design interviews of human experts, following a protocol intended to capture all the important design elements.
    2. The AI generates a draft of the learning design. Behind the scenes, the design elements are both shaped by and associated with the metadata schemas from the interoperability standards.
    3. The human experts edit the design. These edits are captured, along with annotations regarding the reasons for the edits. (Think Word or Google Docs with comments.) This becomes one data set that can be used to further fine-tune the model, either generally or for specific populations and contexts.
    4. The designs are exported using the interoperability standards into production learning platforms. The complementary learning efficacy analytics standards provide telemetry on the student behavior and performance within a given design. This becomes another data set that could potentially be used for improving the model.
    5. The human learning designers improve the course designs based on the standards-enabled telemetry. They test the revised course designs for efficacy. This becomes yet another potential data set. Given this final set in the chain, we can look at designer input into the model, the model’s output, the changes human designers made, and improved iterations of the original design—all either aggregated across populations and contexts or focused on a specific population and context.
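
    Here is a rough sketch of how the data sets from steps 3 through 5 could line up for model tuning. The record shapes are my own invention, not a proposed standard.

        from dataclasses import dataclass, asdict
        import json


        @dataclass
        class DesignEdit:
            """Step 3: a human expert's revision of the AI-drafted design, with the reason."""
            element_id: str
            ai_draft: str
            human_revision: str
            reason: str   # the annotation that makes the edit useful as tuning data


        @dataclass
        class DesignOutcome:
            """Steps 4-5: telemetry on how a deployed version of the element performed."""
            element_id: str
            version: int
            mastery_rate: float


        edits = [
            DesignEdit("activity-12",
                       ai_draft="Lecture video on adding fractions",
                       human_revision="Worked-example sequence followed by practice items",
                       reason="Passive video alone rarely builds procedural fluency"),
        ]
        outcomes = [
            DesignOutcome("activity-12", version=1, mastery_rate=0.51),
            DesignOutcome("activity-12", version=2, mastery_rate=0.66),
        ]

        # Joined on element_id, these records let us compare the model's output, the
        # human corrections, and the measured effect of each design iteration, either
        # aggregated broadly or filtered to a specific population and context.
        print(json.dumps({"edits": [asdict(e) for e in edits],
                          "outcomes": [asdict(o) for o in outcomes]}, indent=2))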

    This can be accomplished using the learning platforms that exist today, at scale. Humans would always supervise and revise the content before it reaches the students, and humans would decide which data they would share under what conditions for the purposes of model tuning. The use of the data and the pace of movement toward student-facing AI become policy-driven decisions rather than technology-driven ones. At each of the steps above, humans make decisions. The process allows for control and visibility regarding the many ethical challenges involved in integrating AI into education. Among other things, this workflow creates a policy laboratory.

    This approach doesn’t rule out simultaneously testing and using student-facing AI immediately. Again, that becomes a question of policy.

    Conclusion

    My intention here has been to outline a suite of “shovel-ready” initiatives that could be implemented relatively quickly at scale. It is not comprehensive; nor does it even attempt to touch the rich range of critical research projects that are more investigational. On the contrary, the approach I outline here should open up a lot of new territory for both research and implementation while ensuring that the concrete already being poured results in a safe, reliable, science- and policy-driven foundation.

    We can’t just sit by and let AI happen to us and our students. Nor can we let technologists and corporations become the primary drivers of the direction we take. While I’ve seen many policy white papers and AI ethics rubrics being produced, our approach to understanding the potential and mitigating the risks of EdTech AI in general and EdTech tutors in particular is moving at a snail’s pace relative to product development and implementation. We have to implement a broad, coordinated response.

    Now.

    Source link

  • UT Dallas’s BRIGHT Leaders Program: An All-Access Approach to Leadership Training and Career Development

    UT Dallas’s BRIGHT Leaders Program: An All-Access Approach to Leadership Training and Career Development

    In 2020, the human resources team at the University of Texas at Dallas was set to launch its leadership and professional development program, the culmination of 18 months of dedicated work. As the pandemic took hold, the question confronting Colleen Dutton, chief human resources officer, and her team was, “Now what do we do?” In their recent webinar for CUPA-HR, Dutton and Jillian McNally, a talent development specialist, explained how their COVID-19 pivot was a blessing in disguise, helping them completely reconstruct leadership training from the ground up.

    The resulting, reimagined program — BRIGHT Leaders — received a 2023 CUPA-HR Innovation Award for groundbreaking thinking in higher ed HR. BRIGHT Leaders speaks to the needs of today’s employees, who want flexible professional development programs, and it encourages everyone on campus to lead from where they are.

    An All-Access Pass for Career Development

    UTD innovated by first addressing the needs of remote and hybrid employees. Recognizing that “our workforce was never going to be the same after COVID,” Dutton says, they transformed their original plan from an in-person, cohort model into an accessible, inclusive training program they call an “all-access pass.” Any employee can take any leadership training session at any time. No matter their position or leadership level, all staff and faculty (and even students) are welcome to attend, and there’s no selective process that limits participation.

    Their new, all-access approach inspired a mantra within HR: “Organizations that treat every employee as a leader create the best leaders and the best cultures.” This open-access philosophy means that parking attendants and vice presidents might be in the same leadership development session. Employees attend trainings on their own schedules, whether on their smartphones or at their home offices. UTD also offers three self-paced pathways — Foundations, Leadership and Supervisor Essentials, and Administrative Support Essentials — that employees can complete to earn a digital badge. They’re also encouraged to leverage this training when applying to open positions on campus.

    Some of the Microsoft Teams-based programs UTD established in their first year include the Lessons from Leaders series, the BRIGHT Leaders Book Club and Teaching Leadership Compassion (TLC). They also partner with e-learning companies to supplement their internal training materials.

    Dutton and McNally note that sessions don’t always have to be conducted by HR. Campus partners are encouraged to lead trainings that fall within the BRIGHT framework: Bold, Responsible, Inclusive, Growing, High Performing and Transformative. For example, an upcoming book club will be led by a team consisting of the dean of engineering and the athletic director.

    Making UTD an Employer of Choice

    In line with UTD’s commitment to workplace culture, the BRIGHT Leaders program speaks to the needs of a changing workforce. Early-career professionals don’t want to wait five years to be eligible for leadership training, Dutton stresses. “They want access to these leadership opportunities and trainings now.”

    UTD’s flexible professional development training approach helps confront a concerning trend: almost half of higher ed employees (44%) surveyed in The CUPA-HR 2023 Higher Education Employee Retention Survey disagree that they have opportunities for advancement, and one-third (34%) do not believe that their institution invests in their career development. Offering robust, flexible professional development and leadership opportunities is part of UTD’s commitment to be an employer of choice in North Texas.

    For more specifics on the BRIGHT Leaders program, view the recorded webinar. You’ll learn how HR built cross-campus partnerships, how they’ve measured their return on investment and how they’re building on their successes to train future leaders.


    Source link

  • Proposed Changes to the H-1B Visa Program – CUPA-HR

    Proposed Changes to the H-1B Visa Program – CUPA-HR

    by CUPA-HR | November 9, 2023

    On October 23, 2023, U.S. Citizenship and Immigration Services (USCIS) issued a proposed rule that aims to improve the H-1B program by simplifying the application process, increasing the program’s efficiency, offering more advantages and flexibilities to both petitioners and beneficiaries, and strengthening the program’s integrity measures.

    Background

    The H-1B visa program is pivotal for many sectors, particularly higher education. It permits U.S. employers to employ foreign professionals in specialty occupations requiring specialized knowledge and a bachelor’s degree or higher or its equivalent. The program is subject to an annual limit of 65,000 visas, with an additional allocation of 20,000 visas reserved for foreign nationals who have earned a U.S. master’s degree or higher. Certain workers are exempt from this cap, including those at higher education institutions or affiliated nonprofit entities and nonprofit or governmental research organizations.

    Highlights of the Proposed Rule

    Prompted by challenges with the H-1B visa lottery, USCIS has prioritized a proposed rule to address the system’s integrity. The move comes after a surge in demand for H-1B visas led to the adoption of a lottery for fair distribution. However, with the fiscal year 2024 seeing a historic 758,994 registrations and over half of the candidates being entered multiple times, there was concern over potential exploitation to skew selection chances. This proposed rule is a direct response intended to strengthen the registration process and prevent fraud.

    Beyond addressing lottery concerns, the proposal makes critical revisions to underlying H-1B regulations. It seeks to formalize policies currently in place through guidance and tweak specific regulatory aspects.

    Amending the Definition of a “Specialty Occupation.” At present, a “specialty occupation” is identified as a job that requires unique, specialized knowledge in fields like engineering, medicine, education, business specialties, the arts, etc., and it typically mandates a bachelor’s degree or higher in a specific area or its equivalent. USCIS is proposing to refine the definition of a “specialty occupation” to ensure that the required degree for such positions is directly related to the job duties. The proposal specifies that general degrees without specialized knowledge do not meet the criteria, and petitioners must prove the connection between the degree field(s) and the occupation’s duties. The rule would allow for different specific degrees to qualify for a position if each degree directly relates to the occupation’s responsibilities. For example, a bachelor’s degree in either education or chemistry could be suitable for a chemistry teacher’s position if both are relevant to the job. The changes emphasize that the mere possibility of qualifying for a position with an unrelated degree is insufficient, and specific degrees must impart highly specialized knowledge pertinent to the role.

    Amending the Criteria for Specialty Occupation Positions. USCIS is proposing updates to the criteria defining a “specialty occupation” under the Immigration and Nationality Act. This proposal includes a clarification of the term “normally,” which, in the context of a specialty occupation, indicates that a bachelor’s degree is typically, but not always, necessary for the profession. USCIS is aiming to standardize this term to reflect a type, standard, or regular pattern, reinforcing that the term “normally” does not equate to “always.”

    Extending F-1 Cap-Gap Protection. USCIS is proposing to revise the Cap-Gap provisions, which currently extend employment authorization for F-1 students awaiting H-1B visa approval until October 1 of the fiscal year for which H–1B visa classification has been requested. The Cap-Gap refers to the period between the end of an F-1 student’s Optional Practical Training (OPT) and the start of their H-1B status, which can lead to a gap in lawful status or employment authorization. The new proposal seeks to extend this period until April 1 of the fiscal year for which the H-1B visa is filed, or until the visa is approved, to better address processing delays and reduce the risk of employment authorization interruption. To be eligible, the H-1B petition must be legitimate and filed on time. This change is intended to support the U.S. in attracting and maintaining skilled international workers by providing a more reliable transition from student to professional status.

    Cap-Exempt Organizations. USCIS is redefining which employers are exempt from the H-1B visa cap. The proposed changes involve revising the definition of “nonprofit research organization” and “governmental research organization” from being “primarily engaged” in research to conducting research as a “fundamental activity.” This proposed change would enable organizations that might not focus primarily on research, but still fundamentally engage in such activities, to qualify for the exemption. Additionally, USCIS aims to accommodate beneficiaries not directly employed by a qualifying organization but who still perform essential, mission-critical work.

    Deference. USCIS is proposing to codify a policy of deference to prior adjudications of Form I-129 petitions, as delineated in the USCIS Policy Manual, mandating that officers give precedence to earlier decisions when the same parties and material facts recur. This proposal, however, includes stipulations that such deference is not required if there were material errors in the initial approval, if substantial changes in circumstances or eligibility have occurred, or if new and pertinent information emerges that could negatively influence the eligibility assessment.

    Next Steps

    While this summary captures key elements of the proposed changes, our members should be aware that the rule contains other important provisions that warrant careful review. These additional provisions could also significantly impact the H-1B visa program and its beneficiaries, and it is crucial for all interested parties to examine the proposed rule in its entirety to understand its full implications.

    USCIS is accepting public comment on its proposal through December 22, 2023. CUPA-HR is evaluating the proposed revisions and will be working with other higher education associations to submit comprehensive comments for the agency’s consideration. As USCIS moves towards finalizing the proposals within this rulemaking, potentially through one or more final rules depending on the availability of agency resources, CUPA-HR will keep its members informed of all significant updates and outcomes.



    Source link

  • DHS Announces Proposed Pilot Program for Non-E-Verify Employers to Use Remote I-9 Document Examination – CUPA-HR

    DHS Announces Proposed Pilot Program for Non-E-Verify Employers to Use Remote I-9 Document Examination – CUPA-HR

    by CUPA-HR | August 9, 2023

    On August 3, 2023, the Department of Homeland Security (DHS) published a notice in the Federal Register seeking comments on a potential pilot program to allow employers not enrolled in E-Verify to harness remote examination procedures for the Form I-9, Employment Eligibility Verification.

    Background

    DHS’s recent actions are built upon a series of moves aimed at modernizing and making more flexible the employment verification process. On July 25, 2023, the DHS rolled out a final rule enabling the Secretary of Homeland Security to authorize optional alternative examination practices for employers when inspecting an individual’s identity and employment authorization documents, as mandated by the Form I-9. The rule creates a framework under which DHS may implement permanent flexibilities under specified conditions, start pilot procedures with respect to the examination of documents, or react to crises similar to the COVID-19 pandemic.

    Alongside the final rule, DHS published a notice in the Federal Register authorizing a remote document examination procedure for employers who are participants in good standing in E-Verify and announced that it would soon disclose details about a pilot program open to a broader category of businesses.

    Key Highlights of the Proposed Non-E-Verify Remote Document Examination Pilot 

    DHS’s proposal primarily revolves around the following points:

    • Purpose: Immigration and Customs Enforcement (ICE) intends to gauge the security impact of remote verification compared to traditional in-person examination of the Form I-9. This involves evaluating potential consequences like error rates, fraud and discriminatory practices.
    • Pilot Procedure: The new pilot program would mirror the already authorized alternative method for E-Verify employers, including aspects such as remote document inspection, document retention and anti-discrimination measures.
    • Eligibility: The pilot program is open to most employers with 500 or fewer employees. However, E-Verify employers are excluded since DHS has already greenlit an alternative for them.
    • Application Process: Interested employers must fill out the draft application form, which DHS has made available online. This form captures details like company information, terms of participation, participant obligations, and more.
    • Information Collection: Employers wishing to join the pilot would be required to complete the formal application linked above. ICE would periodically seek data from these employers, such as the number of new hires or how many employees asked for a physical inspection.
    • Documentation: Participating companies must electronically store clear copies of all supporting documents provided by individuals for the Form I-9. They might also be required to undertake mandatory trainings for detecting fraudulent documents and preventing discrimination.
    • Onsite/Hybrid Employees: Companies might face restrictions or a set timeframe for onsite or hybrid employees, dictating when they must physically check the Form I-9 after the initial remote assessment.
    • Audits and Investigations: All employers, including pilot participants, are liable for audits and evaluations. DHS plans to contrast data from these assessments to discern any systemic differences between the new method and the traditional one.

    What’s Next: Seeking Public Comments by October 2 

    DHS is actively seeking feedback from the public regarding the proposed pilot and the draft application form. The department encourages stakeholders to consider and provide insights on the following points:

    • Practical Utility: Assess if the proposed information requirement is vital for the agency’s proper functioning and whether the data collected will be practically useful.
    • Accuracy and Validity: Analyze the agency’s estimation of the information collection’s burden, ensuring the methods and assumptions are valid.
    • Enhance Information Quality: Offer suggestions to improve the clarity, utility and overall quality of the data collected.
    • Minimize Collection Burden: Propose ways to ease the data collection process for respondents, exploring technological solutions such as electronic submissions.

    In light of this, CUPA-HR plans to carefully evaluate the notice and associated application. Based on its review, CUPA-HR is considering submitting comments to provide valuable insights to DHS. CUPA-HR will keep members apprised of any updates regarding this proposed pilot program and other changes to Form I-9 alternative examination procedures.



    Source link

  • ALP 2023: Another Successful Association Leadership Program Is in the Books – CUPA-HR

    ALP 2023: Another Successful Association Leadership Program Is in the Books – CUPA-HR

    by CUPA-HR | July 26, 2023

    This blog post was contributed by Jennifer Addleman, member of CUPA-HR’s Southern Region board of directors and HR director at Rollins College.

    And that’s a wrap on CUPA-HR’s 2023 Association Leadership Program (ALP) in Omaha, Nebraska! On July 13-14, leaders from CUPA-HR’s national, regional and chapter boards, as well as CUPA-HR’s corporate partners, gathered to discuss higher ed HR challenges, share successes, make connections and build relationships. I was fortunate to attend as a representative from the Southern Region board, and my mind is still reeling from two full days of content and networking with talented HR leaders from across the country. Here are some of my takeaways:

    • Lead with positivity, start with a win, and end with gratitude.
    • So much is happening on the regulatory and legislative front that will affect higher ed and the labor and employment landscape, and CUPA-HR is serving as the voice of higher ed on these issues with lawmakers.
    • The CUPA-HR Knowledge Center continues to be a go-to resource for all things higher ed HR. In addition to HR toolkits that are constantly being updated or added, you’ll also find DEI resources, e-learning courses, a job description index, CUPA-HR’s Higher Ed HR Magazine and more. If you haven’t checked out the Knowledge Center lately, I encourage you to do so!
    • We in higher ed HR are doing important work — what we do matters, and we are impacting lives.
    • CUPA-HR continues to do valuable work in data collection and research — our data is the platinum standard! Learn more about CUPA-HR’s research in the Research Center (find the link in the menu on the CUPA-HR home page).
    • We must continue to make mental health a priority. As HR practitioners, we often prioritize taking care of others, but we should not be ashamed to take care of ourselves first! Find resources in the Mental Health and Health and Well-Being Knowledge Center toolkits.
    • You can walk to Iowa from Omaha! Who knew!

    Sharing some quality time with higher ed HR peers from across the country, commiserating about and discussing strategies to overcome our biggest challenges, and meeting new people and making new connections is what CUPA-HR’s Association Leadership Program is all about. If you’re considering exploring volunteer leadership opportunities within the association, do it! You won’t regret it — in fact, you’re guaranteed to learn and grow, and have a great time doing it!



    Source link