Category: Featured

  • PeopleAdmin A PowerSchool Company


    Tips and Best Practices for Higher Ed HR Compliance

    Compliance is a key and complicated part of the human resources role, and this is especially true in higher education. The HigherEd industry is highly regulated at both the state and federal levels, making the job even more complex for HR teams on a college or university campus. Noncompliance, even when unintentional, can have serious consequences for an institution, ranging from legal and financial penalties to reputational damage. Read on for some top tips and best practices for ensuring higher ed HR compliance.

    Challenges faced by Higher Ed

    There are several HR challenges faced by HigherEd that are unique, making the world of compliance even more complicated. They include:

    1. Regulatory complexity: Higher education institutions must navigate a web of federal and state regulations, including Title IX, FLSA, and ADA, which can be particularly intricate in an academic setting. Teams must navigate these human resources rules, regulations, and procedures to remain compliant.
    2. Faculty and staff diversity: Ensuring compliance with equal employment opportunity laws while managing a diverse workforce of faculty and staff presents unique challenges.
    3. Student employment: Compliance with regulations related to student employment, such as work-study programs and internships, adds another layer of complexity.
    4. Varying types of employees: Colleges and universities may have to deal with different regulations for faculty, staff, part-time faculty, hourly workers, summer employees, and more—campuses have a greater variety of types of workers than many other organizations.

    Higher Education HR Compliance Best Practices

    1. Stay informed about regulations: Regularly monitor federal and state regulations to ensure compliance with labor laws, as they often evolve in response to changes in the workforce.
    2. Document policies and procedures: Properly document all company policies and procedures, and ensure easy access for employees. This includes creating an accessible and easy-to-navigate employee handbook.
    3. Regularly audit HR policies: Conduct regular HR audits to ensure that HR policies, such as leave policy, non-discrimination policy, and compensation policy, are compliant.
    4. Establish specialized HR departments: Consider establishing specialized HR departments within colleges to address the unique objectives of different divisions.
    5. Promote diversity, equity, and inclusion: Initiate regular conversations on campus among staff and departments to promote diversity and inclusion, helping the institution build a more inclusive environment.
    6. Leverage technology: Use HR compliance software to track regulatory requirements and obligations. Modern, digitized workflows can streamline the hiring process, improve data security, and facilitate compliance with regulations. HR compliance software can centralize and automate compliance-related tasks, such as tracking employee certifications and managing leave policies, thereby reducing the risk of non-compliance.
    7. Standardization: Standardize hiring and interviewing procedures to ensure fair hiring.
    8. Training and education: Provide thorough orientation for new hires that includes their responsibilities for HR compliance and clearly explains policies for reporting noncompliance. Provide ongoing training to HR staff, faculty, and supervisors on compliance requirements and best practices. Create higher education compliance checklists to stay on top of things.
    9. Get leadership involved: Encourage executive leaders to champion ethics and compliance, and provide ways for employees to report unethical activity.
    10. Collaboration across departments: Encourage collaboration between HR, legal, and academic departments to ensure a comprehensive approach to compliance.

    The Role of Technology in Higher Ed HR Compliance

    When it comes to HigherEd HR compliance, the right technology is key. A platform built for your HR needs supports your team in so many ways, including:

    1. Efficiency and streamlining: Technology removes administrative burdens, eliminates duplicate processes, and centralizes information, making data insights more accessible. This streamlines communication, increases security, and automates tasks, thereby saving time for HR professionals.
    2. Data management: HR technology allows for the centralization and management of vast amounts of information related to faculty and staff recruitment, onboarding, compensation, performance management, and compliance training. This helps HR professionals find more insight into information like retention, growth, and historical data about positions and job duties, and keep that information secure and accessible for compliance purposes.
    3. Strategic role of HR: By leveraging technology, HR professionals can engage in more strategic work, such as employer branding to attract talent and providing insights into retention and growth, rather than being bogged down by manual processes and administrative tasks.
    4. Data-driven decision making: HR technology provides access to critical data, which is essential for financial forecasting, succession planning, and staff performance management, enabling HR teams to make informed, data-driven decisions.

    Luckily, PeopleAdmin has the technology your team needs to keep track of employee information, stay audit-ready, and manage your employees. Built just for HigherEd, PeopleAdmin’s tools have the customizable, flexible workflows you need to tackle any HR challenge. Check out:

    • Employee Records: With all documents in one portal and visibility into processes, you’ll ensure compliance and reduce time-consuming records management tasks. Plus, digital forms management means all faculty and staff have self-service, mobile-friendly access to HR forms, from change-of-address to FMLA documents, without requesting them in person or via email.
    • Applicant Tracking System: In our powerful ATS, real-time dashboards with easy-to-understand visuals make it easy to interpret your data. Standard reports help you stay EEO compliant and audit-ready based on federal and state regulations. Customizable reports can be automated so you can share information with key stakeholders on your own schedule.
    • Insights: Insights helps you uncover key insights into EEO compliance, budget planning, balanced hiring, faculty and staff hiring and retention, and more. And with automated reporting, you can easily schedule specific, easy-to-understand reports for institution leaders and key stakeholders — empowering data-based decision making across the institution.


  • PeopleAdmin A PowerSchool Company


    Navigating Change in Higher Education

    Change is a constant in higher education, and institutions are continually evolving to meet the demands of the modern world. In a recent PeopleAdmin webinar, Mastering Change Management in HigherEd’s Digital Transition, experts from Central Oregon Community College and Chapman University shared their experiences with change management during two large technology implementations, offering tips and best practices for other institutions anticipating change in the new year. In a poll at the start of the webinar, 95% of attendees responded that they would be facing a change in the new year. 31% are facing a major change, while 64% are navigating minor adjustments. If you’re among that 95%, read on below.

     

    Case Study 1: Central Oregon Community College

    Laurel Kent, IT Project Manager at Central Oregon Community College, explored her team’s journey through a Performance Management upgrade that took place over the past year.

    Case Study Focus: Performance Review Transformation

    • Moving from manual, PDF-based processes to a digital platform within PeopleAdmin.
    • Addressing issues like inconsistency, versioning, and tracking associated with PDF processes.
    • Utilizing the PeopleAdmin portal to streamline performance evaluation tracking.

    Wins and Lessons Learned:

    • Leadership buy-in and clear project vision: Project support from the CHRO and CIO helped secure the appropriate resources. A dedicated project manager and functional analyst team, working collaboratively with HR, oversaw project timelines and deliverables to keep things on track.
    • Clear project plan and frequent communication: Sharing the project progress and updates regularly across campus meant that end-users knew what to expect.
    • Clear Roles and Timelines: Regular and predictable working sessions, clearly defined roles, and a reasonable timeline for testing and implementation kept things moving forward.
    • Relationships matter: Make sure that you have users across campus who can answer questions and provide feedback.
    • Build in time to fine-tune the product: A lesson learned was to include extra time for testing and stakeholder feedback. The team found it was important to see the product live and get direct feedback, and then tweak the platform as necessary.

     

    Case Study 2: Chapman University

    Robin Borough, Director of Talent Acquisition at Chapman University, shared insights from her many experiences with change management—and her top tip was a formula.

    Change Management Formula from Beckhard and Harris: Change (C) happens when Dissatisfaction (A) * Desirability (B) * Practicality (D) > Perceived Cost (X)

    “This formula is old, but everybody will be able to relate to it and see that it’s a real quick and dirty way to see if you can get the funding, and the sponsorship that you need, or if you need to prove something to get that funding and sponsorship,” said Robin. “‘C’ is the change. ‘A’ is the level of dissatisfaction with the status quo, and ‘B’ is the desirability of the change or proposed end state. ‘D’ is the practicality of the change—so are the steps to make this change practical and are we minimizing risk and disruption as much as possible? ‘X’ is the perceived cost of the change. For change to make sense, A * B * D has to be greater than X—meaning, I have to have a lot of dissatisfaction and a lot of desire for something different, and the plan has to be practical. If A, B, or D is zero, you’re out. Don’t even try to make the change. So much of what we’re doing is subjective, because there’s so many people and constituents involved with change management, so I thought this formula was an interesting way to think about it.”
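    If it helps to see the arithmetic spelled out, here is a minimal sketch of Robin’s formula in code. The 0-to-10 scoring scale, the example numbers, and the function name are illustrative assumptions rather than anything from the webinar.

    ```python
    # A minimal sketch of the change formula as Robin describes it. The scoring
    # scale and example numbers are invented for illustration.

    def change_is_viable(dissatisfaction: float, desirability: float,
                         practicality: float, perceived_cost: float) -> bool:
        """Return True when A * B * D exceeds the perceived cost X.

        If any of A, B, or D is zero, the product is zero and the change
        fails the test, just as Robin notes.
        """
        return dissatisfaction * desirability * practicality > perceived_cost

    # Strong dissatisfaction (8), strong desire (7), and a practical plan (6)
    # against a high perceived cost (150): 336 > 150, so the change is viable.
    print(change_is_viable(8, 7, 6, 150))  # True
    print(change_is_viable(8, 7, 0, 150))  # False: zero practicality kills it
    ```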

    Final Thoughts

    In the ever-evolving landscape of higher education, change is inevitable. The experiences shared by Central Oregon Community College and Chapman University underscore the significance of proactive change management, user-centric approaches, the value of learning from past successes and challenges—and how important it is to understand what you’re getting into from the start. As institutions embark on their journeys of transformation, these insights can serve as guiding principles for navigating the complexities of change in higher education. For more, check out this webinar on-demand.

     


  • Pay Equity Still Lags for Women Administrators – CUPA-HR


    by Julie Burrell | November 29, 2023

    An analysis of two decades’ worth of CUPA-HR data on gender and pay in higher ed administrative roles paints a troubling picture of pay equity. In 2022, women made up 51% of administrators in colleges and universities, but they were paid 93 cents for each dollar a man in an administrator position was paid. This represents an increase of just 3 cents from 2002, when women made 90 cents for each dollar a man was paid.

    Among chief human resources officers, the pay disparity is even wider. Though three in four CHROs (76%) are women, their pay in 2022 was only 89 cents for each dollar male CHROs were paid. Deputy CHROs who are women were paid only 83 cents, a figure that remained unchanged from 2002 through 2022.

    The Higher Ed Administrators: Trends in Diversity and Pay Equity From 2002 to 2022 report also found that people of color — women especially — are increasingly represented in administrative positions. Drawing on 10 years of data, CUPA-HR found that between 2012 and 2022, the representation of people of color in higher ed administration increased by 41%. In 2012, people of color comprised 13% of administrators and in 2022, 18% of administrators. Women of color went from comprising 7% of higher ed administrators in 2012 to 10% of higher ed administrators in 2022.

    Despite these gains in representation, women of most races and ethnicities are still paid less than White men in the same administrator positions.

    The Report’s Major Findings Include:

    • The past 20 years saw a 20% increase in the representation of women in administration, from 43% in 2002 to 51% in 2022, but pay equity for women has not kept pace. In 2002, women in administrator positions were paid 90 cents for each dollar men in administrator positions were paid. Two decades later, women in administrator positions were paid just 93 cents for each dollar men were paid. These wage gaps are not explained by the fact that women may have greater representation in lower-paying positions.
    • From 2012-2022, the representation of people of color in administrative roles increased by 41%. The biggest increases were among administrators of two or more races (290% increase) and Asian administrators (76%). Women of color have seen more than double the percentage increase in representation than men of color (54% increase for women versus 26% for men).
    • But people of color are still underrepresented in administrative positions. Using the percentage of people of color with U.S. graduate degrees (31%) as a comparison, we find that only 18% of higher ed administrators were people of color in 2022.
    • Women’s representation in executive roles increased, but pay inequity still exists. In 2022, women held one in three campus presidencies, an increase of 60% from 2002. In 2002, female presidents were paid 92 cents on the dollar to male presidents and saw only a 1-cent increase in the 20 years since. The worst pay equity for presidents was for Hispanic or Latina women, who were paid 82 cents per dollar paid to White men. In the same time span, the representation of women provosts increased, comprising nearly half (48%) of provosts in 2022. The gender pay gap narrowed as well: Female provosts were paid 91 cents on the dollar compared to male provosts in 2002, and in 2022, female provosts were paid 96 cents on the dollar compared to male provosts.
    • CHRO gender pay equity remains low. In 2022, three in four (76%) CHROs were women, with White women representing 60%. In 2002, female CHROs were paid 86 cents for each dollar male CHROs were paid. In 2022, female CHROs were paid only 89 cents for each dollar male CHROs were paid.

    Addressing the Administrative Pay Gap

    Addressing pay inequity and increasing the representation of people of color among higher ed administrators requires long-term solutions like conducting pay analyses. CUPA-HR’s DataOnDemand for the Administrators in Higher Education Survey features the most comprehensive data available on higher ed administrator salaries, as well as data on pay equity and representation for women and people of color for every administrative position.

    Recruiting a more diverse pool of faculty candidates and mitigating bias in faculty promotions are also important to succession planning, as one notable path to the presidency is to start off as a faculty member, ascend to dean, then to provost, and then to president.

    You also might consider what talent pipeline programs exist on your campus. For inspiration, see these models of internal talent development:




  • How it Breaks in Subtle Ways


    In my last post, I explained how generative AI memory works and why it will always make mistakes without a fundamental change in its foundational technology. I also gave some tips for how to work around and deal with that problem to safely and productively incorporate imperfect AI into EdTech (and other uses). Today, I will draw on the memory issue I wrote about last time as a case study of why embracing our imperfect tools also means recognizing where they are likely to fail us and thinking hard about dealing realistically with their limitations.

    This is part of a larger series I’m starting on a term of art called “product/market fit.” The simplest explanation of the idea is the degree to which the thing you’re building is something people want and are willing to pay the cost for, monetary or otherwise. In practice, achieving product/market fit is complex, multifaceted, and hard. This is especially true in a sector like education, where different contextual details often create the need for niche products, where the buyer, adopter, and user of the product are not necessarily the same, and where measurable goals to optimize your product for are hard to find and often viewed with suspicion.

    Think about all the EdTech product categories that were supposed to be huge but disappointed expectations. MOOCs. Learning analytics. E-portfolios. Courseware platforms. And now, possibly OPMs. The list goes on. Why didn’t these product categories achieve the potential that we imagined for them? There is no one answer. It’s often in the small details specific to each situation. AI in action presents an interesting use case, partly because it’s unfolding right now, partly because it seems so easy, and partly because it’s odd and unpredictable, even to the experts. I have often written about “the miracle, the grind, and the wall” with AI. We will look at a couple of examples of moving from the miracle to the grind. These moments provide good lessons in the challenges of product/market fit.

    In my next post, I’ll examine product/market fit for universities in a changing landscape, focusing on applying CBE to an unusual test case. In the third post, I’ll explore product/market fit for EdTech interoperability standards and facilitating the growth of a healthier ecosystem.

    Khanmigo: the grind behind the product

    Khan Academy’s Kristen DiCerbo did us all a great service by writing openly about the challenges of producing a good AI lesson plan generator. They started with prompt engineering. Well-written prompts are miracles. They’re like magic spells. Generating a detailed lesson plan in seconds with a well-written prompt is possible. But how good is that lesson plan? How well did Khanmigo’s early prompts produce the lesson plans?

    Kristen writes,

    At first glance, it wasn’t bad. It produced what looked to be a decent lesson plan—at least on the surface. However, on closer inspection, we saw some issues, including the following:

    • Lesson objectives just parroted the standard
    • Warmups did not consistently cover the most logical prerequisite skills
    • Incorrect answer keys for independent practice
    • Sections of the plan were unpredictable in length and format
    • The model seemed to sometimes ignore parts of the instructions in the prompt

    Prompt Engineering a Lesson Plan: Harnessing AI for Effective Lesson Planning

    You can’t tell the quality of the AI’s lesson plans without having experts examine them closely. You also want feedback from people who will actually use those lesson plans. I guarantee they will find problems that you will miss. Every time. Remember, the ultimate goal of product/market fit is to make something that the intended adopters will actually want. People will tolerate imperfections in a product. But which ones? What’s most important to them? How will they use the product? You can’t answer these questions confidently without the help of actual humans who would be using the product.

    At any rate, Khan Academy realized their early prompt engineering attempts had several shortcomings. Here’s the first:

    Khanmigo didn’t have enough information. There were too many undefined details for Khanmigo to infer and synthesize, such as state standards, target grade level, and prerequisites. Not to mention limits to Khanmigo’s subject matter expertise. This resulted in lesson plans that were too vague and/or inaccurate to provide significant value to teachers.

    Prompt Engineering a Lesson Plan: Harnessing AI for Effective Lesson Planning

    Read that passage carefully. With each type of information or expertise, ask yourself, “Where could I find that? Where is it written down in a form the AI can digest?” The answer is different for each one. How can the AI learn more about what state standards mean? Or about target grade levels? Prerequisites? Subject-matter expertise for each subject? No matter how much ChatGPT seems to know, it doesn’t know everything. And it is often completely ignorant about anything that isn’t well-documented on the internet. A human educator has to understand all these topics to write good lesson plans. A synthetic one does too. But a synthetic educator doesn’t have experience to draw on. It only has whatever human educators have publicly published about their experiences.

    Think about the effort involved in documenting all these various types of knowledge for a synthetic educator. (This, by the way, is very similar to why learning analytics disappointed as a product category. The software needs to know too much that wasn’t available in the systems to make sense of the data.)

    Here’s the second challenge that the Khanmigo team faced:

    We were trying to accomplish too much with a single prompt. The longer a prompt got and the more detailed its instructions were, the more likely it was that parts of the prompt would be ignored. Trying to produce a document as complex and nuanced as a comprehensive lesson plan with a single prompt invariably resulted in lesson plans with neglected, unfocused, or entirely missing parts.

    Prompt Engineering a Lesson Plan: Harnessing AI for Effective Lesson Planning

    I suspect this is a subtle manifestation of the memory problem I wrote about in my last post. Even with a relatively short text like a complex prompt, the AI couldn’t hold onto all the details. The Khanmigo team ended up breaking up the prompt into smaller pieces. This produced better results because the AI could “concentrate on”—or remember the details of—one step at a time. I’ll add that this approach provides more opportunities to put humans in the loop. An expert—or a user—can examine and modify the output of each step.
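    To make the “smaller pieces” idea concrete, here is a hedged sketch of what chaining a lesson-plan prompt might look like. The section list, the prompt wording, and the call_llm() helper are all hypothetical placeholders; this is not Khan Academy’s actual pipeline, just an illustration of the shape of the approach.

    ```python
    # A hedged sketch of prompt chaining: one small prompt per lesson-plan
    # section instead of one giant prompt. The section names, prompt wording,
    # and call_llm() helper are hypothetical, not Khanmigo's actual code.

    def call_llm(prompt: str) -> str:
        """Placeholder: swap in your chat-completion client of choice."""
        return f"[model output for: {prompt[:40]}...]"

    SECTIONS = ["learning objectives", "warm-up", "guided practice",
                "independent practice", "answer key"]

    def draft_lesson_plan(standard: str, grade: str) -> dict:
        plan = {}
        context = f"Standard: {standard}\nGrade level: {grade}"
        for section in SECTIONS:
            # Each call only has to hold one section's worth of instructions,
            # plus the sections already drafted, which it receives explicitly.
            prompt = (
                f"{context}\n\n"
                f"Sections drafted so far:\n{plan}\n\n"
                f"Write only the {section} for this lesson. Be specific and brief."
            )
            plan[section] = call_llm(prompt)
        return plan
    ```

    Each step’s output can be shown to an expert or a user before the next step runs, which is exactly where those human-in-the-loop opportunities come from.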

    We fantasize about AI doing work for us. In some cases, it’s not just a fantasy. I use AI to be more productive literally every day. But it fails me often. We can’t know what it will take for AI to solve any particular problem without looking closely at the product’s capabilities and the user’s very specific needs. This is product/market fit.

    Learning design in the real world

    Developing skill in product/market fit is hard. Think about all those different topics the Khanmigo team needed to know, and to understand their relevance to lesson planning well enough to diagnose the gaps in the AI’s understanding.

    Refining a product is also inherently iterative. No matter how good you are at product design, how well you know your audience, and how brilliant you are, you will be wrong about some of your ideas early on. Because people are complicated. Organizations are complicated. The skills workers need are often complicated and non-obvious. And the details of how the people need to work, individually and together, are often distinctive in ways that are invisible to them. Most people only know their own context. They take a lot for granted. Good product people spend their time uncovering these invisible assumptions and finding the commonalities and the differences. This is always a discovery process that takes time.

    Learning design is a classic case of this problem. People have been writing and adopting learning design methodologies longer than I’ve been alive. The ADDIE model—”Analyze, Design, Develop, Implement, and Evaluate”—was created by Florida State University for the military in the 1970s. “Backward Design” was invented in 1949 by Ralph W. Tyler. Over the past 30 years, I’ve seen a handful of learning design or instructional design tools that attempt to scaffold and enforce these and other design methodologies. I’ve yet to see one get widespread adoption. Why? Poor product/market fit.

    While the goal of learning design (or “instructional design,” to use the older term) is to produce a structured learning experience, the thought process of creating it is non-linear and iterative. As we develop and draft, we see areas that need tuning or improving. We move back and forth across the process. Nobody ever follows learning design methodologies strictly in practice. And I’m talking about trained learning design professionals. Untrained educators stray even further from the model. That’s why the two most popular learning design tools, by far, are Microsoft Word and Google Docs.

    If you’ve ever used ChatGPT and prompt engineering to generate the learning design of a complex lesson, you’ve probably run into unexpected limits to its usefulness. The longer you spend tinkering with the lesson, the more your results start to get worse rather than better. It’s the same problem the Khanmigo team had. Yes, ChatGPT and Claude can now have long conversations. But both research and experience show us that they tend to forget the stuff in the middle. By itself, ChatGPT is useful in lesson design to a point. But I find that when writing complex documents, I paste different pieces of my conversation into Word and stitch them together.

    And that’s OK. If that process saves me design time, that’s a win. But there are use cases where the memory problems are more serious in ways that I haven’t heard folks talking about yet.

    Combining documents

    Here’s a very common use case in learning design:

    First, you start with a draft of a lesson or a chapter that already exists. Maybe it’s a chapter from an OpenStax textbook. Maybe it’s a lesson that somebody on your team wrote a while ago that needs updating. You like it, but you don’t love it.

    You have an article with much of the information you want to add to the new version you want to create. If you were using a vendor’s textbook, you’d have to require the students to read the outdated lesson and then read the article separately. But this is content you’re allowed to revise. If you’re using the article in a way that doesn’t violate copyright—for example, because you’re using it to capture publicly known facts that have changed rather than something novel in the article itself—you can simply use the new information to revise the original lesson. That was often too much work the old way. But now we have ChatGPT, so, you know…magic.

    While you’re at it, you’d like to improve the lesson’s diversity, equity, and inclusion (DEI). You see opportunities to write the chapter in ways that represent more of your students and include examples relevant to their lived experiences. You happen to have a document with a good set of DEI guidelines.

    So you feed your original chapter, new article, and DEI guidelines to the AI. “ChatGPT, take the original lesson and update it with the new information from the article. Then apply the DEI guidelines, including examples in topics X, Y, and Z that represent different points of view. Abracadabra!”

    You can write a better prompt than this one. But no matter how carefully you engineer your prompt, you will be disappointed with the results. Don’t take my word for it. Try it yourself.

    Why does this happen? Because the generative AI doesn’t “remember” these three documents perfectly. Remember what I wrote in my last article:

    The LLMs can be “trained” on data, which means they store information like how “beans” vs. “water” modify the likely meaning of “cool,” what words are most likely to follow “Cool the pot off in the,” and so on. When you hear AI people talking about model “weights,” this is what they mean.

    Notice, however, that none of the original sentences are stored anywhere in their original form. If the LLM is trained on Wikipedia, it doesn’t memorize Wikipedia. It models the relationships among the words using combinations of vectors (or “matrices”) and probabilities. If you dig into the LLM looking for the original Wikipedia article, you won’t find it. Not exactly. The AI may become very good at capturing the gist of the article given enough billions of those tensor/workers. But the word-for-word article has been broken down and digested. It’s gone.

    How You Will Never Be Able to Trust Generative AI (and Why That’s OK)

    Your lesson and articles are gone. They’ve been digested. The AI remembers them, but it’s designed to remember the meaning, not the words. It’s not metaphorically sitting down with the original copy and figuring out where to insert new information or rewrite a paragraph. That may be fine. Maybe it will produce something better. But it’s a fundamentally different process than human editing. We won’t know if the results it generates have good product/market fit until we test it out with folks.

    To the degree that you need to preserve the fidelity of the original documents, you’ve got a problem. And the more you push generative AI to do this kind of fine-tuning work across multiple documents, the worse it gets. You’re running headlong into one of your synthetic co-worker’s fundamental limitations. Again, you might get enough value from it to achieve a net gain in productivity. But you might not because this seemingly simple use case is pushing hard on functionality that hasn’t been designed, tested, and hardened for this kind of use.

    Engineering around the problem

    Any product/market fit problem has two sides: product and market. On the market side, how good is good enough? I’ve specifically positioned my ALDA project as producing a first draft with many opportunities for a human in the loop. This is a common approach we’re seeing in educational content generation right now, for good reasons. We’re reducing the risk to the students. Risk is one reason the market might reject the product.

    Another is failing to deliver the promised time savings. If the combination of the documents is too far off from the humans’ goal, it will be rejected. Its speed will not make up for the time required for the human to fix its mistakes. We have to get as close to the human need as possible, mitigate the consequences of the remaining imperfections, and test to see if we’ve achieved a cost/benefit ratio good enough that users will adopt the product.

    There is no perfect way to solve the memory problem. You will always need a human in the loop. But we could make a good step forward if we could get the designs solid enough to be directly imported into the learning platform and fine-tuned there, skipping the word processor step. Being able to do so requires tackling a host of problems, including (but not limited to) the memory issue. We don’t need the AI to get the combination of these documents perfect, but we do need it to get close enough that our users don’t need to dump the output into a full word processor to rewrite the draft.
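    To give a flavor of what that middle-ground engineering might look like, here is a hedged sketch that leans less on the model’s imperfect memory by revising the lesson one section at a time, handing each call only a small excerpt of the article plus the guidelines. The helper functions (call_llm, split_into_sections, find_relevant_excerpt) are hypothetical placeholders, not a tested pipeline, and a human still reviews the stitched result.

    ```python
    # A hedged sketch of working around the memory problem: revise the lesson
    # one section at a time so each call only holds a small slice of text.
    # call_llm(), split_into_sections(), and find_relevant_excerpt() are
    # hypothetical placeholders, not a tested pipeline.

    def call_llm(prompt: str) -> str:
        return f"[revised section based on a {len(prompt)}-character prompt]"

    def split_into_sections(lesson_text: str) -> list[str]:
        # Naive split on blank lines; a real splitter would key off headings.
        return [s for s in lesson_text.split("\n\n") if s.strip()]

    def find_relevant_excerpt(section: str, article: str, max_chars: int = 1500) -> str:
        # Crude placeholder relevance filter; a real one might use embeddings.
        return article[:max_chars]

    def revise_lesson(lesson_text: str, article: str, dei_guidelines: str) -> str:
        revised = []
        for section in split_into_sections(lesson_text):
            prompt = (
                "Rewrite the lesson section below, preserving its wording except "
                "where the article excerpt updates a fact or the DEI guidelines "
                "suggest a more representative example.\n\n"
                f"SECTION:\n{section}\n\n"
                f"ARTICLE EXCERPT:\n{find_relevant_excerpt(section, article)}\n\n"
                f"DEI GUIDELINES:\n{dei_guidelines}"
            )
            revised.append(call_llm(prompt))
        # The human stays in the loop: review the stitched draft before it ships.
        return "\n\n".join(revised)
    ```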

    When I raised this problem with a colleague who is a digital humanities scholar and an expert in AI, he paused before replying. “Nobody is working on this kind of problem right now,” he said. “On one side, AI experts are experimenting with improving the base models. On the other side, I see articles all the time about how educators can write better prompts. Your problem falls in between those two.”

    Right. As a sector, we’re not discussing product/market fit for particular needs. The vendors are, each within their own circumscribed world. But on the customer side? I hear people tell me they’re conducting “experiments.” It sounds a bit like when university folk told me they were “working with learning analytics,” which turned out to mean that they were talking about working with learning analytics. I’m sure there are many prompt engineering workshops and many grants being written for fancy AI solutions that sound attractive to the National Science Foundation or whoever the grantor happens to be. But in the middle ground? Making AI usable to solve specific problems? I’m not seeing much of that yet.

    The document combination problem can likely be addressed adequately well through a combination of approaches that improve the product and mitigate the consequences of the imperfections to make them more tolerable for the market. After consulting with some experts, I’ve come up with a combination of approaches to try first. Technologically, I know it will work. It doesn’t depend on cutting-edge developments. Will the market accept the results? Will the new approach be better than the old one? Or will it trip over some deal-breaker, like so many products before it?

    I don’t know. I feel pretty good about my hypothesis. But I won’t know until real learning designers test it on real projects.

    We have a dearth of practical, medium-difficulty experiments with real users right now. That is a big, big problem. It doesn’t matter how impressive the technology is if its capabilities aren’t the ones the users need to solve real-world problems. You can’t fix this gap with symposia, research grants, or even EdTech companies that have the skills but not necessarily the platform or business model you need.

    The only way to do it is to get down into the weeds. Try to solve practical problems. Get real humans to tell you what does and doesn’t work for them in your first, second, third, fourth, and fifth tries. That’s what the ALDA project is all about. It’s not primarily about the end product. I am hopeful that ALDA itself will prove to be useful. But I’m not doing it because I want to commercialize a product. I’m doing it to teach and learn about product/market fit skills with AI in education. We need many more experiments like this.

    We put too much faith in the miracle, forgetting the grind and the wall are out there waiting for us. Folks in the education sector spend too much time staring at the sky, waiting for the EdTech space aliens to come and take us all to paradise.

    I suggest that at least some of us should focus on solving today’s problems with today’s technology, getting it done today, while we wait for the aliens to arrive.


  • Reflections from the Higher Education for Good Book Release Celebration – Teaching in Higher Ed


    What a way to start my week!

    On November 20, 2023, I attended an online launch celebration event for a magnificent project. The book Higher Education for Good: Teaching and Learning Futures brought together 71 authors around the globe to create 27 chapters, as well as multiple pieces of artwork and poetry. Editors Laura Czerniewicz and Catherine Cronin shared their reflections on writing the book and invited chapter authors, as well as Larry Onokpite, the book’s editor, to celebrate the release and opportunities for collaboration. In total, the work represents contributions from 29 countries across six continents. Laura Czerniewicz was invited to talk about the book by the Academy of Science of South Africa (ASSAf), where she describes the values of inclusion woven throughout this project.

    Higher Ed for Good Aims

    At Monday’s book launch, Laura shared how the authors aimed to write about tenets directed toward the greater aims of the book. Catherine described the call for authors to engage in this project, such that the resulting collection would help people:

    • Acknowledge despair
    • Engage in resistance
    • Imagine alternative futures and…
    • Foster hope and courage

    Laura stressed that articulating what we stand for, and not simply what we are against, is essential to facilitating systemic change. Quoting Ruha Benjamin, Laura described ways to courageously imagine the future:

    Only by shifting our imagination can we begin to think of a world that is more egalitarian, less extractive, and more habitable for everyone, not just a small elite.

    It was wonderful to see the community who showed up to help celebrate this magnificent accomplishment. Toward the end of the conversations, someone asked about what might be next for this movement. Frances Bell responded by joking that she wasn’t sure she was necessarily going to answer the question, as she is prone to do. Instead, she described her use of ‘a slow ontology,’ a phrase which quickly resonated with me, even though I didn’t know exactly what it meant.

    In some brief searching, I discovered a bit more about slow ontology. My novice understanding is that slow ontology asks the question of what lives might look like, were we to live them slowly and resist the socialization of speed as productivity and self-worth. Ulmer offers a look at a slow ontology for writing, while Mol uses slowness to analyze archeological artifacts. One piece I absolutely want to revisit is Mark Carrigan’s Beyond fast and slow: temporal ontology in critical higher education scholarship.

    Next Steps

    I’ll have the honor, soon, of interviewing Laura and Catherine for the Teaching in Higher Ed podcast. I’m ~30% through Higher Education for Good and am glad I don’t have to rush through the reading too quickly. I mentioned as a few of us remained online together after the book release celebration that reading Higher Education for Good and Dave Cormier’s forthcoming Learning in a Time of Abundance has been an interesting juxtaposition. Rissa Sorensen-Unruh described a similar serendipity of reading Belonging, by Geoffrey Cohen at the same time as Rebecca Pope-Ruark’s Unraveling Faculty Burnout. After skimming the book description of Belonging, I instantly bought it… adding it to the quite-long digital to-read stack. I suppose that while I struggle with slowing down, that challenge doesn’t apply when it comes to my reading practice.



  • How You Will Never Be Able to Trust Generative AI (and Why That’s OK)


    In my last post, I introduced the idea of thinking about different generative AI models as coworkers with varying abilities as a way to develop a more intuitive grasp of how to interact with them. I described how I work with my colleagues Steve ChatGPT, Claude Anthropic, and Anna Bard. This analogy can hold (to a point) even in the face of change. For example, in the week since I wrote that post, it appears that Steve has finished his dissertation, which means that he’s catching up on current events to be more like Anna and has more time for long discussions like Claude. Nevertheless, both people and technologies have fundamental limits to their growth.

    In this post, I will explain “hallucination” and other memory problems with generative AI. This is one of my longer ones; I will take a deep dive to help you sharpen your intuitions and tune your expectations. But if you’re not up for the whole ride, here’s the short version:

    Hallucinations and imperfect memory problems are fundamental consequences of the architecture that makes current large language models possible. While these problems can be reduced, they will never go away. AI based on today’s transformer technology will never have the kind of photographic memory a relational database or file system can have. When vendors tout that you can now “talk to your data,” they really mean talk to Steve, who has looked at your data and mostly remembers it.

    You should also know that the easiest way to mitigate this problem is to throw a lot of carbon-producing energy and microchip-cooling water at it. Microsoft is literally considering building nuclear reactors to power its AI. Their global water consumption post-AI has spiked 34% to 1.7 billion gallons.

    This brings us back to the coworker analogy. We know how to evaluate and work with our coworkers’ limitations. And sometimes, we decide not to work with someone or hire them for a particular job because the fit is not good.

    While anthropomorphizing our technology too much can lead us astray, it can also provide us with a robust set of intuitions and tools we already have in our mental toolboxes. As my science geek friends say, “All models are wrong, but some are useful.” Combining those models or analogies with an understanding of where they diverge from reality can help you clear away the fear and the hype to make clear-eyed decisions about how to use the technology.

    I’ll end with some education-specific examples to help you determine how much you trust your synthetic coworkers with various tasks.

    Now we dive into the deep end of the pool. When working on various AI projects with my clients, I have found that this level of understanding is worth the investment for them because it provides a practical framework for designing and evaluating immediate AI applications.

    Are you ready to go?

    How computers “think”

    About 50 years ago, scholars debated whether and in what sense machines could achieve “intelligence,” even in principle. Most thought they could eventually sound pretty clever and act rather human. But could they become sentient? Conscious? Do intelligence and competence live as “software” in the brain that could be duplicated in silicon? Or is there something about them that is fundamentally connected to the biological aspects of the brain? While this debate isn’t quite the same as the one we have today around AI, it does have relevance. Even in our case, where the questions we’re considering are less lofty, the discussions from back then are helpful.

    Philosopher John Searle famously argued against strong AI in an argument called “The Chinese Room.” Here’s the essence of it:

    Imagine sitting in a room with two slots: one for incoming messages and one for outgoing replies. You don’t understand Chinese, but you have an extensive rule book written in English. This book tells you exactly how to respond to Chinese characters that come through the incoming slot. You follow the instructions meticulously, finding the correct responses and sending them out through the outgoing slot. To an outside observer, it looks like you understand Chinese because the replies are accurate. But here’s the catch: you’re just following a set of rules without actually grasping the meaning of the symbols you’re manipulating.

    This is a nicely compact and intuitive explanation of rule-following computation. Is the person outside the room speaking to something that understands Chinese? If so, what is it? Is it the man? No, we’ve already decided he doesn’t understand Chinese. Is it the book? We generally don’t say books understand anything. Is it the man/book combination? That seems weird, and it also doesn’t account for the response. We still have to put the message through the slot. Is it the man/book/room? Where is the “understanding” located? Remember, the person on the other side of the slot can converse perfectly in Chinese with the man/book/room. But where is the fluent Chinese speaker in this picture?

    If we carry that idea forward to today, however much “Steve” may seem fluent and intelligent in your “conversations,” you should not forget that you’re talking to the man/book/room.

    Well. Sort of. AI has changed since 1980.

    How AI “thinks”

    The book in Searle’s Chinese room evokes algorithms. Recipes. For every input, there is one recipe for the perfect output. All recipes are contained in a single bound book. Large language models (LLMs)—the basis for both generative AI and semantic search like Google’s—work somewhat differently. They are still Chinese rooms. But they’re a lot more crowded.

    The first thing to understand is that, like the book in the Chinese room, a large language model is a large model of a language. LLMs don’t even “understand” English (or any other language) at all. They convert words into their native language: math.

    (Don’t worry if you don’t understand the next few sentences. I’ll unpack the jargon. Hang in there.)

    Specifically, LLMs use vectors. Many vectors. And those vectors are managed by many different “tensors,” which are computational units you can think of as people in the room handling portions of the recipe. They do each get to exercise a little bit of judgment. But just a little bit.

    Suppose the card that came in the slot of the room had the English word “cool” on it. The room has not just a single worker but billions, or tens of billions, or hundreds of billions of them. (These are the tensors.) One worker has to rate the word on a scale of 10 to -10 on where “cool” falls on the scale between “hot” and “cold.” It doesn’t know what any of these words mean. It just knows that “cool” is a -7 on that scale. (This is the “vector.”) Maybe that worker, or maybe another one, also has to evaluate where it is on the scale of “good” to “bad.” It’s maybe 5.

    We don’t yet know whether the word “cool” on the card refers to temperature or sentiment. So another worker looks at the word that comes next. If the next word is “beans,” then it assigns a higher probability that “cool” is on the “good/bad” scale. If it’s “water,” on the other hand, it’s more likely to be temperature. If the next word is “your,” it could be either, but we can begin to guess the next word. That guess might be assigned to another tensor/worker.

    Imagine this room filled with a bazillion workers, each responsible for scoring vectors and assigning probabilities. The worker who handles temperature might think there’s a 50/50 chance the word is temperature-related. But once we add “water,” all the other workers who touch the card know there’s a higher chance the word relates to temperature rather than goodness.

    The large language models behind ChatGPT have hundreds of billions of these tensor/workers handing off cards to each other and building a response.

    This is an oversimplification because both the tensors and the math are hard to get exactly right in the analogy. For example, it might be more accurate to think of the tensors working in groups to make these decisions. But the analogy is close enough for our purposes. (“All models are wrong, but some are useful.”)

    It doesn’t seem like it should work, does it? But it does, partly because of brute force. As I said, the bigger LLMs have hundreds of billions of workers interacting with each other in complex, specialized ways. Even though they don’t represent words and sentences in any form that we might intuitively recognize as “understanding,” they are uncannily good at interpreting our input and generating output that looks like understanding and thought to us.
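    To make the vector idea a bit more concrete, here is a toy sketch with hand-picked scores on two made-up scales. The numbers are invented for illustration; real models learn thousands of dimensions from data rather than using labeled scales like these.

    ```python
    # A toy sketch of the vector idea: each word gets hand-picked scores on two
    # made-up scales (temperature: hot=+10 to cold=-10, sentiment: good=+10 to
    # bad=-10). The numbers are invented; real models learn their dimensions.

    WORD_SCORES = {
        "cool":  {"temperature": -7, "sentiment": 5},
        "water": {"temperature": -3, "sentiment": 0},
        "beans": {"temperature": 0,  "sentiment": 2},
    }

    def likely_sense_of_cool(next_word: str) -> str:
        """Guess whether 'cool' is about temperature or sentiment from one neighbor."""
        neighbor = WORD_SCORES.get(next_word, {"temperature": 0, "sentiment": 0})
        # If the neighboring word leans more strongly toward temperature than
        # sentiment, shift the interpretation of "cool" the same way.
        if abs(neighbor["temperature"]) > abs(neighbor["sentiment"]):
            return "temperature"
        return "sentiment"

    print(likely_sense_of_cool("water"))  # temperature
    print(likely_sense_of_cool("beans"))  # sentiment
    ```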

    How LLMs “remember”

    The LLMs can be “trained” on data, which means they store information like how “beans” vs. “water” modify the likely meaning of “cool,” what words are most likely to follow “Cool the pot off in the,” and so on. When you hear AI people talking about model “weights,” this is what they mean.

    Notice, however, that none of the original sentences are stored anywhere in their original form. If the LLM is trained on Wikipedia, it doesn’t memorize Wikipedia. It models the relationships among the words using combinations of vectors (or “matrices”) and probabilities. If you dig into the LLM looking for the original Wikipedia article, you won’t find it. Not exactly. The AI may become very good at capturing the gist of the article given enough billions of those tensor/workers. But the word-for-word article has been broken down and digested. It’s gone.

    Three main techniques are available to work around this problem. The first, which I’ve written about before, is called Retrieval Augmented Generation (RAG). RAG preprocesses content into the vectors and probabilities that the LLM understands. This gives the LLM a more specific focus on the content you care about. But it’s still been digested into vectors and probabilities. A second method is to “fine-tune” the model, which predigests the content like RAG but lets the model itself metabolize that content. The third is to increase what’s known as the “context window,” which you experience as the length of a single conversation. If the context window is long enough, you can paste the content right into it…and have the system digest the content and turn it into vectors and probabilities.
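    To make the first of those techniques slightly more concrete, here is a deliberately tiny sketch of the shape of RAG: chunk the source content, score each chunk’s relevance to the question, and paste only the best chunks into the prompt. The word-overlap scoring stands in for real embeddings, and call_llm() is a placeholder, so treat this as an illustration rather than a production recipe.

    ```python
    # A deliberately tiny sketch of the RAG idea. Word overlap stands in for
    # embedding similarity, and call_llm() is a placeholder.

    def call_llm(prompt: str) -> str:
        return f"[answer grounded in a {len(prompt)}-character prompt]"

    def chunk(text: str, size: int = 400) -> list[str]:
        return [text[i:i + size] for i in range(0, len(text), size)]

    def relevance(question: str, passage: str) -> int:
        # Crude stand-in for embedding similarity: count shared words.
        return len(set(question.lower().split()) & set(passage.lower().split()))

    def answer_with_rag(question: str, source_text: str, top_k: int = 3) -> str:
        chunks = chunk(source_text)
        best = sorted(chunks, key=lambda c: relevance(question, c), reverse=True)[:top_k]
        prompt = (
            "Answer using only the passages below. If they don't contain the "
            "answer, say so.\n\n"
            + "\n---\n".join(best)
            + f"\n\nQuestion: {question}"
        )
        return call_llm(prompt)
    ```

    Fine-tuning and longer context windows change where the digestion happens, but not the fact of it.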

    We’re used to software that uses file systems and databases with photographic memories. LLMs are (somewhat) more like humans in the sense that they can “learn” by indexing salient features and connecting them in complex ways. They might be able to “remember” a passage, but they can also forget or misremember.

    The memory limitation cannot be fixed using current technology. It is baked into the structure of the tensor-based networks that make LLMs possible. If you want a photographic memory, you’d have to avoid passing through the LLM since it only “understands” vectors and probabilities. To be fair, work is being done to reduce hallucinations. This paper provides a great survey. Don’t worry if it’s a bit technical. The informative part for a non-technical reader is all the different classifications of “hallucinations.” Generative AI has a variety of memory problems. Research is underway to mitigate them. But we don’t know how far those techniques will get us, given the fundamental architecture of large language models.

    We can mitigate these problems by improving the three methods I described. But that improvement comes with two catches. The first is that it will never make the system perfect. The second is that reduced imperfection often requires more energy for the increased computing power and more water to cool the processors. The race for larger, more perfect LLMs is terrible for the environment. And we may not need that extra power and fidelity except for specialized applications. We haven’t even begun to capitalize on its current capabilities. We should consider our goals and whether the costliest improvements are the ones we need right now.

    To do that, we need to reframe how we think of these tools. For example, the word “hallucination” is loaded. Can we more easily imagine working with a generative AI that “misremembers”? Can we accept that it “misremembers” differently than humans do? And can we build productive working relationships with our synthetic coworkers while accommodating and accounting for their differences?

    Here too, the analogy is far from perfect. Generative AIs aren’t people. They don’t fit the intention of diversity, equity, and inclusion (DEI) guidelines. I am not campaigning for AI equity. That said, DEI is not only about social justice. It is also about how we throw away human potential when we choose to focus on particular differences and frame them as “deficits” rather than recognizing the strengths that come from a diverse team with complementary strengths.

    Here, the analogy holds. Bringing a generative AI into your team is a little bit like hiring a space alien. Sometimes it demonstrates surprising unhuman-like behaviors, but it’s human-like enough that we can draw on our experiences working with different kinds of humans to help us integrate our alien coworker into the team.

    That process starts with trying to understand their differences, though it doesn’t end there.

    Emergence and the illusion of intelligence

    To get the most out of our generative AI, we have to maintain a double vision of experiencing the interaction with the Chinese room from the outside while picturing what’s happening inside as best we can. It’s easy to forget that the uncannily good, even “thoughtful” and “creative” answers we get from generative AI are produced by a system of vectors and probabilities like the one I described. How does that work? What could possibly be going on inside the room to produce such results?

    AI researchers talk about “emergence” and “emergent properties.” This idea has been frequently observed in biology. The best, most accessible exploration of it that I’m aware of (and a great read) is Steven Johnson’s book Emergence: The Connected Lives of Ants, Brains, Cities, and Software. The example you’re probably most familiar with is ant colonies (although slime molds are surprisingly interesting).

    Imagine a single ant, an explorer venturing into the unknown for sustenance. As it scuttles across the terrain, it leaves a faint trace, a chemical scent known as a pheromone. This trail, barely noticeable at first, is the starting point of what will become colony-wide coordinated activity.

    Soon, the ant stumbles upon a food source. It returns to the nest, and as it retraces its path, the pheromone trail becomes more robust and distinct. Back at the colony, this scented path now whispers a message to other ants: “Follow me; there’s food this way!” We might imagine this strengthened trail as an increased probability that the path is relevant for finding food. Each ant is acting independently. But it does so influenced by pheromone input left by other ants and leaves output for the ants that follow.

    What happens next is a beautiful example of emergent behavior. Other ants, in their own random searches, encounter this scent path. They follow it, reinforcing the trail with their own pheromones if they find food. As more ants travel back and forth, a once-faint trail transforms into a bustling highway, a direct line from the nest to the food.

    But the really amazing part lies in how this path evolves. Initially, several trails might have been formed, heading in various directions toward various food sources. Over time, a standout emerges – the shortest, most efficient route. It’s not the product of any single ant’s decision. Each one is just doing its job, minding its own business. The collective optimization is an emergent phenomenon. The shorter the path, the quicker the ants can travel, reinforcing the most efficient route more frequently.

    This efficiency isn’t static; it’s adaptable. If an obstacle arises, disrupting the established path, the ants don’t falter. They begin exploring again, laying down fresh trails. Before long, a new optimal path emerges, skirting the obstacle as the colony dynamically adjusts to its changing environment.

    This is a story of collective intelligence, emerging not from a central command but from the sum of many small, individual actions. It’s also a kind of Chinese room. When we say “collective intelligence,” where does the intelligence live? What is the collective thing? The hive? The hive-and-trails? And in what sense is it intelligent?
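    If you want to watch that mechanism play out, here is a toy simulation with made-up numbers: two paths to the food, ants choosing in proportion to pheromone, and shorter round trips reinforcing their path more often. No individual ant decides anything, yet most of the pheromone ends up on the short path.

    ```python
    # A toy simulation of the ant story with made-up numbers. Ants pick a path
    # in proportion to pheromone; deposits land more often on the short path
    # because its round trips are faster; trails also evaporate a little.
    import random

    random.seed(0)
    pheromone = {"short": 1.0, "long": 1.0}   # faint, equal starting trails
    trip_time = {"short": 2, "long": 5}       # ticks per round trip on each path

    for tick in range(1, 2001):
        # One ant leaves the nest each tick and follows its nose. No ant knows
        # which path is shorter.
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        # Crude stand-in for trip length: a deposit lands only on ticks divisible
        # by the path's trip time, so the short path gets reinforced more often.
        if tick % trip_time[path] == 0:
            pheromone[path] += 1.0
        for p in pheromone:
            pheromone[p] *= 0.999  # evaporation

    share = pheromone["short"] / sum(pheromone.values())
    print(f"Share of pheromone on the short path: {share:.0%}")  # well above half
    ```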

    We can make a (very) loose analogy between LLMs being trained and hundreds of billions of ants laying down pheromone trails as they explore the content terrain they find themselves in. When they’re asked to generate content, it’s a little bit like sending you down a particular pheromone path. This process of leading you down paths that were created during the AI model’s training is what’s called “inference” in the LLM. The energy required to send you down an established path is much less than the energy needed to find the paths in the first place. And once the paths are established, the results of traversing them can seem like science fiction: the LLM acts as if there is a single adaptive intelligence at work even though, inside the Chinese room, there is no such thing. Capabilities emerge from the patterns that all those independent workers are creating together.

    Again, all models are wrong, but some are useful. My analogy substantially oversimplifies how LLMs work and how surprising behaviors emerge from those many billions of workers, each doing its own thing. The truth is that even the people who build LLMs don’t fully understand their emergent behaviors.

    That said, understanding the basic mechanism is helpful because it provides a reality check and some insight into why “Steve” just did something really weird. Just as transformer networks produce surprisingly good but imperfect “memories” of the content they’re given, we should expect to hit limits to gains from emergent behaviors. While our synthetic coworkers are getting smarter in somewhat unpredictable ways, emergence isn’t magic. It’s a mechanism driven by certain kinds of complexity. It is unpredictable. And not always in the way that we want it to be.

    Also, all that complexity comes at a cost. A dollar cost, a carbon cost, a water cost, a manageability cost, and an understandability cost. The default path we’re on is to build ever-bigger models with diminishing returns at enormous societal costs. We shouldn’t let our fear of the technology’s limitations or fantasy about its future perfection dominate our thinking about the tech.

    Instead, we should all try to understand it as it is, as best we can, and focus on using it safely and effectively. I’m not calling for a halt to research, as some have. I’m simply saying we may gain a lot more at this moment by better understanding the useful thing that we have created than by rushing to turn it into some other thing that we fantasize about but don’t know that we actually need or want in real life.

    Generative AI is incredibly useful right now. And the pace at which we are learning to gain practical benefit from it is lagging further and further behind the features that the tech giants are building as they race for “dominance,” whatever that may mean in this case.

    Learning to love your imperfect synthetic coworker

    Imagine you’re running a tutoring program. Your tutors are students. They are not perfect. They might not know the content as well as the teacher. They might know it very well but be weak as educators. Maybe they’re good at both but forget or misremember essential details, causing them to give the students they tutor the wrong instructions.

    When you hire your human tutors, you have to interview and test them to make sure they are good enough for the tasks you need them to perform. You may test them by pretending to be a challenging student. You’ll probably observe them and coach them. And you may choose to match particular tutors to particular subjects or students. You’d go through similar interviewing, evaluation, job matching, and ongoing supervision and coaching with any worker performing an important job.

    It is not so different when evaluating a generative AI built on LLM transformer technology (which, at the moment, is all of them). You can learn most of what you need to know from an “outside-the-room” evaluation using familiar techniques. The “inside-the-room” knowledge helps you ground yourself when you hear the hype or see the technology do remarkable things. This inside/outside duality is a major component of my AI Learning Design Workshop (ALDA) design/build exercise, in which participating teams will explore the technology and hone their intuitions through a practical, hands-on project. The best way to learn how to manage student tutors is by managing student tutors.
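    To make that concrete, here is a minimal sketch of what an “outside-the-room” evaluation might look like in code. Everything in it is a made-up illustration of the interview-and-test approach described above: ask_model is a hypothetical stand-in for whatever model or API you actually use, and the prompts and keyword checks are placeholders for a real rubric.

    ```python
    # A toy evaluation harness: treat the model like a tutor candidate and score
    # its answers against simple checks. All names and test cases are illustrative.

    def ask_model(prompt: str) -> str:
        # Placeholder: swap in a real call to your model of choice. A canned
        # answer keeps the sketch runnable end to end.
        return ("The mistake is adding the denominators; with a common "
                "denominator, 3/4 + 1/4 equals 1.")

    TEST_CASES = [
        {
            "prompt": "A student says 3/4 + 1/4 = 4/8. Explain the mistake gently.",
            "must_include": ["denominator"],         # should address the real error
            "must_not_include": ["4/8 is correct"],  # must not validate the mistake
        },
        {
            "prompt": "Summarize the causes of World War I in three sentences.",
            "must_include": ["assassination"],
            "must_not_include": [],
        },
    ]

    def evaluate():
        """Run every test prompt and report a pass/fail for each."""
        for case in TEST_CASES:
            answer = ask_model(case["prompt"]).lower()
            passed = (all(k.lower() in answer for k in case["must_include"])
                      and not any(k.lower() in answer for k in case["must_not_include"]))
            print("PASS" if passed else "FAIL", "-", case["prompt"][:45])

    if __name__ == "__main__":
        evaluate()
    ```

    In practice you would replace the canned answer with a real model call, run each prompt many times (the same question does not always get the same answer), and supplement keyword checks with human review, but the workflow is the same one you would use to screen a human tutor.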

    Make no mistake: Generative AI does remarkable things and is getting better. But ultimately, it’s a tool built by humans and has fundamental limitations. Be surprised. Be amazed. Be delighted. But don’t be fooled. The tools we make are as imperfect as their creators. And they are also different from us.

    Source link

  • Artificial Intelligence Sparks the Interest of Federal Policymakers – CUPA-HR

    Artificial Intelligence Sparks the Interest of Federal Policymakers – CUPA-HR

    by CUPA-HR | November 15, 2023

    A growing interest in artificial intelligence and its potential impact on the workforce has sparked action by policymakers at the federal level. As employers increasingly turn to AI to fill workforce gaps, as well as improve hiring and overall job quality, policymakers are seeking federal policies to better understand the use and development of the technology. Recent policies include an executive order from the Biden administration and a Senate committee hearing on AI, both of which are detailed below.

    Executive Order on AI Use and Deployment

    On October 30, the Biden administration released an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order urges responsible AI deployment that satisfies workforce development needs and ethical considerations.

    The executive order directs several agency heads to issue guidance and regulations to address the use and deployment of AI and other technologies in several policy areas. Some orders of particular interest to higher education HR include:

    • The secretary of labor is directed to submit a report analyzing ways agencies can support workers who may be displaced by AI.
    • The secretaries of labor, education and commerce are directed to expand education and training opportunities to provide pathways to careers related to AI.
    • The secretary of labor is ordered to publish principles and best practices for employers to help mitigate harmful impacts and maximize potential benefits of AI as it relates to employees’ well-being.
    • The secretary of labor is directed to issue guidance clarifying that employers using AI to monitor employees’ work are required to comply with protections that ensure workers are compensated for hours worked as defined under the Fair Labor Standards Act.
    • The secretary of labor is directed to publish guidance for federal contractors on nondiscrimination in hiring practices that involve the use of AI and other technology.
    • The director of the National Science Foundation is directed to “prioritize available resources to support AI-related education and AI-related workforce development through existing programs.”
    • The secretary of education is ordered to develop resources and guidance regarding AI, including resources addressing “safe, responsible and nondiscriminatory uses of AI in education.”
    • The secretary of state is ordered to establish a program to “identify and attract top talent in AI and other critical and emerging technologies at universities [and] research institutions” and “to increase connections with that talent to educate them on opportunities and resources for research and employment in the United States.”
    • The secretary of homeland security is directed to continue the department’s rulemaking process to modernize the H-1B program and to consider a rulemaking that would ease the process by which noncitizens who are experts in AI and other emerging technologies adjust to lawful permanent resident status.

    The executive order directs the agency heads to produce their respective guidance and resources within the next year. As these policies and resources begin to roll out, CUPA-HR will keep members updated on any new obligations or requirements related to AI.

    Senate HELP Committee Hearing on AI and the Future of Work

    On October 31, 2023, the Senate Employment and Workplace Safety Subcommittee held a hearing titled “AI and the Future of Work: Moving Forward Together.” The hearing gave policymakers and witnesses the opportunity to discuss the use of AI as a complementary tool for skilling and reskilling American workers and helping them remain valuable in the labor market.

    Democrats and Republicans on the committee agreed that AI has the potential to alter the workforce in positive ways, but that its growth needs to be supported by a regulatory framework that does not smother its potential. According to witnesses, employers using AI currently face a patchwork of state and local laws that complicates the responsible use and growth of AI technologies. They argued that a federal framework addressing the safe, responsible use of AI could help employers avoid such complications and allow AI use to continue to grow.

    Democrats on the committee also asked whether education opportunities and skills-based training on AI can help provide an employment pathway for workers. Witnesses argued that AI education is needed at the elementary and secondary level to ensure future workers are equipped with the skills needed to work with AI, and that skills-based training models to reskill workers have proven successful.

    CUPA-HR will continue to track any developments in federal AI regulations and programs and will inform members of updates.



    Source link

  • Senate Finance Committee Holds Hearing on Paid Leave – CUPA-HR

    Senate Finance Committee Holds Hearing on Paid Leave – CUPA-HR

    by CUPA-HR | November 14, 2023

    On October 25, the Senate Finance Committee held a hearing on federal paid leave. This comes as congressional Democrats and Republicans have shown interest in finding bipartisan consensus for a federal paid leave program. The hearing also provided policymakers and witnesses the opportunity to discuss the promise and drawbacks of paid leave proposals.

    Increasing employee access to paid leave was a primary focus of the hearing. Both sides of the aisle agreed that all workers will, at some point in their careers, need to take leave without having to juggle work obligations at the same time. Policymakers highlighted that 70 percent of Americans want national paid leave and that 72 percent of Americans who are not currently working cite caregiving and family responsibilities as the main reason they are out of the workforce. To address these issues, Democrats argued for a federally mandated paid leave program, while Republicans worried that a one-size-fits-all program could limit employer-provided paid leave options and be difficult to implement at scale.

    Witnesses Describe Potential Benefits of Federal Paid Leave

    Some of the witnesses discussed the benefits of a federal paid leave program, concluding that better access to paid leave would benefit workers, employers and the economy. Jocelyn Frye, president of the National Partnership for Women & Families, stated that offering paid leave tends to benefit both workers and employers through increased labor force participation (both for women and generally), worker retention, and wage growth. Ben Verhoeven, president of Peoria Gardens Inc., added that investing in paid leave gave him a better return than his capital investments, as implementing paid leave boosted business growth, employee retention, and promotions.

    Objection to a One-Size-Fits-All Leave Program

    Despite these benefits, Elizabeth Milito, executive director of the National Federation of Independent Business’s Small Business Legal Center, said that employers would face trade-offs under a federal paid leave program. Milito argued that employers working with the same amount of funds but under new federal benefit requirements would be obliged to provide paid leave, leaving some unable to offer higher compensation or other benefits such as health insurance. Rachel Greszler, senior research fellow at The Heritage Foundation, said that in response to state paid leave programs, some companies choose to send workers to the state program first and then supplement that benefit to provide 100 percent wage replacement. This creates an administrative burden for employees, who receive full wage replacement only if they participate in both programs.

    Republicans and their witnesses also said that a federal program would require flexibility and simplicity to be most effective. Milito and Greszler concurred that most small businesses do not have a qualified HR professional to deal with additional compliance needs. Greszler also stated that the biggest unintended consequence of a one-size-fits-all approach would be a rigid structure that does not work for most employees and businesses. She specified that a carve-out for small businesses or the ability to opt in to a federal program would be most appropriate.

    CUPA-HR continues to monitor for any updates on federal paid leave programs and will keep members apprised of any new developments.



    Source link