Category: Artificial Intelligence

  • HESA’s AI Observatory: What’s new in higher education (December 1, 2024)

    HESA’s AI Observatory: What’s new in higher education (December 1, 2024)

    Good evening,

    In my last AI blog, I wrote about the recent launch of the Canadian AI Safety Institute and other AISIs around the world. I also mentioned that I was looking forward to learning more about what would be discussed at the International Network for AI Safety meeting taking place on November 20th-21st.

    Well, here’s the gist of it. Representatives from Australia, Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US gathered last week in San Francisco to “help drive technical alignment on AI safety research, testing and guidance”. They identified their first four areas of priority:

    • Research: We plan, together with the scientific community, to advance research on risks and capabilities of advanced AI systems as well as to share the most relevant results, as appropriate, from research that advances the science of AI safety.
    • Testing: We plan to work towards building common best practices for testing advanced AI systems. This work may include conducting joint testing exercises and sharing results from domestic evaluations, as appropriate.
    • Guidance: We plan to facilitate shared approaches such as interpreting tests of advanced systems, where appropriate.
    • Inclusion: We plan to actively engage countries, partners, and stakeholders in all regions of the world and at all levels of development by sharing information and technical tools in an accessible and collaborative manner, where appropriate. We hope, through these actions, to increase the capacity for a diverse range of actors to participate in the science and practice of AI safety. Through this Network, we are dedicated to collaborating broadly with partners to ensure that safe, secure, and trustworthy AI benefits all of humanity.

    Cool. I mean, of course these priority areas are all key to the work that needs to be done… But the network does not provide concrete details on how it actually plans to fulfill them. I guess now we’ll just have to wait and see what actually comes out of it all.

    On another note – earlier in the Fall, one of our readers asked us if we had any thoughts about how a win from the Conservatives in the next federal election could impact the future of AI in the country. While I unfortunately do not own a crystal ball, let me share a few preliminary thoughts. 

    In May 2024, the House of Commons released the Report of the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities regarding the Implications of Artificial Intelligence Technologies for the Canadian Labour Force.

    TL;DR, the recommendations of the Standing Committee notably include: to review federal labour legislation to protect diverse workers’ rights and privacy; to collaborate with provinces, territories and labour representatives to develop a framework to support ethical adoption of AI in workplaces; to invest in AI skills training; to offer financial support to SMEs and non-profits for AI adoption; to investigate ways to utilize AI to increase operational efficiency and productivity; and for Statistics Canada to monitor labour market impacts of AI over time.

    Honestly – these are quite respectable recommendations that could lead to significant improvements in AI implementation if they were followed through.

    Going back to the question about the Conservatives, then… The Standing Committee report includes a Dissenting Report from the Conservative Party, which states that the report “does not go sufficiently in depth in how the lack of action concerning these topics [regulations around privacy, the poor state of productivity and innovation and how AI can be used to boost efficiencies, etc.] creates challenges to our ability to manage AI’s impact on the Canadian workforce”. In short, it says do more – without giving any recommendation whatsoever about what that more should be.

    On the other side of the aisle, we know that one of the reasons Bill C-27 is stagnating is opposition from other parties. The Conservatives notably accused the Liberal government of seeking to “censor the Internet” – the Conservatives are opposed to governmental influence (i.e., regulation) on what can or can’t be posted online. But we also know that one significant risk of the rise of AI is the growth of disinformation, deepfakes, and more. So… maybe a certain level of “quality control” or fact-checking would be a good thing?

    All in all, it seems like the Conservatives would in theory support a growing use of AI to fight Canada’s productivity crisis and reduce red tape. In another post earlier this year, Alex discussed what a Poilievre government’s science policy could look like, and we both agree that the Conservatives at least appear to be committed to investing in technology. However, how they would plan to regulate the tech to ensure ethical use remains to be seen. If you have any more thoughts on that, though, I’d love to hear them. Leave a comment or send me a quick email!

    And if you want to continue discussing Canada’s role in the future of AI, make sure to register for HESA’s AI-CADEMY so you do not miss our panel “Canada’s Policy Response to AI,” where we’ll have the pleasure of welcoming Rajan Sawhney, Minister of Advanced Education (Government of Alberta), Mark Schaan, Deputy Secretary to the Cabinet on AI (Government of Canada), and Elissa Strome, Executive Director of the Pan-Canadian AI Strategy (CIFAR), and where we’ll discuss what governments’ role should be in shaping the development of AI.

    Enjoy the rest of your weekend, all!

    – Sandrine Desforges, Research Associate

    sdesforges@higheredstrategy.com 


  • Department of Labor Publishes AI Framework for Hiring Practices

    Department of Labor Publishes AI Framework for Hiring Practices

    by CUPA-HR | October 16, 2024

    On September 24, the Department of Labor (DOL), along with the Partnership on Employment & Accessible Technology (PEAT), published the AI & Inclusive Hiring Framework. The framework is intended to be a tool to support the inclusive use of artificial intelligence in employers’ hiring technology, specifically for job seekers with disabilities.

    According to DOL, the framework was created in support of the Biden administration’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. Issued in October 2023, the executive order directed the Secretary of Labor, along with other federal agency officials, to issue guidance and regulations to address the use and deployment of AI and other technologies in several policy areas. Notably, it also directed DOL to publish principles and best practices for employers to help mitigate harmful impacts and maximize potential benefits of AI as it relates to employees’ well-being.

    The new AI Framework includes 10 focus areas that cover issues affecting the recruitment and hiring of people with disabilities, with information on maximizing the benefits and managing the risks of assessing, acquiring and deploying AI hiring technology.

    The 10 focus areas are:

    1. Identify Employment and Accessibility Legal Requirements
    2. Establish Roles, Responsibilities and Training
    3. Inventory and Classify the Technology
    4. Work with Responsible AI Vendors
    5. Assess Possible Positive and Negative Impacts
    6. Provide Accommodations
    7. Use Explainable AI and Provide Notices
    8. Ensure Effective Human Oversight
    9. Manage Incidents and Appeals
    10. Monitor Regularly

    Under each focus area, DOL and PEAT provide key practices and considerations for employers to implement as they work through the AI framework. It is important to note, however, that the framework does not have the force of law and that employers do not need to implement every practice or goal for every focus area at once. The goal of the framework is to lead employers toward inclusive practices involving AI technology over time.

    DOL encourages HR personnel — along with hiring managers, DEIA practitioners, and others — to familiarize themselves with the framework. CUPA-HR will keep members apprised of any future updates relating to the use of AI in hiring practices and technology.




  • AI in Practice: Using ChatGPT to Create a Training Program

    AI in Practice: Using ChatGPT to Create a Training Program

    by Julie Burrell | September 24, 2024

    Like many HR professionals, Colorado Community College System’s Jennifer Parker was grappling with an increase in incivility on campus. She set about creating a civility training program that would be convenient and interactive. However, she faced a considerable hurdle: creating a virtual training program from scratch, solo. Parker’s creative answer to part of that challenge — writing scripts for her under-10-minute videos — was to put ChatGPT to work for her.

    How did she do it? This excerpt from her article, A Kinder Campus: Building an AI-Powered, Repeatable and Fun Civility Training Program, offers several tips.

    Using ChatGPT for Training and Professional Development

    I love using ChatGPT. It is such a great tool. Let me say that again: it’s such a great tool. I look at ChatGPT as a brainstorming partner. I don’t use it to write my scripts, but I do use it to get me started or to fix what I’ve written. I ask questions that I already know the answer to. I’m not using it for technical guidance in any way.

    What should you consider when you use ChatGPT for scriptwriting and training sessions?

    1. Make ChatGPT an expert. In my prompts, I often use the phrase, “Act like a subject matter expert on [a topic].” This helps define both the need and the audience for the information. If I’m looking for a list of reasons why people are uncivil on college campuses, I might prompt with, “Act like an HR director of a college campus and give me a list of ways employees are acting uncivil in the workplace.” Using the phrase above gives parameters on the types of answers ChatGPT will offer, as well as shaping the perspective of the answers so they are for and about higher ed HR. (See the sketch after this list for a scripted version of this pattern.)
    2. Be specific about what you’re looking for. “I’m creating a training on active listening. This is for employees on a college campus. Create three scenarios in a classroom or office setting of employees acting unkind to each other. Also provide two solutions to those scenarios using active listening. Then, create a list of action steps I can use to teach employees how to actively listen based on these scenarios.” Being as specific as possible can help get you where you want to go. Once I get answers from ChatGPT, I can then decide if I need to change direction, start over or just get more ideas. There is no wrong step. It’s just you and your partner figuring things out.
    3. Sometimes ChatGPT can get stuck in a rut. It will start giving you the same or similar answers no matter how you reword things. My solution is to start a new conversation. I also change the prompt. Don’t be afraid to play around, to ask a million questions, or even tell ChatGPT it’s wrong. I often type something like, “That’s not what I’m looking for. You gave me a list of______, but what I need is ______. Please try again.” This helps the system to reset.
    4. Once I get close to what I want, I paste it all in another document, rewrite, and cite my sources. I use this document as an outline to rewrite it all in my own voice. I make sure it sounds like how I talk and write. This is key. No one wants to listen to ChatGPT’s voice. And I guarantee that people will know if you’re using its voice — it has a very conspicuous style. Once I’ve honed my script, I ensure that I find relevant sources to back the information up and cite the sources at the end of my documents, just in case I need to refer to them.
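    For readers who would rather script this role-plus-task pattern than retype it in the chat window, here is a minimal sketch using OpenAI’s Python SDK. The model name and the example wording are illustrative assumptions, not part of Parker’s workflow; the same structure works in any chatbot.

        # Minimal sketch of tips 1 and 2: set a role, then ask for something specific.
        # Requires the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
        # environment variable; the model name below is an assumption.
        from openai import OpenAI

        client = OpenAI()

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # substitute whatever model your account offers
            messages=[
                # Tip 1: make ChatGPT an expert and define the audience.
                {"role": "system",
                 "content": "Act like an HR director of a college campus."},
                # Tip 2: be specific about the deliverable.
                {"role": "user",
                 "content": "Give me a list of ways employees are acting uncivil in the "
                            "workplace, with two active-listening responses for each."},
            ],
        )

        print(response.choices[0].message.content)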

    What you’ll see here is an example of how I used ChatGPT to help me write the scripts for the micro-session on conflict. It’s an iterative but replicable process. I knew what the session would cover, but I wanted to brainstorm with ChatGPT.

    Once I’ve had multiple conversations with the chatbot, I go back through the entire script and pick out what I want to use. I make sure it’s in my own voice and then I’m ready to record. I also used ChatGPT to help with creating the activities and discussion questions in the rest of the micro-session.

    I know using ChatGPT can feel overwhelming but rest assured that you can’t really make a mistake. (And if you’re worried the machines are going to take over, throw in a “Thank you!” or “You’re awesome!” occasionally for appeasement’s sake.)

    About the author: Jennifer Parker is assistant director of HR operations at the Colorado Community College System.

    More Resources

    • Read Parker’s full article on creating a civility training program with help from AI.
    • Learn more about ChatGPT and other chatbots.
    • Explore CUPA-HR’s Civility in the Workplace Toolkit.




  • DOL Issues Guidance on AI in the Workplace – CUPA-HR

    DOL Issues Guidance on AI in the Workplace – CUPA-HR

    by CUPA-HR | May 8, 2024

    On April 29, the Department of Labor Wage and Hour Division (WHD) issued a Field Assistance Bulletin on “Artificial Intelligence and Automated Systems in the Workplace Under the Fair Labor Standards Act and Other Federal Labor Standards.” The bulletin provides guidance on the applicability of the FLSA and other federal labor standards as they relate to employers’ increased use of artificial intelligence and automated systems in the workplace.

    Background

    In October 2023, President Biden released an Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and directed agencies across the federal government to take action to address the increased use of AI in all areas of life. With respect to AI in the workplace, the order directed the U.S. Secretary of Labor to “issue guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections to ensure that workers are compensated for their hours worked, as defined under the Fair Labor Standards Act (…) and other legal requirements.” The Field Assistance Bulletin is the first response from the DOL to the Executive Order’s directive, though additional guidance may be provided in the future.

    Summary of Guidance

    The bulletin discusses existing employer obligations to comply with and avoid penalties under relevant federal labor laws. It also clarifies that the use of AI and other technologies does not absolve employers of their responsibilities to comply with such laws. CUPA-HR’s government relations team has summarized the key points of the guidance below.

    AI and the FLSA

    The guidance highlights employers’ obligations to pay employees at least the federal minimum wage for all hours worked and at a rate of at least one and one-half times their regular rate of pay for every hour worked in excess of 40 in a single workweek. As such, WHD recognizes that employers have implemented AI and other automated systems to comply with these requirements, including implementing systems to help track work time, monitor break time, assign tasks to available workers, and monitor work locations. Additionally, WHD provides examples of AI and other technologies employers use to help calculate wages owed under the FLSA.

    WHD also recognizes that AI has the potential to undercount hours worked or miscalculate wage rates owed to employees. Regardless of the use of AI, WHD states in its guidance that “employers are responsible for ensuring that they are paying employees for all hours worked” under the FLSA and that “employers are responsible for ensuring that the use of AI or other technologies to calculate and determine workers’ wage rates does not cause workers to be paid in violation of” the FLSA and other applicable federal wage standards. As such, WHD suggests that employers exercise human oversight over the technologies to ensure they are not violating the FLSA.

    AI and the Family and Medical Leave Act

    Similar to WHD’s discussion of employers’ obligations to adhere to the requirements of the FLSA, the bulletin provides guidance on employers’ responsibilities to adhere to the requirements of providing Family and Medical Leave Act leave when using AI and other automated systems. WHD once again recognizes that some employers use AI and other tools to process leave requests, determine whether an employee has provided proper certification that supports the need for FMLA leave, or track the use of FMLA leave. As a result, WHD states that employers should oversee the use of AI or automated systems used to implement FMLA leave “to avoid the risk of widespread violations of FMLA rights when eligibility, certification, and anti-retaliation and anti-interference requirements are not complied with.”

    AI and Nursing Employee Protections

    WHD also provides guidance for employers’ use of AI as it relates to nursing employees’ rights to reasonable break time and space to express breast milk while at work, as protected under the FLSA and the Providing Urgent Maternal Protections for Nursing Mothers Act (PUMP Act). The bulletin states that, though employers may use AI to track employee work hours, set work schedules, and manage break time requests, any instance in which automated systems “limit the length, frequency, or timing of a nursing employee’s breaks to pump would violate the FLSA’s reasonable break time requirement.” The guidance also states that systems that score productivity and/or penalize workers for failing to meet productivity standards due to pump breaks would violate the FLSA. Finally, they clarify that automated systems that require nursing employees to work additional hours to make up for time spent during pump breaks or that reduce the hours scheduled in the future for workers because they took pump breaks would be considered “unlawful retaliation” under the FLSA. WHD therefore provides that “employers are responsible for ensuring that AI or other automated systems do not impose adverse actions on employees for exercising their rights to pump at work.”

    AI and the Employee Polygraph Protection Act

    The bulletin provides an overview of the Employee Polygraph Protection Act (EPPA) and most private employers’ prohibition from using lie detector tests on employees or for pre-employment screenings. In light of this law, WHD recognizes that AI technologies have been developed to “use eye measurements, voice analysis, micro-expressions, or other body movements to suggest if someone is lying or detect deception.” As such, WHD reaffirms that EPPA prohibits covered private employers from using AI technology as a lie detector test.

    AI and Prohibited Retaliation

    Finally, the bulletin covers protections against retaliatory conduct provided under the FLSA and other laws administered by WHD to employees who have filed complaints about potential violations of their rights. As a result of these protections, WHD states that “the use of AI and other technologies by employers to take adverse action against workers for engaging in protected activities under one or more laws enforced by WHD constitutes unlawful retaliation.” Additionally, WHD clarifies that the use of AI to surveil the workforce for protected activity and to take adverse actions could violate anti-retaliation protections under the FLSA and other laws. As such, WHD reminds employers in the guidance that they are responsible for compliance with anti-retaliation provisions regardless of whether they incorporate AI technology into their business practices.

    CUPA-HR will continue to monitor for additional guidance from federal agencies as it relates to the use of AI in the workplace.




  • The Keyword Generator from the University of Louisiana at Lafayette Library

    The Keyword Generator from the University of Louisiana at Lafayette Library

    Research and Writing Tool: The Keyword Generator from the University of Louisiana at Lafayette Library

    The University of Louisiana at Lafayette has an incredible tool for students and faculty writing papers. Sometimes I have a difficult time brainstorming ideas for a particular research topic (I know I cannot be the only person). In the past few weeks, I stumbled upon a database quick-search tool from UL Lafayette, the Keyword Generator.

    This is a ChatGPT prompt and it can help you develop keywords for your topic!

    Basically, you enter your search terms or keywords and the system provides associated terms for you. (A do-it-yourself sketch of this kind of prompt appears after the steps below.)

    Step 1 – I searched for “rural education”

    Step 2 – The keyword generator developed these keywords for me:


    1. Rural schools
    2. Access to education in rural areas
    3. Challenges in rural education
    4. Remote learning in rural communities
    5. Educational equity in rural schools
    6. Technology in rural education
    7. Teacher recruitment and retention in rural areas
    8. School infrastructure in rural communities
    9. Dropout rates in rural schools
    10. Special education in rural areas
    11. Community involvement in rural education
    12. Funding for rural schools
    13. Curriculum development for rural schools
    14. Impact of poverty on rural education
    15. Student achievement in rural schools
    16. Distance education in rural areas
    17. Cultural diversity in rural schools
    18. Professional development for rural educators
    19. Government policies affecting rural education
    20. Best practices in rural education

    Step 3 – After you view your results, you can copy and paste them AND you can utilize ChatGPT without having to log in!
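    The exact prompt behind the UL Lafayette tool isn’t published, but you can sketch a rough do-it-yourself equivalent for any chatbot. The wording below is a hypothetical reconstruction, not the library’s actual prompt.

        # Hypothetical sketch of a keyword-brainstorming prompt. The UL Lafayette
        # tool's real prompt is not published, so this only approximates the idea.
        def keyword_prompt(topic: str, n: int = 20) -> str:
            return (
                f"Act like an academic librarian. For the research topic '{topic}', "
                f"list {n} related keywords and phrases suitable for searching "
                "library databases. Return them as a numbered list."
            )

        print(keyword_prompt("rural education"))  # paste the output into any chatbot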

    What do you think about this tool? I will definitely use it! Thanks University of Louisiana at Lafayette!

    ***

    Check out my book – Retaining College Students Using Technology: A Guidebook for Student Affairs and Academic Affairs Professionals.

    Remember to order copies for your team as well!


    Thanks for visiting! 


    Sincerely,


    Dr. Jennifer T. Edwards
    Professor of Communication

    Executive Director of the Texas Social Media Research Institute & Rural Communication Institute


  • Embracing the Future of HR: Your AI Questions Answered – CUPA-HR

    Embracing the Future of HR: Your AI Questions Answered – CUPA-HR

    by Julie Burrell | April 16, 2024

    In his recent webinar for CUPA-HR, Rahul Thadani, senior executive director of HR information systems at the University of Alabama at Birmingham, answered some of the most frequently raised questions about AI in HR. He also spoke to the most prevalent worries, including concerns about data privacy and whether AI will compete with humans for jobs.

    In addition to covering the basics on AI and how it works, Thadani addressed questions about the risks and rewards of using AI in HR, including:

    • How can AI speed up productivity now?
    • What AI tools should HR be using?
    • How well is AI integrated into enterprise software?
    • What are the risks and downsides of using AI?
    • What role will AI play in the future of HR?

    Thadani also put to rest a common fear about AI: that it will replace human jobs. He believes that HR is too complex, too fundamentally human a role to be automated. AI only simulates human intelligence; it can’t make human decisions. Thadani reminded HR pros, “you all know how complex humans are, how complex decision-making is for humans.” AI can’t understand “the many components that go into hiring somebody,” for example, or how to measure employee engagement.

    AI won’t replace skilled HR professionals, but HR can’t afford to ignore AI. Thadani and other AI leaders stress that HR has a critical role to play in how AI is used on campuses. As the people experts, HR must have a seat at the table in AI discussions, partnering with IT and leadership on decisions such as how employees’ data are used and which AI software to test and purchase.

    Take the First Step

    Most people are just getting started on their AI journey. As a first step for those new to AI, Thadani recommends signing up for a ChatGPT account or another chatbot, like Google’s Gemini. He suggests using your private email account in case you need to sign a privacy agreement that doesn’t align with your institution’s policies. Test out what these chatbots are capable of by using this quick guide to chatbots.

    For leaders and supervisors, Thadani proposes having ongoing conversations within your department, on your campus and with your leadership. Some questions to consider in these conversations: Does your campus have an AI governance council? If so, is HR taking part? Do you have internal AI guidelines in place to protect data and privacy, in your department or for your campus? If not, do you have a plan to develop them? (As a leader in the AI space, the University of Michigan has AI guidelines that provide a good model, and are broken down into staff, faculty and student guidance categories.) Have you identified thought leaders in AI in your office or on your campus who can spur discussions and recommend best practices?

    In HR, “there’s definitely an eagerness to be ready and be ahead of the curve” when it comes to AI, Thadani noted. AI will undoubtedly be central to the future of work, and it’s up to HR to proactively guide how AI can be leveraged in ethical and responsible ways.

    HR-Specific Resources on AI




  • A Game Changing App for Faculty Researchers!

    A Game Changing App for Faculty Researchers!

    Consensus – A Game Changing App for Faculty Researchers

    Today, I started to utilize a new AI app for my research. This app, Consensus, is a game changer for faculty researchers. I wish that I had this app in graduate school – it would have definitely made life easier!

    Step 1 – Here are some screen shots of the software. You can type a question in the box (yes, a question) and the system does the work. Yes, the work that you would usually have to do!

    Step 2 – Then, AI does the rest. You receive AI-powered answers for your results. Consensus analyzes your results (before you even view them) and then summarizes the studies collectively.

    Step 3 – You can view the AI-powered answers which review each article for you.

    *I would encourage you to review each article independently as well.

    Step 4 – View the study snapshots! Yes, a snapshot of the population, sample size, methods, outcomes measured, and more! Absolutely amazing!

    Step 5 – Click the “AI Synthesis” button to synthesize your results. Even better!

    Step 6 – Use the “powerful filters” button. You can view the “best” research results by: a) population, b) sample size, c) study design, d) journal quality, and other variables. 

    I plan to make a video soon, but please take a look at this video to discover exactly how Consensus can help you in your research! 

    ***

    Check out my book – Retaining College Students Using Technology: A Guidebook for Student Affairs and Academic Affairs Professionals.

    Remember to order copies for your team as well!


    Thanks for visiting! 


    Sincerely,


    Dr. Jennifer T. Edwards
    Professor of Communication

    Executive Director of the Texas Social Media Research Institute & Rural Communication Institute


  • Three Essential AI Tools and Practical Tips for Automating HR Tasks – CUPA-HR

    Three Essential AI Tools and Practical Tips for Automating HR Tasks – CUPA-HR

    by Julie Burrell | March 27, 2024

    During his recent keynote at CUPA-HR’s Higher Ed HR Accelerator, Commissioner Keith Sonderling of the Equal Employment Opportunity Commission observed, “now, AI exists in HR in every single stage of employment,” from writing job descriptions, to sourcing candidates and scheduling interviews, and well into the career lifecycle of employees.

    At some colleges and universities, AI is now a routine part of the HR workflow. At the University of North Texas at Dallas, for example, AI has significantly sped up the recruitment and hiring timeline. “It helped me staff a unit in an aggressive time frame,” says Tony Sanchez, chief human resources officer, who stresses that they use AI software with privacy protections. “AI parsed resumes, prescreened applicants, and allowed scheduling directly to the hiring manager’s calendar.”

    Even as AI literacy is becoming a critical skill, many institutions of higher education have not yet adopted AI as a part of their daily operations. But even if you don’t have your own custom AI like The University of Michigan, free AI tools can still be a powerful daily assistant. With some common-sense guardrails in place, AI can help you automate repetitive tasks, make software like Excel easier to use, analyze information and polish your writing.

    Three Free Chatbots to Use Now

    AI development is moving at a breakneck pace, which means that even the freely available tools below are more useful than they were just a few months ago. Try experimenting with multiple AI chatbots by having different browser windows open and asking each chatbot to do the same task. Just don’t pick a favorite yet. With AI companies constantly trying to outperform each other, one might work better depending on the day or the task. And before you start, be sure to read the section on AI guardrails below — you never want to input proprietary or private information into a public chatbot.

    ChatGPT, the AI trailblazer. The free version allows unlimited chats after signing up for an account. Right now, ChatGPT is text-based, which means it can help you with emails and communications, or even draft longer materials like reports. It can also solve math problems and answer questions (but beware of fabricated answers).

    You can customize ChatGPT to make it work better for you by clicking on your username in the bottom left-hand corner. For example, you can tell it that you’re an HR professional working in higher education, and it will tailor its responses to what it knows about your job.

    Google’s powerful AI chatbot, Gemini (formerly known as Bard). You’ll need to have or sign up for a free Google account, and it’s well worth it. Gemini can understand and interact with text just like ChatGPT does, but it’s also multimodal. You can drag and drop images and it will be able to interpret them. Gemini can also make tables, which can be exported to Google Sheets. And it generates images for free. For example, if you have an image you want your marketing team to design, you can get started by asking Gemini to create what you have in mind. But for now, Gemini won’t create images of people.

    Claude, often considered the best AI writer. Take Claude for a spin by asking it to write a job description or memo for you. Be warned that the free version of Claude has a daily usage limit, and you won’t know you’ve hit it until you hit it. According to Claude, your daily limit depends on demand, and your quota resets every morning.

    These free AI tools aren’t as powerful as their paid counterparts — all about $20 per month — but they do offer a sense of what AI can do.

    Practical Tips for Using AI in HR 

    For a recent Higher Ed HR Magazine article, I asked higher education HR professionals how they used AI to increase efficiency. Rhonda Beassie, associate vice president for people and procurement operations at Sam Houston State University, shared that she and her team are using AI for both increased productivity and upskilling, such as:

    • Creating first drafts of and benchmarking job descriptions.
    • Making flyers, announcements and other employee communications.
    • Designing training presentations, including images, text, flow and timing.
    • Training employees for deeper use of common software applications.
    • Providing instructions for developing and troubleshooting macros and VLOOKUP formulas in Microsoft Excel.
    • Troubleshooting software. Beassie noted that employees “can simply say to the AI, ‘I received an error message of X. How do I need to change the script to correct this?’ and options are provided.”
    • Creating reports pulled from their enterprise system.

    AI chatbots are also great at:

    • Being a thought partner. Ask a chatbot to help you respond to a tricky email, to find the flaws in your argument or to point out things you’ve missed in a piece of writing.
    • Revising the tone, formality or length of writing. You can ask chatbots to make something more or less formal or friendly (or whatever tone you’re trying to strike), remove the jargon from a piece of writing, or lengthen or shorten something.
    • Summarizing webpages, articles or book chapters. You can cut and paste a URL into a chatbot and ask it to summarize the page for you. You can also cut and paste a fairly large amount of text into a chatbot and ask for a summary. Try using parameters, such as “Summarize this into one sentence,” or “Please give me a bulleted list of the main takeaways.” The summaries aren’t always perfect, but will usually do in a pinch. (See the sketch after this list.)
    • Summarizing YouTube videos. (Currently, the only free tool that can do this is Gemini.) Just cut and paste in the URL and ask it to summarize a video for you. Likewise, these summaries aren’t always exactly accurate.
    • Writing in your voice. Ask a chatbot to learn your voice and style by entering in things you’ve written. Ask it to compose a communication, like a memo or email you need to write, in your voice. This takes some time up front to train the AI, and it may not remember your voice from day to day or task to task.
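    As one concrete illustration of the summarizing tip above, here is a minimal sketch that sends pasted text to a chatbot with an explicit length instruction. It uses OpenAI’s Python SDK as an example; the model name is an assumption, and Gemini or Claude would work equally well through their own SDKs.

        # Minimal sketch: summarize pasted text with an explicit instruction.
        # Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY;
        # the model name is an assumption.
        from openai import OpenAI

        client = OpenAI()

        def summarize(text: str, instruction: str = "Summarize this into one sentence.") -> str:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
            )
            return response.choices[0].message.content

        article = "Paste the webpage text, article or book chapter here."
        print(summarize(article, "Please give me a bulleted list of the main takeaways."))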

    Practice Your Prompts

    Just 10 minutes a day can take you far in getting comfortable with these tools if you’re new to them. Learning prompting, which may take an upfront investment of more time, can unlock powerful capabilities in AI tools. The more complex the task you ask AI to do, the more time you need to spend crafting a prompt.

    The best prompts will ask a chatbot to assume a role and perform an action, using specific context. For example, “You are a human resources professional at a small, liberal arts college. You are writing a job description for an HR generalist. The position’s responsibilities include leading safety and compliance training; assisting with payroll; conducting background checks; troubleshooting employee questions in person and virtually. The qualifications for the job are one to two years in an HR office, preferably in higher education, and a BA.”
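    One way to make that role-action-context pattern repeatable is to keep it as a small template and fill in the blanks for each task. The sketch below is just one possible way to structure it; the function and field names are illustrative.

        # Illustrative sketch: a reusable role / action / context prompt template.
        def build_prompt(role: str, action: str, context: str) -> str:
            return f"You are {role}. {action} {context}"

        prompt = build_prompt(
            role="a human resources professional at a small, liberal arts college",
            action="Write a job description for an HR generalist.",
            context=(
                "The position's responsibilities include leading safety and compliance "
                "training; assisting with payroll; conducting background checks; and "
                "troubleshooting employee questions in person and virtually. The "
                "qualifications are one to two years in an HR office, preferably in "
                "higher education, and a BA."
            ),
        )
        print(prompt)  # paste into ChatGPT, Gemini or Claude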

    Anthropic has provided a very helpful prompt library for Claude, which will also work with most AI chatbots.

    AI Guardrails

    There are real risks to using AI, especially the free tools listed above. You can read about them in detail here, or even ask AI to tell you, but the major dangers are:

    • Freely available AI will not protect your data privacy. Unless you have internal or enterprise software with a privacy agreement at your institution, assume everything you share with AI is public. Protected or confidential information should not be entered into a prompt.
    • AI fabricates, or hallucinates, as it’s sometimes called. It will make up facts that sound deceptively plausible. If you need accurate information, it’s best to consult an expert or trusted sources.
    • You don’t own copyright on AI-created work. In the United States, only human-produced work can be copyrighted.
    • Most of these tools are trained only up to a certain date, often a year or more ago for free chatbots. If you need up-to-the-minute information, use your favorite web browser.

    Further AI Resources




  • Dr. Jennifer T. Edwards: A Texas Professor Focused on Artificial Intelligence, Health, and Education: Preparing Our Higher Education Institutions for the Future

    Dr. Jennifer T. Edwards: A Texas Professor Focused on Artificial Intelligence, Health, and Education: Preparing Our Higher Education Institutions for the Future

    As we prepare for the upcoming year, I have to stop and think about the future of higher education. The pandemic changed our students, faculty, staff, and our campus as a whole. The Education Advisory Board (EAB) provides colleges and universities across the country with resources and ideas to help the students of the future.

    I confess, I have been a complete fan of EAB and their resources for the past ten years. Their resources are at the forefront of higher education innovation.

    🏛 – Dining Halls and Food Spaces

    🏛 – Modern Student Housing

    🏛 – Hybrid and Flexible Office Spaces

    🏛 – Tech-Enabled Classrooms

    🏛 – Libraries and Learning Commons

    🏛 – Interdisciplinary Research Facilities


    Higher education institutions should focus on their faculty and staff as well. When I ask most of my peers whether they are comfortable with the numerous changes happening across their institutions, most say they are not. We need to prepare our teams for the future of higher education.

    Here are the Millennial Professor’s Call to Action Statements for the Higher Education Industry

    🌎 – Higher Education Conferences and Summits Need to Provide Trainings Focused on Artificial Intelligence (AI) for Their Attendees

    🌎 – Higher Education Institutions Need to Include Faculty and Staff as Part of Their Planning Process (an Important Part)

    🌎 – Higher Education Institutions Need to Provide Wellness and Holistic Support for Faculty and Staff Who Are Having Problems With Change (You Need Us and We Need Help)

    🌎 – Higher Education Institutions Need to Be Comfortable with Uncommon Spaces (Flexible Office Spaces)

    🌎 – Faculty Need to Embrace Collaboration Opportunities with Faculty at Their Institutions and Other Institutions

    Here are some additional articles about the future of higher education:

    Higher education will continue to transition in an effort to meet the needs of our current and incoming students. 

    For our particular university, we are striving to modify all of these items simultaneously. It is a challenge, but the changes are well worth the journey.

    Here’s the challenge for this post: “In your opinion, which one of the items on the list is MOST important for your institution?”

    ***

    Check out my book – Retaining College Students Using Technology: A Guidebook for Student Affairs and Academic Affairs Professionals.

    Remember to order copies for your team as well!


    Thanks for visiting! 


    Sincerely,


    Dr. Jennifer T. Edwards
    Professor of Communication

    Executive Director of the Texas Social Media Research Institute & Rural Communication Institute


  • Artificial Intelligence Sparks the Interest of Federal Policymakers – CUPA-HR

    Artificial Intelligence Sparks the Interest of Federal Policymakers – CUPA-HR

    by CUPA-HR | November 15, 2023

    A growing interest in artificial intelligence and its potential impact on the workforce has sparked action by policymakers at the federal level. As employers increasingly turn to AI to fill workforce gaps, as well as improve hiring and overall job quality, policymakers are seeking federal policies to better understand the use and development of the technology. Recent policies include an executive order from the Biden administration and a Senate committee hearing on AI, both of which are detailed below.

    Executive Order on AI Use and Deployment

    On October 30, the Biden administration released an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order urges responsible AI deployment that addresses workforce development needs and ethical considerations.

    The executive order directs several agency heads to issue guidance and regulations to address the use and deployment of AI and other technologies in several policy areas. Some orders of particular interest to higher education HR include:

    • The secretary of labor is directed to submit a report analyzing ways agencies can support workers who may be displaced by AI.
    • The secretaries of labor, education and commerce are directed to expand education and training opportunities to provide pathways to careers related to AI.
    • The secretary of labor is ordered to publish principles and best practices for employers to help mitigate harmful impacts and maximize potential benefits of AI as it relates to employees’ well-being.
    • The secretary of labor is directed to issue guidance clarifying that employers using AI to monitor employees’ work are required to comply with protections that ensure workers are compensated for hours worked as defined under the Fair Labor Standards Act.
    • The secretary of labor is directed to publish guidance for federal contractors on nondiscrimination in hiring practices that involve the use of AI and other technology.
    • The director of the National Science Foundation is directed to “prioritize available resources to support AI-related education and AI-related workforce development through existing programs.”
    • The secretary of education is ordered to develop resources and guidance regarding AI, including resources addressing “safe, responsible and nondiscriminatory uses of AI in education.”
    • The secretary of state is ordered to establish a program to “identify and attract top talent in AI and other critical and emerging technologies at universities [and] research institutions” and “to increase connections with that talent to educate them on opportunities and resources for research and employment in the United States.”
    • The secretary of homeland security is directed to continue the department’s rulemaking process to modernize the H-1B program and to consider a rulemaking that would ease the process of adjusting noncitizens’ status to lawful permanent resident status if they are experts in AI and other emerging technologies.

    The executive order directs the agency heads to produce their respective guidance and resources within the next year. As these policies and resources begin to roll out, CUPA-HR will keep members updated on any new obligations or requirements related to AI.

    Senate HELP Committee Hearing on AI and the Future of Work

    On October 31, 2023, the Senate Employment and Workplace Safety Subcommittee held a hearing titled “AI and the Future of Work: Moving Forward Together.” The hearing gave policymakers and witnesses the opportunity to discuss the use of AI as a complementary tool in the workforce to skill and reskill American workers and help them remain a valuable asset to the labor market.

    Democrats and Republicans on the committee agreed that AI has the potential to alter the workforce in positive ways, but that the technology’s growth needs to be supported by a regulatory framework that does not smother its potential. According to witnesses, employers using AI currently face a patchwork of state and local laws that complicate the responsible use and growth of AI technologies. They argued that a federal framework to address the safe, responsible use of AI could help employers avoid such complications and allow AI use to continue to grow.

    Democrats on the committee also asked whether education opportunities and skills-based training on AI can help provide an employment pathway for workers. Witnesses argued that AI education is needed at the elementary and secondary level to ensure future workers are equipped with the skills needed to work with AI, and that skills-based training models to reskill workers have proven successful.

    CUPA-HR will continue to track any developments in federal AI regulations and programs and will inform members of updates.


